How IronSource Scales Node.js with Docker to Support Millions of Daily Users

[GUEST POST] ironSource is the world's leading platform for software discovery, distribution, delivery, and monetization. The solution consists of four cores – installCore, mobileCore, displayCore, and mediaCore – that connect software developers and users across platforms and devices.

The infrastructure department’s responsibilities include handling data received from all of our platforms for our own data analysis and billing. On any given day, we receive tens or even hundreds of millions of events from desktop computers, mobile phones, and web browsers. Our system comprises hundreds of servers supporting more than 5,000 concurrent connections.

In this article I will describe how we at ironSource scale Node.js with Docker – how we automatically build, deploy, and run a Node.js application inside a Docker container in production.

IronSource Application Stack

We use the AWS cloud extensively and really enjoy it. Our cloud setup is pretty much “by the book” (our AWS advisor is quite happy): an Elastic Load Balancer (ELB) receives all the traffic and sends it to a bunch of stock Ubuntu machines, and naturally we use an Auto Scaling group to scale the size of our cluster automatically.

We use a micro-services architecture in which many separate applications communicate with each other. We try to restrict every service to doing only one thing. Our main development language is Node.js running on Linux. The services communicate with one another via a queue (RabbitMQ), a stream (Kinesis), or HTTP REST API calls.
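
To give a flavor of what this looks like, here is a rough sketch (not taken from our codebase) of a Node.js service publishing an event to RabbitMQ with the amqplib package; the queue name and payload are just examples:

 // Illustrative sketch only: publish one event to a RabbitMQ queue via amqplib.
 var amqp = require('amqplib');

 amqp.connect('amqp://localhost')
   .then(function (conn) {
     return conn.createChannel().then(function (ch) {
       var queue = 'events'; // hypothetical queue name
       return ch.assertQueue(queue, { durable: true })
         .then(function () {
           // Send a single JSON event to the queue
           ch.sendToQueue(queue, new Buffer(JSON.stringify({ type: 'install', ts: Date.now() })));
           return ch.close();
         })
         .then(function () { return conn.close(); });
     });
   })
   .catch(console.error);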

We run short development cycles and practice continuous integration and delivery. Each time a developer initiates a merge to our staging branch, the Jenkins CI server detects the change and starts the build process, which is followed by automation tests. If the build and tests pass, the code is merged into the master branch and pushed to our production environment.
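
To make this concrete, a Docker-based Jenkins build step could look roughly like the following shell sketch; the image name, tag, and test command are assumptions rather than our exact job configuration:

 # Hypothetical Jenkins shell step: build the image, run the test suite inside it,
 # and push only if the tests pass (Jenkins aborts the step on a non-zero exit code).
 docker build -t user/container:$GIT_COMMIT .
 docker run --rm user/container:$GIT_COMMIT npm test
 docker push user/container:$GIT_COMMIT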

So Why Docker?

Currently, we use a pre-“baked” image (AMI) with the requirements for running the application pre-installed, plus a bootstrap script that downloads the latest code from our private npm (Node.js Package Manager) repository and runs it. We wanted to move to a fully automated provisioning cycle that would allow us to dynamically adjust the image for the application. Naturally, two candidates came to mind: Chef and Docker. Both allow infrastructure to be run as code, but Chef pays the provisioning cost for every new instance that boots up. This happens quite often, because we dynamically scale the number of running servers according to changing demand. Docker pays the provisioning cost only once, during the build process. We also like the idea of containers running isolated via the Linux kernel (soon Windows?). So, with the latest hype and mass adoption of the technology, we decided to give Docker a try.
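
To illustrate the point about build-time provisioning, here is a rough Dockerfile sketch; the real configs are in the GitHub repo linked below, and the paths and package names here are assumptions:

 # Hypothetical Dockerfile sketch: all provisioning happens once, at image build time,
 # so new instances only need to pull and run the image.
 FROM ubuntu:14.04
 RUN apt-get update && \
     apt-get install -y nginx supervisor nodejs npm
 COPY . /opt/app
 RUN cd /opt/app && npm install --production
 EXPOSE 80
 # Supervisor (described in the next section) is the container's entry point
 CMD ["/usr/bin/supervisord", "-n"]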

Our Container

The Docker container runs a Node.js application behind an Nginx reverse proxy. We use a process control system named Supervisor that runs our application within the container and acts as a watchdog in case the application crashes. Every Docker container has an entry point: the process to run when the container starts. Supervisor is the entry point of our container, and both Nginx and the Node.js application are started by Supervisor.
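
A supervisord.conf along these lines illustrates the idea; the program names, application path, and PORT environment variable are assumptions for the sketch, not our exact configuration:

 ; Hypothetical supervisord.conf sketch: Supervisor runs in the foreground as the
 ; container entry point and keeps Nginx and the Node.js processes alive.
 [supervisord]
 nodaemon=true

 [program:nginx]
 command=/usr/sbin/nginx -g "daemon off;"
 autorestart=true

 [program:node-app-0]
 command=node /opt/app/server.js
 environment=PORT="8000"
 autorestart=true

 [program:node-app-1]
 command=node /opt/app/server.js
 environment=PORT="8001"
 autorestart=true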

We use the c3.large instance type for most of our applications. Since this instance type has two CPU cores, we run two Node.js processes, listening on ports 8000 and 8001. Nginx listens on port 80 and acts as a reverse proxy, redirecting traffic to the Node.js application instances on ports 8000-8001. We use Ubuntu 14.04 for both the host and the container.
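
A minimal Nginx site configuration for this setup might look roughly like the following sketch (illustrative rather than our exact config; the upstream name is arbitrary):

 # Hypothetical Nginx reverse-proxy sketch: listen on port 80 and balance
 # requests across the two Node.js processes on ports 8000 and 8001.
 upstream node_app {
     server 127.0.0.1:8000;
     server 127.0.0.1:8001;
 }

 server {
     listen 80;

     location / {
         proxy_pass http://node_app;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
     }
 }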

The Magic: Amazon AMI bootstrap configuration

The bootstrap (user-data) process is pretty straightforward. We boot a stock Ubuntu (14.04) image, then perform distro/security upgrades, install the Docker client, log in to the Docker repository, pull our image, and launch our container exposing port 80. So simple!

 #AMI - c3.large - ubuntu 14.04 - user-data:
 #!/bin/bash
 sudo apt-get update
 sudo apt-get -y dist-upgrade
 sudo apt-get -y install docker.io
 sudo docker login -u <user> -p <password> -e <email>
 sudo docker pull user/container
 sudo docker run -p 80:80 -d user/container

See our GitHub repository for the technical spec and configs – https://github.com/ironSource/docker-config

Final Notes

Linux containers and Docker allow us to fully leverage our micro-services architecture. We no longer have to worry about version differences between development and production, and we can build for load and scale from day one. Docker has truly super-charged our development cycles.

Future developments

Further research I am planning includes using a lightweight Linux OS for both the container and the host (CoreOS?). Monitoring of the containers is still foggy for me; I am considering installing a New Relic agent in each container. I am also planning to test Amazon's new container management service (ECS) to control the fleet load, and I would like to test this setup on Microsoft Azure's platform as well.

We are recruiting! https://www.ironsrc.com/careers/


About the Author

Shimon Tolts

I am an infrastructure geek focused on software development, mainly back-end systems at high load and scale, building elastic cloud-based distributed systems. Currently, I am the R&D Manager of the Infrastructure Department @ ironSource.
