How Syncano Used Docker to Simplify Their Development Process
A few months ago, we started using Docker with Syncano and pretty much fell in love. The first blog posts we wrote (Reasons Why We Use Docker, Getting Started with Docker and Make Your Docker Workflow Awesome With Fig.sh) gave you some reasons and tools to get started with Docker - now we're going to share how we use Docker ourselves (and why we can't live without it).
Docker greatly simplified our development process. Syncano is mostly written in Python, and setting up a working development environment used to take a few hours. Now, onboarding a new developer takes just a few minutes on Linux and about half an hour on OS X. All they need to do is:
- Install Docker
- Install fig
- Clone our syncano-platform repo
- Type into shell:
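Assuming the repository ships a fig.yml at its root (and that the clone URL below, which is our guess at the GitHub path, is right), the whole setup boils down to:

```shell
# clone the platform repo (URL assumed for illustration)
git clone https://github.com/Syncano/syncano-platform.git
cd syncano-platform

# build all images and start every linked container defined in fig.yml
fig up
```

From here, fig builds the images on first run and starts the whole linked stack with a single command.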
Our stack is quite big and consists of five main components. Each component runs in a separate container, and the containers are wired together with Docker links, which connect them over the network and pass connection details through environment variables.
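A fig.yml along these lines is one way to express such a setup - the service names, images, and ports here are illustrative, not our exact configuration:

```yaml
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - postgresql
    - redis
    - rabbitmq
worker:
  build: .
  command: celery worker
  links:
    - postgresql
    - redis
    - rabbitmq
postgresql:
  image: postgres
redis:
  image: redis
rabbitmq:
  image: rabbitmq
```

Each `links` entry makes the target container reachable over the network and injects its address and port as environment variables in the linking container.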
Our development setup, built from Docker containers, closely mirrors our production setup. This gives us confidence that the application will run the same way in both environments. You can read more about why your development setup should be similar to your production setup here.
Testing and continuous integration
To make sure that bugs are caught early and code is maintainable, we test everything we write, and every branch in our git repository is tested on our Continuous Integration server, CircleCI. You can read what Continuous Integration is and why people use it here.
Our Continuous Integration workflow is as follows:
- Build the container
- Run the tests in the container
- Collect metrics such as test coverage
- If the tests pass, tag the image and push it to the Docker registry
- If the tested git branch is on the list of branches to deploy in the CircleCI configuration, deploy the new version of Syncano using the Docker image pushed in step 4
The first two steps are executed with the help of fig:
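In a sketch, assuming a service named `web` and a Django-style test entry point (both assumptions, not our exact commands), those two steps look like:

```shell
# step 1: build images for every service defined in fig.yml
fig build

# step 2: run the test suite inside the freshly built container
fig run web python manage.py test
```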
You can read more about using Fig here, or about setting up Docker with CircleCI on their documentation page. It's very easy!
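Steps 4 and 5 then amount to tagging and pushing the tested image - the registry address and image name below are placeholders, and `CIRCLE_SHA1` is the commit SHA that CircleCI exposes to builds:

```shell
# tag the image that just passed the test suite with the current git SHA
docker tag syncano-platform registry.example.com/syncano-platform:$CIRCLE_SHA1

# push it so deployment pulls the exact bits that passed CI
docker push registry.example.com/syncano-platform:$CIRCLE_SHA1
```

Tagging with the commit SHA means every deployed version maps back to a specific tested commit.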
We deploy exactly what we tested on our CI using the same image. For deployment, we use AWS Elastic Beanstalk.
Elastic Beanstalk can run several kinds of application containers, including Docker containers. It currently supports Docker 1.0 and 1.2.
The best features of Elastic Beanstalk are:
- Easy autoscaling
- Application version control
- Good, predefined settings - you can utilize the power of AWS with very simple configurations
You can read a tutorial on deploying Docker containers with Elastic Beanstalk here.
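Pointing Elastic Beanstalk at a pre-built image takes only a small Dockerrun.aws.json in the application bundle - the image name and port below are placeholders:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "registry.example.com/syncano-platform:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8000" }
  ]
}
```

With `"Update": "true"`, Elastic Beanstalk pulls the latest image from the registry on each deployment, so what runs is exactly what CI pushed.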
However, we aren't 100 percent happy with our deployment setup. Elastic Beanstalk underutilizes Docker - it usually spawns only one Docker container per machine, which is a complete waste! It also doesn't really support inter-container communication or failure detection beyond simple health checks. And it has occasional stability issues - for example, we've run into already-allocated ports and Docker not running on the host machine.
We're currently working on our CodeBox feature, which lets you execute custom code on our platform. CodeBox can completely eliminate the need to implement a custom backend alongside our API. It's in alpha now, but some of our customers already use it every day.
Each time a client "runs" a CodeBox, a Docker container is created, the code is executed inside it, and the container is destroyed as soon as execution finishes. We have some predefined code platforms - currently we support Node.js and Python. Docker containers are a great fit for this use case: they are lightweight and give us isolation and control over the resources (memory, CPU cores) available to the executed code.
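As a sketch of that lifecycle - the image tag, resource limits, and script path are illustrative, not our production values:

```shell
# start a throwaway container: memory-capped, cut off from the network,
# with the user's code mounted read-only, and removed as soon as it exits
docker run --rm \
    -m 128m \
    --net=none \
    -v /tmp/codebox:/src:ro \
    node:0.10 node /src/script.js
```

The `--rm` flag gives us the create-run-destroy cycle in one command, while `-m` and `--net=none` keep untrusted code from hogging memory or reaching the network.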
Using Docker in deployment gives us great confidence that we'll deploy the same app that we test. Thanks to ready-to-run Docker images, auto-scaling is fast and reliable.
Now that we've started using Docker, it's hard to imagine the Syncano platform without it!