I have been quite interested in continuous deployment and have been working on a ‘bakery’ process using OpenStack (a cloud platform comparable to AWS).
Traditional deployment usually looks like one of these:
- pull source code from source control directly on the production environment
- create a tar file, scp it to the server, and remotely execute a script to untar it
- create a Debian or RPM package, and then install it remotely
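For concreteness, here is a minimal sketch of the tar-based flow in Python. All paths are made up, and the copy step is simulated locally; in practice the tarball would travel over scp/ssh to the production host:

```python
import tarfile
import tempfile
from pathlib import Path

def make_release_tarball(src_dir: str, out_path: str) -> str:
    """Bundle the application source into a gzipped tarball."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname="app")
    return out_path

def untar_release(tarball: str, dest_dir: str) -> Path:
    """The 'remote script' side: unpack the release on the server."""
    with tarfile.open(tarball, "r:gz") as tar:
        tar.extractall(dest_dir)
    return Path(dest_dir) / "app"

# Local simulation of the tar -> scp -> untar flow.
work = Path(tempfile.mkdtemp())
src = work / "src"
src.mkdir()
(src / "main.py").write_text("print('hello')\n")

tarball = make_release_tarball(str(src), str(work / "release.tar.gz"))
# ... scp release.tar.gz to the production host would happen here ...
deployed = untar_release(tarball, str(work / "prod"))
print((deployed / "main.py").exists())
```

Note what the sketch makes obvious: only the application code moves. Nothing guarantees that the OS packages and config on the target host match the ones the code was tested against.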
These approaches work, but their main problems are keeping multiple environments consistent and slow deployment. Often, developers and ops find the application broken after pushing to staging or production, only to discover later that some packages were missing. I have run into this situation many times.
What can be done about the consistency issue?
Well, instead of pushing code to servers, we can build virtual machine images on OpenStack/AWS, or container images such as Docker images, and deploy those, minimizing human error as much as possible. With that approach, the worst that can happen is that some environment-specific configuration still points at development. Every package, the application code, and the config files move together from one environment (dev) to another (stage or production). See? We are getting consistency already.
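One way to see the consistency win is to model an image as a frozen artifact that carries everything except environment-specific config. This is only an illustrative sketch (the `Image`/`Instance` names are mine, not any cloud API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Image:
    """A baked image: OS packages and app code travel together, immutably."""
    packages: tuple       # e.g. ("python3.9", "nginx-1.18")
    app_version: str

@dataclass
class Instance:
    image: Image
    env: str              # the ONLY thing that varies between environments

def spin_up(image: Image, env: str) -> Instance:
    """Deployment = instantiating the same frozen image in a given env."""
    return Instance(image=image, env=env)

baked = Image(packages=("python3.9", "nginx-1.18"), app_version="2.4.1")
stage = spin_up(baked, env="stage")
prod = spin_up(baked, env="production")

# Same image everywhere -> identical packages and code by construction.
assert stage.image == prod.image
```

Because the image is frozen, a missing package in production can no longer differ from staging; any breakage must come from the one thing that does vary, the environment config.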
What are the cool things about having such a system?
- Faster deployment
- Rollback is super easy because the entire OS, including the application code and dependent packages, is already embedded in the virtual machine image. Just create a new instance from the previous image and put it in service. BAM! You have the old version of the application up and running
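Rollback then reduces to booting from the previous image. A hedged sketch of that bookkeeping (the image names are hypothetical; a real system would track OpenStack/AWS image IDs and call the cloud API to boot):

```python
# Hypothetical image registry: newest image last.
image_history = ["app-v1", "app-v2", "app-v3"]

def rollback(history: list) -> str:
    """Retire the newest image and return the previous one to boot from."""
    if len(history) < 2:
        raise RuntimeError("no previous image to roll back to")
    bad = history.pop()          # take the broken release out of rotation
    previous = history[-1]
    print(f"rolling back from {bad} to {previous}")
    return previous

image_to_boot = rollback(image_history)
```

No packages to reinstall, no code to re-sync: the old image already contains a known-good combination of OS, dependencies, and application.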
So what are the steps to make that happen?
To be frank, it is not simple to implement such a system. The foundation is well-established continuous integration. On top of that, automated functional/integration/regression tests are needed as well (technically they are optional, but they will make things smooth in the long run).
At a high level, these are the steps:
- New code gets checked in
- The check-in triggers a CI job: compilation, unit tests, and packaging
- When everything looks good, a new virtual machine or container image (OpenStack/AWS/Docker) with the latest application package baked in gets created; this is the bakery process
- A new instance gets spun up from the newly baked image
- Functional/integration tests run against the new instance
- When the tests are all green, the new image gets promoted to production-ready and waits for the developer’s final OK
- The developer gives the OK; a new instance gets spun up and placed behind the production load balancer, where it starts taking live traffic
- At that point, the old and new instances are both up and running. The new instance is expected to be completely backward compatible
- The developer kills the old instance once the new instance shows no issues
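The steps above can be sketched as one tiny pipeline. Everything here is illustrative (the function names are mine); in a real setup each stage is performed by your CI server and cloud APIs, not by Python:

```python
def pipeline(commit: str) -> list:
    """Walk a commit through the bake/test/promote/deploy flow and
    return the log of completed stages; any failure aborts the run."""
    log = []

    def stage(name: str, ok: bool = True):
        if not ok:
            raise RuntimeError(f"stage failed: {name}")
        log.append(name)

    stage(f"check in {commit}")
    stage("CI: compile, unit test, package")
    stage("bakery: build image with latest package baked in")
    stage("spin up instance from new image")
    stage("run functional/integration tests")
    stage("promote image to production-ready")
    stage("developer OK; instance behind prod load balancer")
    stage("old + new instances running side by side")
    stage("kill old instance")
    return log

log = pipeline("abc123")
print(len(log))  # 9 stages, mirroring the list above
```

The key property is that a failure at any stage stops the run before anything reaches production, and every artifact that does reach production is the exact image that passed the tests.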
In the next post, I will write about the first three steps in more detail.
UPDATE: The Netflix tech blog posted pretty much the same thing today (Mar 9th, 2016) – http://techblog.netflix.com/2016/03/how-we-build-code-at-netflix.html