Maybe you're in the same boat I was in: wanting a CI pipeline that lets you push a change to master and have things “automagically” deploy to production behind the scenes. This may seem intimidating at first; I know it was for me. Docker, Docker Hub, Docker Cloud, and orchestration were all fairly new terms to me when I set out to dockerize my first web application and set up an end-to-end CI pipeline. In this post, I'll show you just how easy it is to dockerize a simple Hello World Python web app and create a full end-to-end CI pipeline using Docker Hub, Docker Cloud, and Digital Ocean. The steps can easily be replicated for other languages/frameworks as well as other hosts like Amazon Web Services or Azure. When we're done, our workflow will look something like the image below.
You’ve probably never built a hello world web application before…*dodges brick thrown by reader*…so let’s build one! In this example I’ll be using Flask, a lightweight web development framework (http://flask.pocoo.org/). We’ll be serving the application using gunicorn, a Python WSGI HTTP server (http://gunicorn.org/). Enough explaining, let’s get to doing!
First, let’s create the project folder and the first python file we’ll be using for this example. Run the following commands in your terminal:
$ mkdir helloworld
$ cd helloworld
Create a new file helloworld.py with the following contents:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'
Create another new file wsgi.py with the following contents:
from helloworld import app

if __name__ == "__main__":
    app.run()
This just created a simple hello world web application. The route() decorator in helloworld.py tells Flask which URL triggers the hello_world() function. Before we can actually run anything, we will need to add our Python dependencies. This is usually done by creating a requirements.txt file, which will be used with pip (a Python package manager) in the next few steps.
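If decorators are new to you, here is a toy sketch (illustrative only, not how Flask actually works internally) of how a route() decorator can map a URL to a handler function:

```python
# Toy sketch of a route() decorator: it registers the decorated
# function in a URL-to-handler table. Illustrative only -- Flask's
# real routing is far more capable.
routes = {}

def route(path):
    def decorator(func):
        routes[path] = func  # remember which function serves this path
        return func
    return decorator

@route('/')
def hello_world():
    return 'Hello, World!'

print(routes['/']())  # → Hello, World!
```

When Flask receives a request for '/', it looks up the registered handler in much the same spirit and returns its result as the HTTP response body.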
Create a new file requirements.txt with the following contents:
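For this example the file only needs the two packages we use. (No versions are pinned here; pin the versions you tested with if you want reproducible builds.)

```
Flask
gunicorn
```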
Now we can install our Python dependencies using pip. It is recommended to use virtualenv at this point, but it isn’t necessary. Check out http://docs.python-guide.org/en/latest/dev/virtualenvs/ if you haven’t worked with virtualenv before. Run the following command in your terminal:
$ pip install -r requirements.txt
Now we can serve our helloworld Flask project using gunicorn. The wsgi:app argument tells gunicorn to look for a callable named app in the wsgi module:
$ gunicorn --bind 0.0.0.0:5000 wsgi:app
If you are using Windows you will receive a “No module named fcntl” message. This is because gunicorn is incompatible with Windows. Not to worry, once we get things dockerized in the next section you will be able to see the working example.
We should now be able to browse to http://localhost:5000 and see “Hello, World!” in the browser. Stop gunicorn by pressing ctrl + c. Before we get started with Docker, let’s first move that last gunicorn command into its own shell script, start-gunicorn.sh:
#!/bin/bash

echo Starting Gunicorn...
exec gunicorn wsgi:app \
    --bind 0.0.0.0:5000 \
    --workers 3
Dockerizing Hello World
Next we’ll be dockerizing our hello world Python web application. This involves three steps that apply to any project we wish to dockerize:
- Creating a Dockerfile
- Building the Docker Image
- Running the Docker Container
The individual steps may vary from project to project. For our hello world example, we’ll be keeping it fairly simple. Let’s get dockerizing?…dockering?…dockered?
Dockerizing Step 1: The Dockerfile
The first step in dockerizing our hello world app is to add a ‘Dockerfile’ to our project. You can think of the Dockerfile as a list of instructions or steps telling docker how to build your image. A finished Dockerfile for our example should look something like this:
FROM python:2-onbuild
COPY ./start-gunicorn.sh /start-gunicorn.sh
RUN chmod 775 /start-gunicorn.sh
EXPOSE 5000
CMD ["/start-gunicorn.sh"]
The first line of a Dockerfile is often a FROM instruction, which tells docker what base image to build on top of. In our case we’ll be using the python:2-onbuild base image. The onbuild variant is built to manage Python applications: during the build it automatically copies your project into the image and installs the dependencies listed in requirements.txt. Awesome right?
Start by creating a file called Dockerfile in your project folder and add the first line:
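Taken from the completed Dockerfile shown above:

```
FROM python:2-onbuild
```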
Next we’ll add another step to copy the start-gunicorn.sh script created earlier into the image and set proper permissions on it. Add the following lines to the Dockerfile:
COPY ./start-gunicorn.sh /start-gunicorn.sh
RUN chmod 775 /start-gunicorn.sh
The gunicorn server will be running on port 5000, therefore we must expose that port on the running container. This can be achieved by adding:
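As shown in the completed Dockerfile:

```
EXPOSE 5000
```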
Finally, we want to run the start-gunicorn.sh script. To do this we add:
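As shown in the completed Dockerfile:

```
CMD ["/start-gunicorn.sh"]
```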
Our Dockerfile is now complete, and we can proceed to building the image.
Dockerizing Step 2: Building the Image
Now that we have our working web application and a Dockerfile, we’re ready to build our docker image. The docker build command will look for the Dockerfile we created earlier, and use its instructions to build an image with all the necessary pieces. Build your hello world image using the following command:
$ docker build -t helloworld .
The -t parameter tags our image so we can refer to it easily later. Don’t miss the “.” in the command above; it tells docker to use the current directory as the build context, which is where it will find our Dockerfile. After running the command you should see a lot of output as docker builds the image. The last line should look something like:
Successfully built --Unique Identifier Here--
Dockerizing Step 3: Running the Container
Now that we have our image built, we can run the container. This can be done by running:
$ docker run -it -p 5000:5000 helloworld
From the docker documentation:
- -i -t (combined here as -it): for interactive processes (like a shell), use -i and -t together to allocate a tty for the container process.
- -p: publishes a container’s port to the host. Here, 5000:5000 maps port 5000 on the host to port 5000 in the container.
Our docker helloworld container is now running. We should once again be able to browse to http://localhost:5000 and see “Hello, World!” in the browser.
The Pipeline Step 1: Initial Setup
At this point you will need to do the following (if you haven’t already):
- Push your helloworld project to a branch on a repository on your GitHub account
- Create a blank helloworld Docker repository (public is fine) at https://hub.docker.com/add/repository. Don’t worry about pushing anything yet; we just need the repository to exist.
- Create a Digital Ocean account at https://cloud.digitalocean.com/registrations/new. Skip this step if you are using another host. For the remainder of this post, I’ll be using Digital Ocean.
Navigate to https://cloud.docker.com/ and link your GitHub and Digital Ocean accounts by clicking Cloud Settings on the main navigation pane. You can find Digital Ocean under Cloud Providers and GitHub under Source Providers. If you wish to use another host, now’s the time to link it.
Bonus: at the time of writing this, Digital Ocean is offering a free $20 credit, which you can earn by clicking on “$20 Code” beside Digital Ocean in the Docker Cloud “Cloud Providers” section. Yay free!
The Pipeline Step 2: Automating the Image Build
Next we’re going to use Docker Cloud to set up an automatic build of our helloworld image on Docker Hub, triggered whenever a git push to master (or any branch you like) occurs.
In Docker Cloud, navigate to Repositories on the main navigation pane, select Docker Hub from the drop down and click on your helloworld repository.
Next click Builds then Configure Automated Builds. Under Build Configurations set the following:
- Source Repository: select your helloworld repository from your GitHub account.
- Build Location: Build on Docker Cloud’s Infrastructure using a: Large (this can be changed later if you like)
- Autotest: Off
Create a new Build Rule and set the following:
- Source Type: Branch
- Source: master (or your branch name)
- Docker Tag: latest
- Dockerfile location: Dockerfile
- Build Context: /
- Autobuild: On
- Build Caching: On
Click Save and Build.
Docker Cloud will now build a new image on Docker Hub for helloworld. Going forward any time you push a change to master (or the branch you provided), a build will be automatically triggered and a new image will be built on Docker Hub.
The Pipeline Step 3: Automating the Deployment
First we’ll need a node to deploy to. For the sake of this post, let’s create one from scratch using Docker Cloud’s Digital Ocean integration. Start by clicking Node Clusters on the main navigation pane and then clicking Create. Set the following:
Labels: leave blank
Provider: Digital Ocean
Region: Toronto 1
Type/Size: 512MB [1 CPUs, 512 MB RAM]
Number of Nodes: 1
Click Launch Node Cluster. If you log into Digital Ocean you should see your new droplet (server) was created automatically by Docker Cloud.
Next we’ll create a service which will allow us to deploy our container to the newly created node. Click Services on the main navigation pane and click Create. Click the My Repositories icon and select your helloworld repository.
Set the following settings:
Service Name: helloworld
Add to Stack: None
Deployment Strategy: High Availability
Deployment Constraints: Select everything on the dropdown
Sequential Deployment: Off
Autoredeploy: On (This is what triggers a re-deployment when a new Docker Hub image is created)
API Roles: None
Use the default settings for this section
Container port: 5000
Node port: 5000 (if you don’t want to have to supply a port in the web application URL, set this to 80)
Use the default settings for this section
Leave the default settings for this section. In our example, we didn’t pass any environment variables to the docker run command when running locally. If we had, this is the section where you would configure them.
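For reference, had we needed one, an environment variable passed with docker run -e could be read in Python like this (GREETING is a hypothetical variable, not something our helloworld app actually uses):

```python
import os

# Hypothetical example: read a GREETING variable that might be passed
# with: docker run -e GREETING=Howdy -p 5000:5000 helloworld
# Falls back to 'Hello' when the variable is not set.
greeting = os.environ.get('GREETING', 'Hello')
print('%s, World!' % greeting)
```

The same os.environ lookup works whether the variable is set by docker run, by a Docker Cloud service definition, or not at all.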
Use the default settings for this section
Click Create and Deploy. Docker Cloud should now deploy/run your container on the node you created earlier. Once it has completed, you should be able to browse to the node’s IP on port 5000 and see “Hello, World!” in the browser.
Congratulations! You’ve dockerized your first application and built an end-to-end continuous integration pipeline for builds and deployments.
Thanks for reading, and please reach out to me if you have any questions!