A lot of Docker-related articles focus on the merits of using containers for local development. However, the question of how to move from local development to production is often left unanswered, or glossed over at best. At the other extreme we find a tremendous focus on container orchestration frameworks, Kubernetes being the prime example. An orchestrator brings many advantages when it comes to the deployment, scaling and monitoring of containers in production, but also introduces a great deal of complexity and setup.

This article highlights a simple build process for the continuous deployment of single-container applications, including the option to (automatically) roll back to a previous known-good state when needed. The instructions should be universally applicable, independent of the build server you might use.

Build Stage

The first step of the process is the build stage. The result of this stage will be a new version of the application packaged as a Docker image, uploaded to and stored in an image registry. The latter could be hosted by Docker Hub (used in this article), or a more general artifact manager such as JFrog's Artifactory or Sonatype's Nexus.

In line with good CI/CD practice, we would like to have a trace from what has been deployed in production all the way back to the commit that initially triggered the update. We should therefore ensure that (1) each build results in a distinct image and (2) a link exists between the image and our build system. To realize (2), we can include the build number in the image tag (for example: build-1, build-2...). Our build server will subsequently provide the link between the build and the code changes, as illustrated below.

Link each image with its build using the image tag.

Given these requirements, the build stage should be triggered by each commit and result in the execution of the following four commands.

(1) Build the image from the Dockerfile and apply a local tag (i.e. blogpost-app):

$ docker build -t blogpost-app .

(2) Re-tag the image with a reference to the remote repository (i.e. windtunneltechnologies/blogpost-app) and the build number (build-1):

$ docker tag blogpost-app \
	windtunneltechnologies/blogpost-app:build-1

At least the tag suffix will need to be dynamic in this command. All build servers allow injecting the current build number, normally through an environment variable. When using Bamboo, for example, this command becomes:

$ docker tag blogpost-app \
	windtunneltechnologies/blogpost-app:build-${bamboo.buildNumber}
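Other build servers work the same way; Jenkins, for instance, exposes the build number through the BUILD_NUMBER environment variable:

$ docker tag blogpost-app \
	windtunneltechnologies/blogpost-app:build-${BUILD_NUMBER}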

(3) Log in to the registry with a user that has sufficient permissions to push the image:

$ docker login \
    --username "${docker_username}" \
    --password "${docker_password}"
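Beware that --password exposes the credential in shell history and possibly in build logs. If your build server supports piping secrets, the --password-stdin variant of docker login is the safer choice:

$ echo "${docker_password}" | docker login \
    --username "${docker_username}" \
    --password-stdin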

(4) Push the image to the remote registry. Depending on the Docker version, pushing without a tag either pushes all tags or only latest, so it is safest to name the build tag explicitly:

$ docker push windtunneltechnologies/blogpost-app:build-1
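Putting the build stage together, a minimal sketch of the complete script could look as follows, with BUILD_NUMBER standing in for whatever variable your build server provides:

# Build stage: build, tag and push one distinct image per build.
$ docker build -t blogpost-app .
$ docker tag blogpost-app \
	windtunneltechnologies/blogpost-app:build-${BUILD_NUMBER}
$ echo "${docker_password}" | docker login \
    --username "${docker_username}" --password-stdin
$ docker push \
	windtunneltechnologies/blogpost-app:build-${BUILD_NUMBER}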

After triggering a number of builds, the image repository should look similar to:

Before moving on to the second stage, deployment, let's revisit some Docker concepts relevant for what follows.

Manifests and Tags

A Docker image is defined by its Manifest, which in turn points to the different layers that make up the image filesystem. A Docker image registry can be seen as nothing more than a bucket for storing and serving these Manifest and layer files. To drive this point home: we can use curl (or similar) to pull the Manifest file straight from the repository for the image created in the previous section.

The following instructions apply to the Docker Hub registry, but the same principle applies to all registries. Docker Hub requires the use of an intermediate JWT token for authorization, which makes this a (more involved) two-step process. Other registries may use basic authentication directly.

(1) Fetch a JWT token based on your Docker Hub account, scoped to the desired repository (windtunneltechnologies/blogpost-app in our example). The following command assumes jq has been installed so the token value can easily be extracted from the response:

$ TOKEN=$(curl -s --user "$docker_username:$docker_password" \
	"https://auth.docker.io/token?service=registry.docker.io&scope=repository:windtunneltechnologies/blogpost-app:pull,push" | jq -r '.token')

(2) Use the obtained token to fetch the Manifest file:

curl "https://registry-1.docker.io/v2/windtunneltechnologies/blogpost-app/manifests/build-1" \
    -H "Authorization:Bearer $TOKEN" \
    -H 'accept: application/vnd.docker.distribution.manifest.v2+json' \
    > "manifest-build-1.json"

The Manifest will now be stored locally in manifest-build-1.json:

$ cat manifest-build-1.json
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 1567,
      "digest": "sha256:1f06...e9de0d"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2802957,
         "digest": "sha256:c9b1...0f71fc9"
      }
   ]
}

Looking at this Manifest, we see a 'digest' field in the 'config' section with value sha256:1f06...e9de0d. This is known as the image id and it uniquely identifies the image in terms of its layers. Because working with a SHA256 hash quickly becomes unwieldy, Docker allows defining aliases for the image id in the form of tags. It is important to understand that the relation between tags and image ids (and thus Manifests) is not one-to-one, but rather many-to-one. Each image id might be pointed at by multiple tags, as illustrated below.

Tags 'build-2' and 'production' both point to the same image id.
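You can see this relation directly in the manifest downloaded earlier: the image id is simply the config digest, which jq extracts in one line:

$ jq -r '.config.digest' manifest-build-1.json
sha256:1f06...e9de0d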

We are now in a position to discuss the deployment stage.

Deployment Stage

A typical application deployment pipeline might involve several (isolated) environments, e.g. 'develop', 'acceptance' and 'production'. Deployment to the lowest environment could happen automatically as the result of each build, while deployment to higher environments can be triggered manually. To support multiple environments, we can introduce additional, stable tags in the image repository, named after each environment we would like to deploy to. A deployment to a given environment then boils down to the following:

  • Adjust the environment tag so that it points to the desired image id. As there is a one-to-one relationship between image ids and build tags (discussed in the build stage), this boils down to pointing the environment tag at the desired build number, and thus application version.
  • Instruct the environment to (re-)pull the image associated with its stable tag, and recreate the container.

The principle is illustrated below. The diagram depicts the situation where build-1 has been deployed in production, build-2 in acceptance and build-n in develop. There is no environment associated with build-3, so this application version is (currently) not deployed.

Suppose we would like to bring production on par with acceptance, i.e. deploy build-2 to production. The diagram would change as follows:

Given this example, the deployment stage should execute the following commands.

(1) Download the Manifest file of the build we would like to deploy (tag: build-2). The commands were shown in the previous section and will not be repeated in full here. The only required (dynamic) input for this step is the build number. Most build servers allow you to input or override the value of a variable at the start of the build (manual deploy) or to fetch it from a previous stage (automatic deploy).
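As a sketch, with the build tag supplied through a variable (hard-coded to build-2 here), the download from the previous section becomes:

$ BUILD_TAG="build-2"
$ curl -s "https://registry-1.docker.io/v2/windtunneltechnologies/blogpost-app/manifests/${BUILD_TAG}" \
    -H "Authorization: Bearer $TOKEN" \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    > "manifest-${BUILD_TAG}.json"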

(2) Re-upload this Manifest, but under the tag of the environment we would like to deploy to (production). The only required (dynamic) input for this step is the environment name. Use --data-binary so curl uploads the file byte-for-byte, keeping the manifest digest identical to that of the build-2 manifest:

$ curl -f -X PUT \
     "https://registry-1.docker.io/v2/windtunneltechnologies/blogpost-app/manifests/production" \
     -H "Authorization: Bearer $TOKEN" \
     -H 'Content-Type: application/vnd.docker.distribution.manifest.v2+json' \
     --data-binary "@manifest-build-2.json"

(3) Pull the new version for the (production) image tag on the corresponding environment. How this is done might vary considerably based on the environment where the application is running, but will in general entail SSH'ing into the remote machine and executing the following commands:

$ docker login \
    --username "${docker_username}" \
    --password "${docker_password}"
$ docker pull windtunneltechnologies/blogpost-app:production

(4) Recreate the application container on the environment. Note that docker restart alone is not enough: it restarts the existing container with its old image. The container must be recreated from the newly pulled image (adding whatever ports, volumes and environment variables your application needs):

$ docker stop blogpost-app
$ docker rm blogpost-app
$ docker run -d --name blogpost-app \
    windtunneltechnologies/blogpost-app:production

Rolling Back

In case the application version deployed to an environment turns out to be problematic, rolling back is as easy as rolling forward. The process stays exactly as described; the only input needed to roll back is again the desired build number and the environment to deploy to. Note that the roll-back can even be automated if the deployment process has the means to assess application health post-deploy.
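As a minimal sketch of such an automated roll-back, assume the application exposes a (hypothetical) /health endpoint and that a (hypothetical) deploy.sh wraps the four deployment commands above, taking the build tag and environment as arguments:

# Hypothetical post-deploy check: redeploy the previous
# known-good build when the new version fails to come up.
$ ./deploy.sh build-2 production
$ sleep 30
$ curl -sf https://app.example.com/health \
    || ./deploy.sh build-1 production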

Conclusion

The advanced capabilities of container orchestration frameworks can be overkill for a number of applications: prototypes, internal tools, low-traffic apps... such applications often get by with single-container deployments for a long time. This article showed how they can be continuously deployed using a few simple commands, while still reaping a number of the benefits orchestration frameworks provide.