September 09, 2020
Reducing Docker images' size
Oleg Dashevskii
Updated on September 30, 2020
At Plaid, all 80+ internal services run as Docker containers. Containers are deployed; they are built and spawned locally during CI runs; and developers use them during day-to-day work. While developing service X that depends on services Y and Z, an engineer will typically have Y and Z running as containers inside local Docker, even if X itself runs on the host machine.
This approach involves a lot of building, pulling, pushing, and cleaning up of container images. That’s why image size comes into play. For a CI/CD machine running in the cloud, there’s little difference between 50 MB, 500 MB, and 2000 MB images; most operations feel instantaneous. A high-speed office network also blurs the difference between megabytes and gigabytes. However, if you work from home (and thanks to COVID-19, all Plaid engineers do now), downloading a huge image through a corporate VPN on a flaky, overloaded connection may take an extra five to ten minutes. For obvious reasons, this is not desirable; imagine how frustrating it is to wait when you’re trying to ship an urgent fix!
We at Plaid have adopted a simple equation for Docker image sizes: MINIMAL = FAST. That alone is enough motivation to start reducing image sizes.
The Size Formula
The math is simple:
Image = Base Image + Essential Files + Cruft (a.k.a. random, unneeded files)
Therefore, there are three strategies to make an image smaller:
1. Use a thinner base image
For example, the ubuntu:18.04 image is 64.2 MB. alpine:3.11 is 5.6 MB, and the distroless footprint is close to zero. For custom base images, we need to apply the formula recursively.
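You can verify such numbers locally by pulling an image and asking Docker for its size; for example:

```
$ docker pull alpine:3.11
$ docker images alpine:3.11 --format '{{.Size}}'
5.6MB
```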
2. Make essential files smaller in size and count
Essential files cannot be removed, but we can often make them smaller or find leaner ways to achieve the same result. Let’s imagine, for example, that we need to calculate "2+2" somewhere in a shell script.
For a Node developer, it’s a no-brainer:
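```
$ node -e 'console.log(2 + 2)'
4
```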
Same for Python:
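```
$ python -c 'print(2 + 2)'
4
```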
But Node or Python needs to be installed for this to work. They probably are on your Mac/Linux laptop, but on Alpine 3.11 this means an additional 36 to 64 MB added to the image... But wait, can’t the shell itself do this? Even Alpine’s default shell can!
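```
$ echo $((2 + 2))
4
```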
3. Get rid of cruft
With Docker, it’s very easy to bloat your image without noticing. Every command in a Dockerfile creates a new layer, and all layers are stored separately. So if a big file is generated by one command and removed by a later one, it still contributes to the image size.
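Here’s a sketch of the pitfall (the archive URL is illustrative):

```dockerfile
# Bloated: the archive is stored in the first RUN's layer forever;
# the later rm only hides it in a newer layer.
RUN wget https://example.com/big-archive.tar.gz
RUN tar xzf big-archive.tar.gz
RUN rm big-archive.tar.gz

# Lean: download, extract, and delete within a single layer.
RUN wget https://example.com/big-archive.tar.gz \
    && tar xzf big-archive.tar.gz \
    && rm big-archive.tar.gz
```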
More broadly, we need to be aware of the exact purpose of the image. Only files that support this purpose are essential; everything else is cruft.
Case Study 1: A Node-Based Service
One of Plaid’s services uses Node and TypeScript; its image used to be 2.82 GB. Let’s see if we can apply the three strategies outlined above to reduce its size.
Base image
This image builds upon a custom base image whose size is 1.6 GB. To inspect the image, we can use the awesome dive tool:
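For example (the image name here is a placeholder):

```
$ dive plaid/base-image:latest
```

dive opens an interactive view with the layers on one side and the file tree each layer adds on the other.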
The tool shows all the constituent layers and their respective sizes; it also lets us browse any layer’s contents to see exactly which files were added.
It’s time to go over large layers to see if we can remove or downsize them. We identified layers with the following characteristics:
63 MB for ubuntu:18.04. Can we use Alpine?
494 MB for Ubuntu packages, including those needed for node and npm.
150 MB for the AWS CLI tool.
205 MB for the megabin-platform, Plaid’s “umbrella” CLI tool powering certain AWS and Docker workflows.
555 MB for another batch of packages used to run the service, including LLVM.
After some research, we found that:
We’d better stay with Ubuntu as our base OS image. Alpine is based on a different libc implementation (musl), which may cause trouble due to subtle incompatibilities with glibc; verifying correctness would take significant effort.
AWS CLI isn’t being used anymore and can be removed.
Only a small portion of megabin-platform functionality is used by this service. A separate binary built for the specific use case is much smaller (50-60 MB).
LLVM packages had been added to support now-removed functionality. These can be safely removed.
Essential Files vs. Cruft
Now, let’s run dive on the service image itself.
What immediately draws attention is 984 MB worth of npm packages installed by make setup (it calls npm ci inside). Do we need all these packages?
The answer is: it depends. The service image is used in several different contexts:
Development.
Linting and unit testing.
Integration tests.
Production.
What’s essential in one context can be cruft in another! It became apparent that the production environment does not need the packages listed under devDependencies in package.json. Furthermore, it doesn’t need the whole gcc-based build toolchain we installed in the base image! Experiments showed that we could shave ~500 MB off the production image by omitting devDependencies and build-related OS packages.
But how can we differentiate contexts? Docker has a concept of multi-stage builds which is directly applicable to this situation. We decided to introduce three stages to our images:
“repo”: sources and dependencies.
“build”: “repo” compiled.
“production”: a stripped-down version of “build”, suitable only for running in the production environment; it is also used for integration tests.
The new Dockerfile for our service looks roughly like this:
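A sketch under those assumptions (stage names match the list above; paths, npm commands, and the final CMD are illustrative, not the exact production file):

```dockerfile
# "repo": sources and dependencies (including devDependencies)
FROM base-image AS repo
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .

# "build": "repo" compiled
FROM repo AS build
RUN npm run build

# "production": stripped down, no devDependencies, no build toolchain
FROM base-image-without-build-toolchain AS production
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```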
The base image used for production (named “base-image-without-build-toolchain” here) is built from ubuntu:18.04 separately. It’s similar to “base-image”, but it doesn’t install the build toolchain.
Case Study 2: The Go Monorepo
Many of Plaid’s internal services are implemented in Go, and they all live in a single monorepo, for which we build a single image. All services are compiled into a single binary called “megabin” (because Go links statically, one binary is more size-efficient than a few dozen per-service ones).
Now, we will apply the same approach to analyzing the Go monorepo image.
Base image
The Dockerfile already includes two stages: development and production. They use different base images, so let’s analyze both.
The development base image (1.4 GB) builds upon golang:1.13.0-alpine3.10; its first four layers come from it. Other notable layers include:
A 686 MB layer with more packages. Do we really need them all?
A 71 MB layer for AWS CLI.
A 206 MB layer for megabin-platform (described in the previous section).
Here’s what we arrived at after digging deeper:
One of the packages, qt5-qtwebkit-dev, pulled in many heavy dependencies because it is a development package; it turned out we didn’t use it directly at all. The wkhtmltopdf utility pulls in qt5-qtwebkit, which was sufficient. Dropping the -dev package saved us ~270 MB (see the sketch after this list).
We can ditch AWS CLI.
We can replace megabin-platform with a smaller, more focused binary.
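A sketch of the qt5-qtwebkit fix in the Alpine-based Dockerfile (exact package sets are illustrative):

```dockerfile
# Before: the -dev package drags in the full Qt development stack
RUN apk add --no-cache wkhtmltopdf qt5-qtwebkit-dev

# After: wkhtmltopdf already pulls in the runtime qt5-qtwebkit it needs
RUN apk add --no-cache wkhtmltopdf
```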
The same ideas were used for the production base image.
Essential Files
Diving into the development image itself, the only notable addition is a 205 MB binary with all Go-based services and tools compiled in. Why is it so big? (With Go’s static linking, binary size also has a huge impact on build time.) To find out, we used another awesome tool for Go developers, goweight:
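goweight reports how much each dependency contributes to the binary; a typical invocation looks like this (the package path is hypothetical):

```
$ go get github.com/jondot/goweight
$ goweight ./cmd/megabin
```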
Most of the packages here don’t originate from Plaid; they were pulled as transitive dependencies by packages implementing our internal Kubernetes (k8s)-related tools. We found a way to isolate these packages and were able to cut the binary size by 40%.
Cruft
Now let’s dive into the production image. What becomes immediately obvious from the dive output is that we are copying /bin/megabin-bootloader and /bin/megabin-platform twice! :(
270 MB of the 457 MB layer had already been removed by replacing qt5-qtwebkit-dev with qt5-qtwebkit. But what was taking up 413 MB?
It appears that copying the whole repo directory isn’t a great idea: it contains temporary files and build artifacts that aren’t needed in production. By copying more selectively, we were able to save another ~150 MB.
Wrap-up and conclusion
Let’s review some key strategies:
1. Find the right moment for optimization work
To quote Don Knuth, “premature optimization is the root of all evil”. If your image is small, you probably don’t need to optimize it. But if you haven’t paid attention to your image contents for a long time, optimization work might be harder.
2. Use dive to look into your Docker images
With dive, it’s easy to find voluminous layers and see exactly which files each layer adds.
3. Challenge packages that you use in your images
In base images, strive for the absolute minimum number of packages. In app images, build a clear understanding of how each package is used, and document that understanding in Dockerfile comments.
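For example (the package names and the services that need them are hypothetical):

```dockerfile
# imagemagick: thumbnail generation in the documents worker
# libpq-dev: building the native PostgreSQL driver at npm install time
RUN apt-get update \
    && apt-get install -y --no-install-recommends imagemagick libpq-dev \
    && rm -rf /var/lib/apt/lists/*
```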
4. If your image is used in different contexts, consider adding multiple stages
Development, CI, and production contexts will probably have different requirements for packages and other files. Having a focused image for each context helps keep both image size and build time minimal. Instead of maintaining multiple Dockerfiles, use multi-stage Docker builds.
5. Use goweight to analyze your Go binaries
If you see a huge Go binary (100+ MB), consider checking it with goweight to see whether any huge dependencies are being compiled in. To understand why a certain package is being pulled in, use the depth tool.
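depth prints the dependency tree for a package, which helps explain why something heavy ends up in the binary (the module path below is hypothetical):

```
$ go get github.com/KyleBanks/depth/cmd/depth
$ depth github.com/plaid/monorepo/k8s-tools
```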
If you cannot eliminate certain packages, try splitting your binary into two or more. It’s possible that the one with the heavy packages can be rebuilt less frequently.
6. Prevent image size regressions by adding monitoring
Often, trimming the fat once isn’t enough. To prevent bloat from sneaking back in at some point, set up monitoring. For example, the script below will fail if the image size exceeds the limit:
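A minimal sketch of such a check (the image name and the limit are placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

IMAGE="my-service:latest"   # placeholder image name
MAX_SIZE_MB=500             # placeholder limit

# Docker reports the image size in bytes
size_bytes=$(docker image inspect "$IMAGE" --format '{{.Size}}')
size_mb=$(( size_bytes / 1024 / 1024 ))

if (( size_mb > MAX_SIZE_MB )); then
  echo "FAIL: $IMAGE is ${size_mb} MB, over the ${MAX_SIZE_MB} MB limit" >&2
  exit 1
fi
echo "OK: $IMAGE is ${size_mb} MB (limit: ${MAX_SIZE_MB} MB)"
```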