Docker Layer Overload

Use of filesystem layers is one of the primary reasons why I started to use Docker. As a programmer who uses Git all the time, the idea of being able to incrementally commit changes to a filesystem was very appealing. Though I found filesystem layers to indeed be very useful, I also found that things get out of hand when they are overused. This article explains the issues and how I deal with them.

Layer Benefits

In my opinion, the biggest benefit of using layers is re-use: more than one image can be built from the same parent. Re-use helps with operations: the update or reconfiguration of a common parent is done once, and the changes apply to all children. Without a common parent, such changes would have to be done for each container. Re-use also helps reduce storage and transmission requirements, allowing for Docker's fast deployment capabilities.

Another benefit of using layers is the persistence of history. In software revision control, persistence of history enables tracking of exactly how software has changed, and it can even be used to revert to an older version when there is a critical bug. Such usage is not common with Docker, where new revisions of images tend to be re-built. (The history of how an image has evolved over time might be tracked by managing its Dockerfile with Git.) Layers can, however, provide a history of how a specific image was built. Such information can be displayed using the docker history command.
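This build history can be inspected for any image present locally; for example, using the image discussed later in this article (the tag is illustrative):

docker history extellisys/apt-cacher-ng:latest

Each line of the output corresponds to one layer, showing the command that created it and the space it consumes.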

Use of tagged layers also allows for hierarchical organization. One can easily see how repositories relate with commands such as docker images --tree.

Layer Drawbacks

A very concrete issue with accumulating many layers is that there is a limit to how many you can have. The limit was originally 42 layers, as imposed by the aufs filesystem. Users ran into cases where they wanted more layers, so the limit has been increased. The current limit is 127 layers.

Layers also complicate filesystem access. Images are read-only, and only the container layer is read-write. What happens when you remove a file that exists on a lower layer? The removal is recorded on the container layer. The actual file on the lower layer is not removed; it is merely hidden. What happens when you list the contents of a directory on a container that is built upon 100 other layers? A simple implementation would need to traverse each layer, from top to bottom, in order to get an accurate representation of the current state. Performance can of course be improved with branch cuts and caching, but additional layers incur additional cost regardless.
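This hiding behavior is easy to observe by hand (a sketch; the container name is arbitrary, and /etc/motd is merely a convenient file to delete):

docker run --name hide-demo extellisys/debian-wheezy:latest rm /etc/motd
docker commit hide-demo hide-demo-image

The committed layer records only a "whiteout" entry for /etc/motd; the file itself remains, untouched, in the read-only base layer, and reappears if the new layer is discarded.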

Layers also clutter the system. This may not be a pertinent issue for some people, but I am frustrated when numerous layers make it difficult for me to grok the output of docker images --tree.

My Philosophy

I do not think that persistence of every command used to build an image is necessary for production, especially when the commands are in a Dockerfile anyway. When it comes to deploying applications, I am only interested in stable release states. In a production environment, I believe that each image should have one of the following purposes:

  • The image is used by more than one child image.
  • The image represents a release, not an intermediate step.

Notice that all images that fit into these categories should be tagged. Any image that is not tagged is a good candidate for removal.
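In the absence of dedicated tooling, such removal can be done with ordinary shell plumbing (a rough sketch; untagged images are listed as <none>):

docker images | grep '<none>' | awk '{print $3}' | xargs docker rmi

Conveniently, docker rmi refuses to remove an image that still has dependent children, so tagged descendants keep their parents safe from this sweep.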


I do not like how each command in a Dockerfile results in a new image. Chaining RUN commands together (using semicolons and parentheses) is a common tactic for collapsing their results into a single image, but even metadata commands such as MAINTAINER result in new images.
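As a sketch of the tactic, two package-management steps that would otherwise produce two layers can be chained so that they execute in a single RUN command (the specific commands here are illustrative):

RUN apt-get update && apt-get install -y apt-cacher-ng

Chaining with && also aborts the build if the first command fails, rather than silently continuing as a semicolon would.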

As an example, consider the following Dockerfile (adapted from docker-image-apt-cacher-ng):

FROM extellisys/debian-wheezy:latest
MAINTAINER Travis Cardwell <>

RUN apt-get install -y apt-cacher-ng

EXPOSE 3142

CMD /usr/sbin/apt-cacher-ng ForeGround=1 CacheDir=/var/cache/apt-cacher-ng

Building this results in four images on top of the base: one that sets the MAINTAINER, one that contains the results of the RUN command, one that sets the EXPOSE metadata, and one that sets the CMD metadata. In the majority of cases, only the end result is desired, and the intermediate images are not needed.

I suggest that the docker build command be changed so that builds can be done without intermediate layers. I would like to be able to build the above Dockerfile and get only one new layer on top of the base image. Such functionality could be made optional via a command-line switch, and I would not be surprised if most users would prefer such behavior to be the default.

There are currently no utilities for managing layers that have already been created. I propose the creation of a new docker squash command that can be used to merge multiple layers into one. (The name comes from the --squash option of git merge.)


As a proof of concept, one can hack repositories by hand. An example of this in practice can be found in the extellisys/apt-cacher-ng repository documentation.
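One approach that works with existing commands is to flatten an image via docker export and docker import (a sketch; the container and tag names are arbitrary):

docker run --name flattened extellisys/apt-cacher-ng true
docker export flattened | docker import - extellisys/apt-cacher-ng:squashed

The imported image consists of a single layer, but note that this round trip discards metadata such as EXPOSE and CMD, which must be re-applied.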