
Manuel Morejón • 7 years ago

Great article.

Nathan Valentine • 7 years ago

Nice summary but I have a request...

Please don't use the ultra-small base images unless they have some sort of package management that can be used to pull in debugging tools should they become necessary. The worst part of container dev is docker-exec'ing/kubectl-exec'ing into a buggy running container and finding out it is based on busybox or LFS, so there are no debugging tools available and no way to even get any without a special sidecar or some other dark arts. The time cost of having dev/operations deal with these situations *far* exceeds the bandwidth/decompression/build-time savings of going too small.
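To illustrate the "dark arts": when the image ships no shell, a plain exec fails outright, and on reasonably recent Kubernetes an ephemeral debug container is the usual escape hatch. A sketch, with placeholder pod/container names:

# No shell in the image means exec has nothing to run:
kubectl exec -it mypod -- /bin/sh        # fails: no such file or directory

# Workaround: attach an ephemeral debug container that brings its own tools
# (requires a cluster with kubectl debug support; mypod/app are placeholders).
kubectl debug -it mypod --image=busybox --target=app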

Igor Sarcevic • 7 years ago

Nice feedback, Nathan. Yes, we always keep our debugging tools ready. Of course, everything depends on the scope and the size of the service you are creating.

srcspider • 7 years ago

Instead of going through && \ hell, it's much easier to organize things by creating install scripts and then COPY / RUN them:

COPY installers/nginx /install/nginx
RUN /install/nginx

Easier to read and reason about in the Dockerfile itself, too.

This makes a few other things easier as well. You can make yourself some helpful bash functions and then include them at the top of each install script:

#!/usr/bin/env bash
. /build.system

pkg-install htop mysql
pkg-install-dev mysql

install-cleanup

Different systems such as CentOS / Ubuntu might use different package managers, but in general they actually use the same package names. The major differences are whether you call "yum install -y --blah-blah-flags package" vs "apt install -y --blah-blah-flags package", and that one ends dev packages with "-dev" while the other ends them with "-devel". You can easily mitigate the differences and the parameter spam with simple pkg-install functions, like the sketch below.
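A minimal sketch of what /build.system might contain (pkg-install / pkg-install-dev are this comment's naming convention, not a standard tool):

#!/usr/bin/env bash
# Sketch of /build.system: pick the package manager at runtime and
# normalize the dev-package suffix (-dev on apt, -devel on yum).
if command -v apt-get > /dev/null 2>&1; then
    pkg-install()     { apt-get update && apt-get install -y --no-install-recommends "$@"; }
    pkg-install-dev() { pkg-install "${@/%/-dev}"; }
else
    pkg-install()     { yum install -y "$@"; }
    pkg-install-dev() { pkg-install "${@/%/-devel}"; }
fi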

By running scripts you also get a nice view in docker history of what takes up how much space.

If something is absolutely, stupidly large (build dependencies, for example) then it can sometimes be worthwhile to split the installs into multiple scripts and intentionally create more layers. The reason is that many tools will try to upload multiple layers at a time, so having more may make pushes a tiny bit faster (you're still pushing through the same pipe, so it's not the greatest optimization).

Finally, if you want to save on the order of gigabytes (usually the case with compiled stuff), try prefixing the installs into some random directory (instead of /usr), then see what they actually produce that's useful and remove the rest in a cleanup step. Even for a regular package install (no matter how small) you should have a generic cleanup command at the end of your scripts (such as install-cleanup above) that clears the yum cache, clears /tmp, removes build directories, etc. This can turn atrociously large layers into layers of only a few megabytes.
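A possible install-cleanup along those lines (a sketch assuming a yum-based image; the exact paths are examples):

# Sketch: wipe package-manager caches and build leftovers so they
# never get baked into the layer.
install-cleanup() {
    yum clean all
    rm -rf /var/cache/yum /tmp/* /var/tmp/* /build
}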

ps. "du -sh some_path" is your friend (gets size of a directory)

Keith Patrick • 5 years ago

I use the same Dockerfile for all my builds:

# A generic Dockerfile

ARG FROM
FROM ${FROM}

ARG FROM
ARG BUILD_ENV=default.env
ARG BUILD_SCRIPT=default.sh
ARG BUILD_CONFIG=config-default.sh
ARG UID=888
ARG USER=docker
ARG ROOT_SRC=root

#true - to continue and create image for manual inspection and debugging
#exit - on error for CI builds
ARG ON_FAIL=exit

USER 0
COPY ${ROOT_SRC}/. /

# Use a proper readable(?)/debuggable bash file!
RUN /bin/sh "/build/${BUILD_SCRIPT}" && echo "Build Success" || { echo "Build Failed" && $ON_FAIL 99; }

USER ${UID}
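Invoking it might look something like this (the base image, script name, and tag here are placeholders):

docker build \
    --build-arg FROM=alpine:3.12 \
    --build-arg BUILD_SCRIPT=myapp.sh \
    --build-arg ON_FAIL=true \
    -t myapp:debug .

With ON_FAIL=true the RUN step succeeds even when the build script fails, so you still get an image you can run and inspect, per the comment in the file.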

Kyle C. Quest • 7 years ago

Another option is to keep everything as-is and use DockerSlim (http://dockersl.im) to shrink the image... though I haven't tried it with Elixir projects yet :-)
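For the curious, the basic invocation is a single command against an existing image (the image name is a placeholder):

docker-slim build my-elixir-app:latest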

Pascal van Kooten • 7 years ago

I really like this article! Great advice, and well written.

Harshit Saini • 4 years ago

Beautiful article. Loved reading it :)