
Docker Primer

Docker is an essential tool for continuous delivery that you must master in order to achieve fully reproducible builds.

Docker makes it possible for you to version your infrastructure environment (including build and deployment environments) in a way that is highly predictable and recoverable.

With Docker you can go back to any previous version of your environment with minimal effort.

This guide contains essential Docker tips and tricks that I have learned during my career in embedded engineering. They apply not only to embedded work but to all levels of the software stack: from compiling embedded software, to test environments, to running Java tools and deploying web servers.

Simple Docker File

FROM ubuntu:23.10
CMD ["echo", "Hello, World!"]

Build it and run it:

docker build -f Dockerfile.demo -t demo .
docker run --rm demo

Using the --rm flag automatically removes the container when it exits. See docker run --help.

Finding Images

Docker Hub (docker.io) is a registry where others publish images. You can search it from the command line.

docker search node
NAME   DESCRIPTION                                    STARS   OFFICIAL   AUTOMATED
node   Node.js is a JavaScript-based platform for s…  13383   [OK]

Running Containers

# run demo image in a new container
docker run demo
# run demo image in terminal interactive mode and pass command as
# parameters to the default entrypoint
docker run -ti demo command
# Execute command in a running container
docker exec container command

Dockerfile Commands

  • FROM <image>: specifies the base image from which to build the new image. Can be used multiple times for multistage builds. scratch can be used to specify an empty image.
FROM ubuntu:22.04
# commands
FROM ubuntu:22.04 AS base
# commands for a new set of layers
FROM scratch
# copy build artifacts from base image to new image
COPY --from=base /from/path /to/path

The base ubuntu image is currently around 71 MB.
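A fuller multi-stage sketch (the file hello.c and the paths are illustrative): compile a static binary in a full ubuntu stage, then copy only the result into an empty scratch image.

```dockerfile
# Build stage: full toolchain
FROM ubuntu:22.04 AS base
RUN apt-get update && apt-get install -qy gcc libc6-dev \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
COPY hello.c /src/hello.c
# Static linking so the binary needs no libraries at runtime
RUN gcc -static -o /src/hello /src/hello.c

# Final stage: starts from the empty scratch image
FROM scratch
COPY --from=base /src/hello /hello
ENTRYPOINT ["/hello"]
```

The final image is only as large as the binary itself; the whole toolchain stays behind in the discarded base stage.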

  • RUN: executes a command inside the container at build time and commits the result to the image.
FROM ubuntu:23.10
# Ensure apt doesn't pop up any configuration dialogs (good for CI builds)
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -qy vim
CMD ["vim"]

Run this using docker build -f Dockerfile.demo -t demo . && docker run -ti --rm demo

Since vim is an interactive program, you must use -t (allocate a pseudo-terminal) and -i (keep stdin open). Without these flags the container will not handle keyboard input correctly when running an interactive program like vim.
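The effect can be reproduced in plain shell: interactive programs check whether stdin is a terminal, and without `docker run -t` there is no pseudo-TTY, so that check fails.

```shell
# With stdin coming from a pipe (no TTY), the [ -t 0 ] check fails,
# just as it does inside a container started without -t.
RESULT=$(echo | sh -c 'if [ -t 0 ]; then echo "TTY"; else echo "no TTY"; fi')
echo "$RESULT"
```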

The above image is 187MB. We can reduce this size to 147MB by cleaning up after the apt command.

RUN apt-get update \
    && apt-get install -qy vim \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
  • ENTRYPOINT: configures the command that will run when the container starts.

Start shell on startup:

ENTRYPOINT ["/bin/bash"]

Run a nodejs application:

ENTRYPOINT ["node", "myapp.js"]
ENTRYPOINT ["/bin/bash", "-c"]
# Default:
CMD ["echo Welcome"]

Override default when running:

# Override "Welcome" with "Hello World"
docker run --rm demo "echo Hello World"
  • CMD: provides default command-line arguments for the ENTRYPOINT command. If the user overrides these when invoking docker run, the parameters specified with CMD are ignored. If multiple CMD instructions are present, only the last one takes effect.

Run a python application on startup:

ENTRYPOINT ["python"]
# default app
CMD ["myapp.py"]

Override default command line when running image:

docker run -ti --rm demo otherapp.py
  • LABEL: add metadata to the image using key/value pairs.
LABEL maintainer="Martin Schröder"
  • EXPOSE: documents the ports on which a container listens for connections. It does not publish the port by itself; publish exposed ports when running the container with docker run -p (or -P to publish all exposed ports).
EXPOSE 80
  • ENV: sets environment variables in the container
ENV MY_VARIABLE="Hello World"
ENTRYPOINT ["bash", "-c", "echo $MY_VARIABLE"]

Here we use bash -c "<command>" because we want the command to be interpreted by a shell. Had we invoked echo directly (the exec form), the environment variable would not be expanded.
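The difference can be shown outside Docker as well: without a shell, the literal string is passed through; with bash -c, the variable is expanded first.

```shell
export MY_VARIABLE="Hello World"
# Single quotes here stand in for the exec form: no shell expansion happens
LITERAL=$(echo '$MY_VARIABLE')
# bash -c runs a shell, which expands the variable before echo sees it
EXPANDED=$(bash -c 'echo "$MY_VARIABLE"')
echo "$LITERAL"
echo "$EXPANDED"
```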

  • COPY: copies new files into the image from local context while excluding paths defined in .dockerignore file.
COPY source destination
COPY --from=image-alias source destination
  • ADD: copies new files into the filesystem of the image. Local tar archives are automatically extracted; files from remote URLs are downloaded but not extracted.
ADD https://cdn.com/some-archive.tar.gz /path/inside/image
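For local archives, a rough shell equivalent of what the builder does with ADD (all paths here are illustrative):

```shell
# Create a sample archive to stand in for a local build artifact
mkdir -p /tmp/add-demo/src
echo "payload" > /tmp/add-demo/src/file.txt
tar -czf /tmp/add-demo/archive.tar.gz -C /tmp/add-demo/src .
# Equivalent of: ADD archive.tar.gz /path/inside/image
mkdir -p /tmp/add-demo/image-root
tar -xzf /tmp/add-demo/archive.tar.gz -C /tmp/add-demo/image-root
cat /tmp/add-demo/image-root/file.txt
```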
  • VOLUME: creates a mount point with the specified name and marks it as holding an externally mounted volume from the host or from other containers. Any build steps that modify files under the mount point after the VOLUME instruction are discarded.
VOLUME /workdir
# Multiple volumes:
VOLUME ["/vol1", "/vol2"]

Add initial content to the volume:

COPY source /volume/
VOLUME /volume

To persist volumes between executions, you can create a dummy container and then reuse the volumes:

docker create -v /foo --name demo-data demo /bin/true
docker run --volumes-from demo-data -ti demo
# Modify content under the volume (inside the container)
touch /foo/bar
# Starting a second time will still persist the same /foo folder
docker run --volumes-from demo-data -ti demo ls /foo
  • USER: sets the username or UID to use when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile.
USER user
# Run as user during build
RUN command
# Switch back to root
USER root
  • WORKDIR: sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it.
WORKDIR /workdir
  • ARG: defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.

You can define default values inside the Dockerfile. This is convenient when, for example, you want to pin package versions to install.

ARG RENODE_VERSION=1.2.3
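In context, a sketch of how this might be used (the package and the echo placeholder are illustrative assumptions):

```dockerfile
FROM ubuntu:22.04
# Default version; override at build time with:
#   docker build --build-arg RENODE_VERSION=1.3.0 .
ARG RENODE_VERSION=1.2.3
# The variable is available to subsequent build steps
RUN echo "Would install renode ${RENODE_VERSION} here"
```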
  • ONBUILD: adds a trigger instruction to be executed at a later time, when the image is used as the base for another build.

First dockerfile:

FROM ubuntu:23.10
ONBUILD RUN echo "This will run later"

Second dockerfile:

FROM demo
RUN echo "This is a build instruction"

When you build the image from the second dockerfile, the RUN instruction specified in the ONBUILD line executes first, as if it had been written immediately after the FROM instruction.

  • STOPSIGNAL: specifies the signal that docker stop sends to the container's main process (SIGTERM by default).
#!/bin/bash
trap 'quit=1' USR1
quit=0
while [ "$quit" -ne 1 ]; do
    printf 'Do "kill -USR1 %d" to exit this loop after the sleep\n' "$$"
    sleep 1
done
echo The USR1 signal has now been caught and handled
COPY test ./
STOPSIGNAL SIGUSR1
ENTRYPOINT ["./test"]
docker run -ti --rm --name demo demo
docker stop demo
  • HEALTHCHECK: this tells Docker how to test a container to check that it is still working. This can detect cases such as a web server that is stuck in an infinite loop and not responding to any more requests, even though the server process is still running.
HEALTHCHECK --interval=5m --timeout=3s CMD curl -f http://localhost:3000/ || exit 1
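In context, a sketch for a hypothetical Node.js web server (the file names and port are assumptions; curl must be present in the image):

```dockerfile
FROM node:20
COPY server.js .
EXPOSE 3000
CMD ["node", "server.js"]
# Mark the container unhealthy if the endpoint stops responding;
# --start-period gives the app time to boot before failures count
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
    CMD curl -f http://localhost:3000/ || exit 1
```

You can read the current status with docker inspect --format '{{.State.Health.Status}}' container-name.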
  • SHELL: Allows the default shell used for the shell form of commands to be overridden.
SHELL ["/bin/bash", "-c"]

.dockerignore

The .dockerignore file serves a similar purpose to .gitignore in Git.

When you build a Docker image, the Docker command will send the context that includes all files and directories located in the same directory as the Dockerfile by default.

This process can be very slow if you have for example build artifacts in your current folder, so you want to control what is sent to the docker daemon.

Using .dockerignore you can also prevent sensitive information from being added to the image. This can happen for example if you have commands that add everything in the current directory to the image.

.git
*.md
.env.*
/build
__pycache__

Efficient Caching

Bind Mounts

Instead of COPY, you can bind-mount the current directory into the build step:

- COPY . .
  RUN --mount=type=cache,target=/root/.venv/pip \
+     --mount=type=bind,target=. \
      make
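These --mount flags require BuildKit (the default builder in recent Docker releases; older ones need DOCKER_BUILDKIT=1). A sketch for a hypothetical Python project, assuming a requirements.txt in the build context and pip's standard download cache location:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.11
WORKDIR /app
# Reuse pip's download cache across builds, and read the sources
# directly from the build context instead of copying them into a layer
RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,target=. \
    pip install -r requirements.txt
```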

Docker Compose

Docker Compose allows you to define a complete multi-container setup. It will start, stop, and update multiple containers together: bring a stack up with docker-compose up -d and tear it down with docker-compose down.

Simple Node JS Server

---
version: "3"
services:
  app:
    image: "yourapp"
    container_name: app
    hostname: app
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      - NODE_ENV=production
    restart: always
    depends_on:
      - mysql
    networks:
      - appnet
  mysql:
    image: mysql:latest
    container_name: db
    hostname: db
    volumes:
      - ./mysql:/var/lib/mysql:z
    environment:
      MYSQL_ROOT_HOST: "localhost"
      MYSQL_ROOT_PASSWORD: "some password"
    restart: always
    networks:
      - appnet
networks:
  appnet:
    driver: bridge
⚠️

If you do not specify the IP on which to expose an open port, the port will be bound to all interfaces (0.0.0.0) and thus exposed globally. This is almost never what you intend. Even if you set up firewall rules, Docker inserts its own iptables rules ahead of them, making the port reachable from everywhere. Always specify the IP address for all exposed ports.

Pushing Images

To push images to docker hub you need to login and then you can just push the image. By default docker pushes images to docker.io.

docker login

To log in from CI you can use this variant:

echo "your-token" | docker login --username "username" --password-stdin

If you want to push images to a custom registry, you should name the image using the full url of the image in the registry:

docker tag demo registry.gitlab.com/some/image:latest
docker push registry.gitlab.com/some/image:latest

Push Docker Image Over SSH

Push image over ssh and load it on the other end:

docker save "image" | ssh -C root@host docker load

Volume Mounts

It is primarily through volume mounts that you expose the local filesystem to the container environment.

Mount Current Directory

This will mount the current directory under /code inside the container:

docker run -ti -v "$(pwd)":/code image

Mounting Devices

Using volume mounts you can even expose for example USB devices.

docker run -ti --privileged -v /dev/bus/usb:/dev/bus/usb image /bin/bash

There is also the --device option that can be used without privileged mode:

docker run -t -i --device=/dev/ttyUSB0 ubuntu bash

Mounting SSH Keys

If you want to access hosts using the same ssh keys as you have on your local machine from inside a docker container then you can mount your .ssh directory inside docker:

docker run --rm -ti \
    -v ${HOME}/.ssh/:/home/user/.ssh \
    image ssh host command

Container Statistics

docker stats container-name
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O    PIDS
11b65242819f   demo   2.18%   772KiB / 15.34GiB   0.00%   0B / 0B   184kB / 0B   2

Container Logs

docker logs container-name

Dockerized Makefiles

You can combine docker with makefiles to execute local build tasks inside docker images (note: a better approach is to develop inside a devcontainer but this is a separate topic).

IMAGE:=swedishembedded/workstation:latest

ifeq ($(DOCKER),)
RUN:=docker run --rm -ti \
	-v $(PWD):/workdir \
	-v $(HOME)/.ssh/:/home/user/.ssh \
	-w /workdir/ \
	-u $(shell id -u $(USER)):$(shell id -g $(USER)) \
	$(IMAGE)
else
RUN:=
endif

some_target:
	$(RUN) echo "Hello World"

Execute the build in docker and without docker:

make            # default: build inside docker
make DOCKER=0   # without docker

User ID

You can set user id inside docker to the same id as the user that runs docker. This will ensure that files created by the dockerized application will have the same user id as the user running the docker command instead of being set to user id 0 (root):

docker run --rm -ti \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    image command

Limiting Resources

You can limit the memory and CPUs that are accessible to the container:

docker run --memory=512m --cpus=2 demo

Another approach is to give the container a relative weight of "CPU shares". The default weight is 1024; under CPU contention, a container with 512 shares gets half as much CPU time as one with 1024:

docker run -d --name my-limited-app \
    --memory=500m --cpu-shares=1024 \
    my-app-image

Networking

You can run multiple containers on an isolated network:

# Create a network
docker network create my-network
# Run containers within the network
docker run -d --name app --network=my-network app-image
docker run -d --name db --network=my-network db-image

Building in CI

You can build, test and deliver your docker image in GitLab CI.

build:
  stage: build
  image: docker:latest
  tags:
    - docker-dind
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
  script:
    - docker build -t "$IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker run "$IMAGE:$CI_COMMIT_SHORT_SHA" /path/to/self-tests
    - docker image tag "$IMAGE:$CI_COMMIT_SHORT_SHA" "$IMAGE:latest"
    - docker push "$IMAGE:latest"
  after_script:
    - docker rmi "$IMAGE:$CI_COMMIT_SHORT_SHA" || true

Cleanup

Docker creates volumes and retains old images as you download newer versions of them. You can clean up your system with the following commands.

# cleanup old containers
docker system prune
# Remove also dangling volumes (will lose data stored in them)
docker system prune --volumes

Summary

This article should give you a basic understanding of working with docker and how to start integrating it into your daily workflow.

Martin Schröder