Understanding Docker Containers

In a previous article, we talked about Docker images, but we could only devote a small section to Docker containers. Now, let's go deeper.

Docker Containers

A Docker container is created from a Docker image. If a Docker image is a recipe, think of the Docker container as the dish that is prepared from it.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.

Containers isolate software from its surroundings, for example, the differences between development and staging environments, and help reduce conflicts between teams running different software on the same infrastructure.

So to actually take advantage of a Docker image, we have to run it. When we run a Docker image, we create a Docker container.

Running a Docker Image

The basic way of running a Docker image is through the docker run command.

Docker runs processes in isolated containers. A container is a process which runs on a host. The host may be local or remote. When an operator executes docker run, the container process that runs is isolated in that it has its own file system, its own networking, and its own isolated process tree separate from the host.

The full documentation details all the options that you can add to the docker run command. We will just run through the ones that we use regularly.

In this article, we will use the AdonisJs image we created in the previous article: stephenafamo/adonisjs.

Specifying the Docker Image

The docker run command needs at least one argument: the image you want to run. For example:

docker run stephenafamo/adonisjs

We can also specify the tag of the image we want to run, like this:

docker run stephenafamo/adonisjs:1.0.0

To view the running containers, we use the docker ps command:

docker ps

If we want to see all containers, including those that have exited, we add the -a flag:

docker ps -a

NOTE: To see all the information regarding a container, we can inspect it by running docker inspect container_name.

Naming containers

When we list the running Docker containers, the last column is the name of the container. By default, Docker assigns a random string as the name of the container.

We can specify the name of the container using the --name flag, like this:

docker run --name adonis stephenafamo/adonisjs:1.0.0

Now when we run docker ps, we can see our container properly named.


Foreground or background

You may have noticed by now that when we ran the adonis container, it began executing in the terminal, and you probably had to stop the process or open a new terminal in order to check the containers. This is because the container was running in the foreground.

We can make our container run in the background, in detached mode, by adding the -d or --detach flag, like this:

docker run --name adonis -d stephenafamo/adonisjs:1.0.0
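If the foreground/background distinction feels abstract, a rough shell analogy (no Docker involved, process and variable names are ours) may help: detaching is like running a job in the background, which leaves the terminal free.

```shell
# Rough analogy for docker run -d: start a long-running process in
# the background so the terminal stays free for other commands.
sleep 1 &          # the "container process", running in the background
bg_pid=$!
echo "prompt is free; background pid is $bg_pid"

# Like a detached container, the process keeps running until it exits.
wait "$bg_pid"
rc=$?
```

Just as `wait` brings the job's status back, docker attach or docker logs lets you reconnect to a detached container's output.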

If you do not need a container to persist, that is, if you want Docker to automatically clean up the container and remove its file system when the container exits, you should run the container with the --rm flag.

docker run --name adonis --rm stephenafamo/adonisjs:1.0.0

Constraining resources

When you are running multiple Docker containers, you may decide to limit the amount of resources available to a specific container. By default, a container will attempt to use all of the host's resources. The full list of resource flags is available in the official documentation; let's just look at the ones used frequently.


Memory

We can specify the amount of available memory by using the -m or --memory flag.


docker run --name adonis -d -m 4g stephenafamo/adonisjs:1.0.0

Memory and Swap

We can specify the total amount of memory plus swap available by using the --memory-swap flag.


docker run --name adonis -d -m 4g --memory-swap 8g stephenafamo/adonisjs:1.0.0

NOTE: You should always set the memory limit (-m) when using the --memory-swap limit.
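A common point of confusion is that --memory-swap is the total of memory plus swap, not the amount of swap by itself. A small sketch of the arithmetic, using the values from the example above (variable names are ours):

```shell
# --memory-swap sets memory PLUS swap, so the swap actually available
# to the container is the difference between the two values.
mem_gb=4           # the value passed to -m
memswap_gb=8       # the value passed to --memory-swap

swap_gb=$((memswap_gb - mem_gb))
echo "container may use ${swap_gb}g of swap"
```

So the example above gives the container 4g of memory and 4g of swap, not 8g of swap.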


CPUs

We can limit the number of CPUs a container can use with the --cpus flag.


docker run --name adonis -d --cpus 2 stephenafamo/adonisjs:1.0.0

CPU Shares

Instead of manually keeping track of the number of CPUs each container may use, we can assign relative weights to different containers and let Docker share the processing power appropriately. The default weight is 1024, and the shares only take effect when containers are competing for CPU time.

We do this using the -c or --cpu-shares flag.


docker run --name adonis -d --cpu-shares 512 stephenafamo/adonisjs:1.0.0
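To see what the relative weights mean in practice, here is a small illustration of how CPU time is divided between two busy containers (the container names and values are hypothetical, and shares only matter under contention):

```shell
# Two hypothetical containers competing for CPU time:
# container A uses the default weight, container B uses half of it.
shares_a=1024
shares_b=512
total=$((shares_a + shares_b))

# Under contention, each container's slice is its share of the total.
pct_a=$((100 * shares_a / total))
pct_b=$((100 * shares_b / total))
echo "A gets ~${pct_a}% of CPU time, B gets ~${pct_b}%"
```

When only one container is busy, it can still use all available CPU regardless of its weight.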


Exposing and Publishing Ports

During the creation of a Docker image, some ports are usually exposed. This is done with the EXPOSE directive in the Dockerfile.

We can expose additional ports using the --expose flag of the docker run command.

docker run --name adonis -d \
    --expose 80 --expose 443 \
    stephenafamo/adonisjs:1.0.0

We can then bind some ports on our host machine to ports in our docker container by “publishing” these ports.

If we add the -P (upper case) or the --publish-all flag to our docker run command, all the exposed ports will be published and mapped to random high-order ports on the host machine.

docker run --name adonis -d \
    --expose 80 --expose 443 \
    -P \
    stephenafamo/adonisjs:1.0.0

We can specify which ports we want to publish using the -p (lower case) or the --publish flag.

docker run --name adonis -d \
    --expose 80 --expose 443 -p 80 \
    stephenafamo/adonisjs:1.0.0

We can further specify which port on the host should be mapped, like this:

docker run --name adonis -d \
    --expose 80 --expose 443 \
    -p 32277:80 -p 23458:443 \
    stephenafamo/adonisjs:1.0.0

NOTE: If we specify a container port using the -p flag, we do not need to expose it separately; it will be exposed automatically if it was not already. So we can do something like this:

docker run --name adonis -d \
    -p 32277:80 -p 23458:443 \
    stephenafamo/adonisjs:1.0.0
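To keep the direction of the mapping straight, remember that -p takes host_port:container_port. A small sketch that picks the example mappings above apart (the variable names are ours):

```shell
# The -p flag maps host_port:container_port. Split each mapping from
# the example above to see which side is which.
for mapping in 32277:80 23458:443; do
    host_port=${mapping%%:*}       # the part before the colon (host)
    container_port=${mapping##*:}  # the part after the colon (container)
    echo "host port ${host_port} -> container port ${container_port}"
done
```

So a request to port 32277 on the host reaches port 80 inside the container.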

Environment Variables

A lot of Docker images make use of environment variables to determine how the Docker container will behave when it is run. Also, we may need to specify environment variables for the application we intend to run within the container.

For example, the stephenafamo/adonisjs:1.0.0 image uses an environment variable to add flags to the adonis install command, which it runs when the container is started.

We can easily set the environment variables of the container when it is created using the -e or --env flag.

docker run --name adonis \
    -d \
    -e "adonisFlags=--slim --yarn" \
    stephenafamo/adonisjs:1.0.0

For the sake of security or collaboration, we may prefer to keep the environment variables in a separate place. In that case, we can use the --env-file flag to specify a file which contains the environment variables.

docker run --name adonis \
    -d \
    --env-file /path/to/env/file \
    stephenafamo/adonisjs:1.0.0

NOTE: We can use the -e and --env-file flags multiple times to specify multiple environment variables or env files.
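In case you have never written one, an env file is just one KEY=value pair per line, with lines starting with # treated as comments. A sketch of what such a file might look like for our image (the path and the NODE_ENV entry are example values of ours):

```shell
# Create an example env file. Each non-comment line becomes one
# environment variable in the container, as if passed with -e.
cat > /tmp/adonis.env <<'EOF'
# flags passed to the adonis install command (example values)
adonisFlags=--slim --yarn
NODE_ENV=production
EOF

# Show the variables that would be set (comments are ignored):
grep -v '^#' /tmp/adonis.env
```

Note that values are taken literally; there is no need to quote them as you would in a shell script.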


Mounting Volumes

Docker volumes are invaluable when we need to persist data. Since all the data in a Docker container is destroyed whenever the container is removed, we need to use volumes in many situations.

We will talk about Docker volumes in detail in a later article. We can create volumes, attach them to one or more containers, and the data in the volume will persist even if the containers are removed.

For the sake of simplicity in this article, we will simply mount a directory in our Docker container.

Mounting a directory is a good way to persist data because the files in that directory will remain even if the container is deleted.

We can mount a directory using the -v or --volume flag.

docker run --name adonis \
    -d \
    -v /path/to/mount/directory:/path/to/destination/in/container \
    stephenafamo/adonisjs:1.0.0
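The key property of a mounted directory is that it lives on the host, so it outlives any container. A small Docker-free sketch of the host side of such a mount (the directory and file names are hypothetical):

```shell
# The host side of a bind mount: anything the container writes to the
# mounted path ends up in this directory on the host.
host_dir=/tmp/adonis-uploads
mkdir -p "$host_dir"

# Pretend the containerised app saved a file into the mounted path:
echo "uploaded file" > "$host_dir/avatar.txt"

# Even after the container is removed, the file remains on the host:
ls "$host_dir"
```

A new container mounting the same directory would see the file immediately, which is also a handy way to share data between containers.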


Restart Policies

Several things can cause our Docker container to exit. We can control what happens when the container exits using the --restart flag. The options are “no”, “always”, “unless-stopped” and “on-failure”.


docker run --name adonis \
    -d \
    --restart always \
    stephenafamo/adonisjs:1.0.0
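The difference between the policies comes down to when a restart is triggered. A toy simulation of the on-failure policy (no real Docker involved; the variables are ours): the "container" is only restarted when it exits with a non-zero status.

```shell
# Toy model of --restart on-failure: restart only after a non-zero
# exit, and give up after a bounded number of attempts.
max_retries=3
attempts=0
status=1              # pretend the process crashed on its first run
while [ "$status" -ne 0 ] && [ "$attempts" -lt "$max_retries" ]; do
    attempts=$((attempts + 1))
    status=0          # pretend the restart succeeded this time
done
echo "restarted $attempts time(s), final status $status"
```

By contrast, "always" restarts the container even after a clean exit, and "unless-stopped" behaves like "always" except when you have stopped the container yourself.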


Network Settings

We can modify the network settings of a container using various flags. This is especially useful when you need to customise how your containers talk to each other; the right networking configuration allows us to do some really powerful things. The full list of networking flags is available in the official documentation.


Networking in Docker is another area we will treat in more detail in a later article.


Conclusion

We have gone through how to run a Docker container and some of the many ways we can customize its behaviour.

I hope that this makes you even more comfortable with using Docker. I believe that it is a wonderful tool.

As always, I appreciate comments, suggestions and corrections.
