
Container (6): Docker Issues and Bad Habits

Some issues and bad habits I encountered while using Docker, and how I solved them.

Introduction

This is the sixth article about Docker containers. The links to other articles in this series are as follows:

After using Docker for a while, through discussions with ChatGPT, I gradually realized some misconceptions and bad habits I had regarding Docker’s operation logic. To prevent others from making the same mistakes, I decided to document these misconceptions and bad habits for reference. I will continue to update this article whenever I encounter new issues.

Misconceptions

1. The Relationship Between Docker and Containers

Initially, I thought Docker and containers were the same concept, but containers are actually the broader concept: Docker is just one tool that implements containerization. It was Docker’s emergence that made containerization simple and popular, which is why it has become almost synonymous with containers.

However, Docker is not the only containerization tool; others, such as Podman, LXC, and rkt, can achieve the same goal. Each has its own strengths, weaknesses, and suitable scenarios, but Docker is undoubtedly the most popular and mature containerization tool at present. I briefly used Podman when compiling Proton from source, but I did not dig into how it works, so I will not elaborate on it here. If I use Podman or another containerization tool in depth in the future, I will publish related articles.

2. Installing Docker

When I first installed Docker, I used the command apt install docker, which pulls Docker from Ubuntu’s own repository. That build is often not the latest version, and it may lack some features, because it is maintained by the Ubuntu community rather than by Docker itself.

To use the latest Docker version and features, the recommended installation method is to use the Docker official community repository. For details, refer to the official documentation. Generally, the installation process is as follows (for Ubuntu):

  1. Uninstall old versions of Docker

    sudo apt-get remove docker docker-engine docker.io containerd runc
    
  2. Install dependencies

    sudo apt-get update
    sudo apt-get install \
        ca-certificates \
        curl \
        gnupg \
        lsb-release
    
  3. Add Docker’s official GPG key and apt repository

    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  4. Install Docker

    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
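After installing, a quick sanity check can confirm everything works. The sketch below is an assumption-laden convenience wrapper around the standard hello-world test: it falls back to a message instead of failing outright when Docker or its daemon is not available on the machine.

```shell
# Post-install sanity check: run the hello-world image end to end.
# Falls back to a short message when docker (or the daemon) is unavailable.
if command -v docker >/dev/null 2>&1; then
    sudo docker run --rm hello-world || echo "docker daemon not reachable"
else
    echo "docker command not found"
fi
```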

3. docker-compose vs docker compose

In the first article of this series, Container (1): Introduction to Container-Related Knowledge - Containerization, Docker, Docker Compose, Kubernetes / K8s, etc., I mentioned the difference between docker-compose and docker compose. At that time I recommended docker-compose, the standalone command-line tool, because I had run into an issue creating a container with docker compose and only succeeded with docker-compose. I no longer remember the exact cause, but the problem was real.

Later, I learned that the standalone docker-compose (Compose V1) is no longer maintained, while docker compose (Compose V2) is maintained by Docker as a CLI plugin, receives updates faster, and supports more features. I therefore now recommend using docker compose instead of docker-compose.
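A quick way to see which variant a machine actually has is a sketch like the one below (it prefers the V2 plugin, since that is the maintained one):

```shell
# Detect which Compose variant is installed, preferring the V2 plugin.
if docker compose version >/dev/null 2>&1; then
    echo "Compose V2 plugin available"
elif command -v docker-compose >/dev/null 2>&1; then
    echo "standalone docker-compose (V1) available"
else
    echo "no Compose installation found"
fi
```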

Bad Habits

1. Docker Data Management

Previously, I did not understand the differences between the various kinds of Docker data volumes, which led to messy data management. To keep containers from affecting each other when issues arose, and to make unused containers easy to clean up, I created a separate partition for each container and stored all its data there. The advantage of this approach is that an unused container can be cleaned up by simply deleting its partition. The downside is that it wastes a lot of space, because each partition has to be sized up front even when actual usage is minimal. For example, if a container is allocated 20GB but only uses 1GB, 19GB of space is wasted.

Later, after understanding the different data volumes in Docker (details can be found in the third article of this series, Container (3): Docker Best Practices Guide - Managing Data Volumes), I started using a new approach:

  • Place all Docker configuration files (such as docker-compose.yml and various config.yml files) in a single directory for easy management with git.
  • Store all persistent data required by containers on a large-capacity hard drive or partition, preventing excessive system space usage while avoiding space wastage.
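For example, a layout like the following sketch, where the paths /opt/compose and /mnt/data are hypothetical placeholders for the git-managed configuration directory and the large-capacity drive:

```yaml
# /opt/compose/docker-compose.yml -- kept under git together with config files
services:
  nginx:
    image: nginx:1.23.3
    volumes:
      # bind-mount persistent data onto the large-capacity drive
      - /mnt/data/nginx:/usr/share/nginx/html:ro
```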

2. Docker Image Management

In the past, while using Docker, I had developed the habit of managing containers with docker-compose.yml files but habitually pulled images with the latest tag. The downside of this approach is that over time I would forget which image version I had actually pulled, and after a long gap an update could fail because it skipped several versions at once. The latest tag also made it hard to notice whether the image version had been updated at all.

So later, I started specifying the image version number in the docker-compose.yml file, for example:

version: '3'
services:
  nginx:
    image: nginx:1.23.3

This way, I can clearly see the current image version number being used. By comparing it with the official newly released image version number, I can easily determine how many versions behind I am. This allows for targeted updates when necessary.
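The pinned tags can also be pulled out of the compose file mechanically for comparison against upstream releases. A minimal sketch (the file here is created inline purely for illustration, so run it in an empty directory):

```shell
# Write a small compose file (illustration only; would clobber a real one).
cat > docker-compose.yml <<'EOF'
services:
  nginx:
    image: nginx:1.23.3
EOF
# Extract every "image:" line so the tags can be compared with upstream.
grep 'image:' docker-compose.yml | awk '{print $2}'
# → nginx:1.23.3
```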

Additionally, I have recently started using WUD (What’s Up Docker) to monitor container updates. For more details, please refer to the fifth article in this series, Container (5): Docker Best Practices Guide - Container Update Monitoring Tool WUD (What’s Up Docker).

3. Docker Permissions

When I first started using Docker, I used the version from the Ubuntu community repository and did not add my user to the docker group during installation. As a result, I had to prepend sudo to every Docker command. Later, when I started using docker-compose, I still needed sudo to run it correctly; otherwise it would fail with permission errors, because the Docker daemon’s socket is only accessible to root and the docker group by default.

Eventually, I realized this and created the docker group (it may already exist after installation), then added my current user to it. This let me run Docker commands without sudo.

sudo groupadd docker
sudo usermod -aG docker $USER

Remember to log out and log back in, or run

newgrp docker

After this, I could run docker or docker compose commands without needing to prepend sudo.
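Group membership can be confirmed without opening a new login shell, using only standard tools (a sketch; the message text is my own, not Docker output):

```shell
# Print the current user's groups and check for "docker" membership.
id -nG
if id -nG | grep -qw docker; then
    echo "user is in the docker group"
else
    echo "user is NOT in the docker group (log out/in or run newgrp docker)"
fi
```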

Licensed under CC BY-NC-SA 4.0
Last updated on Jun 27, 2025 00:00 UTC