Docker Best Practices: 16 Tips for Production-Ready Containers

Complete guide to Docker best practices for optimizing your Dockerfiles, securing your containers, and reducing image size. Practical examples and production tips.

Antoine C
9 min read
#docker #best-practices #dockerfile #security #performance #devops

Most Dockerfiles I review share the same issues: 2 GB images when 200 MB would suffice, 15-minute builds that could take 30 seconds, containers running as root for no valid reason.

The container works locally and the tests pass, but once in production the problems pile up: slow deployments, security vulnerabilities, runaway memory consumption.

The difference between a Dockerfile that "works" and a production-ready Dockerfile often comes down to a dozen best practices. These practices are not complicated, but they require understanding how Docker works under the hood: layer system, build cache, process isolation.

In this article, you will discover the 16 essential Docker best practices to go from amateur images to professional containers. We will cover Dockerfile optimization, security, performance, and production patterns.

This article presents each practice concisely. To go further and practice on real environments, Train With Docker offers "Docker Best Practices" scenarios that guide you step by step through implementing each concept.

Dockerfile Optimization

1. Use multi-stage builds

Multi-stage builds are probably the most impactful technique to reduce your image size. The idea: separate the build environment from the runtime environment.

dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]

The first stage contains everything needed to compile (TypeScript, devDependencies, sources). The second stage keeps only the bare minimum: compiled code and production dependencies.

Typical gain

A Node.js TypeScript project often goes from 1.2 GB to 150 MB with a well-configured multi-stage build.
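
To check the gain on your own project, compare image sizes before and after the change (a quick sketch; myapp is a placeholder tag):

bash
# Build the multi-stage image
docker build -t myapp .

# Check the final size (SIZE column)
docker image ls myapp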

2. Order your instructions to maximize cache

Docker caches each layer. When an instruction changes, all subsequent instructions are rebuilt. The order of your instructions directly impacts your build speed.

dockerfile
# Bad: cache is invalidated on every code change
COPY . .
RUN npm ci
RUN npm run build

# Good: dependencies are cached separately
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

The rule: place elements that rarely change first (system dependencies, then packages, then source code).
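
You can watch the cache at work in the build output (assuming BuildKit, the default builder in recent Docker versions):

bash
# First build: every step runs
docker build -t myapp .

# Edit a source file, then rebuild: the package*.json COPY and npm ci
# steps should be reported as CACHED, and only COPY . . onward reruns
docker build -t myapp .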

3. Minimize the number of layers

Each RUN, COPY, and ADD instruction creates a new layer. Combine related commands to reduce the number of layers.

dockerfile
# Bad: 3 layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean

# Good: 1 layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
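
To see what each layer actually costs, docker history lists every layer with the instruction that created it and its size (myapp is a placeholder tag):

bash
# One line per layer; the largest offenders are easy to spot
docker history myapp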

4. Use a .dockerignore file

The .dockerignore file works like .gitignore: it excludes files from the build context. Without it, Docker copies everything, including node_modules, .git, and your test files.

text
node_modules
.git
.gitignore
*.md
.env*
coverage
.nyc_output
dist

Warning

A large build context slows down every build, even if you don't explicitly copy these files. Docker still has to send them to the daemon.
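
You can measure this directly: BuildKit prints the size of the context it sends near the top of every build.

bash
# Look for the "transferring context" line in the output;
# with a good .dockerignore it should shrink dramatically
docker build -t myapp .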

5. Choose the right base image

The choice of base image impacts final size, security, and available dependencies.

| Image | Size | Use case |
| --- | --- | --- |
| node:20 | ~400 MB | Never in production |
| node:20-slim | ~75 MB | When you need system packages |
| node:20-alpine | ~50 MB | Default choice |
| gcr.io/distroless/nodejs20 | ~188 MB | Maximum security |

Alpine uses musl libc instead of glibc. In 95% of cases, it works perfectly. For the remaining 5% (some native binaries), use -slim.

Google's distroless images are heavier than Alpine, but they contain no shell, no package manager, and no system tools. Only the runtime (here Node.js) and its dependencies are present. The result: a smaller attack surface, and no way for an attacker to execute commands in the container.

Container Security

6. Never run as root

By default, Docker containers run as root. This is a major security issue: if an attacker compromises your application, they have root privileges in the container.

dockerfile
FROM node:20-alpine

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

WORKDIR /app
COPY --chown=nextjs:nodejs . .

# Switch user before CMD
USER nextjs

CMD ["node", "index.js"]

Verification

You can verify the current user with docker exec <container> whoami. If it returns root, you have a problem.
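
A minimal check, assuming the image above is tagged myapp:

bash
# Should print "nextjs", not "root"
docker run --rm myapp whoami

# Should print the UID 1001, not 0
docker run --rm myapp id -u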

7. Limit Linux capabilities

By default, Docker grants your containers a set of Linux capabilities (system permissions). Most are unnecessary for a standard application and represent a security risk.

bash
# Drop all capabilities and add only the necessary ones
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp

Most common capabilities:

| Capability | Usage |
| --- | --- |
| NET_BIND_SERVICE | Listen on a port < 1024 |
| CHOWN | Change file ownership |
| SETUID / SETGID | Change user/group |
| SYS_PTRACE | Debug processes (avoid in prod) |

Principle of least privilege

Always start with --cap-drop=ALL, then add only the capabilities your application actually needs. If your app crashes, the logs will tell you which capability is missing.

For Docker Compose:

yaml
services:
  app:
    image: myapp
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE

8. Use minimal base images

The more packages an image contains, the larger its attack surface. Google's distroless images contain only your application and its runtime dependencies, without shell or system tools.

dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["index.js"]

The downside: you can't get a shell with docker exec to debug, since no shell exists in the image. This is intentional: in production, you should not need a shell in your containers.

9. Scan your images for vulnerabilities

Docker images can contain CVEs (known vulnerabilities) in their system packages or dependencies.

bash
# With Docker Scout (integrated in Docker Desktop)
docker scout cves <image>

# With Trivy (open source)
trivy image <image>

# With Snyk
snyk container test <image>

Integrate these scans into your CI/CD. A scan that discovers a critical vulnerability should block deployment.
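
To make the scan blocking, rely on the scanner's exit code. A sketch with Trivy:

bash
# Exit with code 1 if any CRITICAL or HIGH vulnerability is found,
# which fails the CI job and blocks the deployment
trivy image --exit-code 1 --severity CRITICAL,HIGH myapp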

10. Handle your secrets properly

Never put secrets in your Dockerfile or image. They will remain in the layer history.

dockerfile
# NEVER: the secret stays in the image
ENV DATABASE_URL=postgres://user:password@host/db

# BETTER: use Docker secrets (build-time)
# The secret is temporarily mounted in /run/secrets/ during build
# It is never written to an image layer
RUN --mount=type=secret,id=db_url \
    export DATABASE_URL=$(cat /run/secrets/db_url) && \
    npm run migrate

# To build with this secret:
# docker build --secret id=db_url,src=.env .

# IN PRODUCTION: pass secrets at runtime instead (shell command,
# shown as a comment since this is a Dockerfile):
#   docker run -e DATABASE_URL=$DATABASE_URL myapp

For Docker Swarm and Kubernetes, use their native secrets mechanisms.
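
At runtime, an env file keeps secrets out of your shell history (a sketch; prod.env is a placeholder that must never be committed or copied into the image):

bash
# prod.env contains lines like DATABASE_URL=postgres://...
docker run --env-file ./prod.env myapp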

Performance and Image Size

11. Clean caches in the same layer

Package managers leave caches behind. Deleting them in a later, separate layer doesn't shrink the image: the files still exist in the earlier layer where they were written, and the image ships both.

dockerfile
# Python
RUN pip install --no-cache-dir -r requirements.txt

# Node.js
RUN npm ci --omit=dev && npm cache clean --force

# Debian/Ubuntu
RUN apt-get update && \
    apt-get install -y --no-install-recommends package && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

12. Use COPY instead of ADD

COPY does exactly what its name suggests: copy files. ADD does the same thing, but automatically extracts archives (.tar, .gz) and can download files from a URL. These implicit behaviors make the Dockerfile less readable and can introduce security vulnerabilities if a URL is compromised or if an archive contains unexpected files. Prefer COPY for its predictability.

dockerfile
# Prefer
COPY ./src /app/src

# Avoid (unless you need to extract an archive)
ADD ./src /app/src

Build Best Practices

13. Add labels for metadata

Labels help identify and document your images.

dockerfile
LABEL org.opencontainers.image.title="My Application"
LABEL org.opencontainers.image.description="Backend API"
LABEL org.opencontainers.image.version="1.2.3"
LABEL org.opencontainers.image.authors="[email protected]"
LABEL org.opencontainers.image.source="https://github.com/org/repo"

These labels follow the OCI specification and are recognized by Docker Hub, GitHub Container Registry, and other registries.
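
Labels can then be read back with docker inspect, which is handy for tooling and audits (myapp is a placeholder tag):

bash
# Print a single label using Go template syntax
docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version" }}' myapp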

14. Implement health checks

A health check allows Docker (and orchestrators) to verify that your application is actually working.

dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

Without a health check, Docker considers a container to be running fine as soon as the process starts, even if the application stops responding five seconds later. One caveat: the command runs inside the container, so curl has to be present in the image (Alpine doesn't ship it by default).
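
Once the health check is in place, Docker exposes the result:

bash
# The STATUS column shows (healthy) or (unhealthy) next to "Up ..."
docker ps

# Or query one container directly (mycontainer is a placeholder)
docker inspect --format '{{ .State.Health.Status }}' mycontainer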

15. Understand the difference between ENTRYPOINT and CMD

  • ENTRYPOINT: defines the main command (rarely overridden)
  • CMD: defines default arguments (easily overridden)
dockerfile
# Recommended pattern
ENTRYPOINT ["node"]
CMD ["index.js"]

# Allows you to do (shell commands, shown as comments):
#   docker run myapp                    -> runs node index.js
#   docker run myapp other-script.js    -> runs node other-script.js

Use the exec form ["cmd", "arg"] rather than the shell form cmd arg. With the exec form, your process runs as PID 1 and receives signals like the SIGTERM sent by docker stop directly; with the shell form, it is wrapped in /bin/sh -c, which does not forward signals.
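
The difference shows up when you stop a container. A quick experiment, with hypothetical container names:

bash
# Exec form: node is PID 1, receives SIGTERM, and exits immediately
docker stop app-exec

# Shell form (CMD node index.js): /bin/sh -c is PID 1 and doesn't
# forward SIGTERM, so docker stop waits out its 10 s timeout, then SIGKILLs
docker stop app-shell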

Logging and Observability

16. Write your logs to stdout/stderr

Docker automatically captures everything that goes to stdout and stderr. Don't write to log files.

javascript
// Good: stdout
console.log(JSON.stringify({ level: 'info', message: 'User logged in', userId: 123 }));

// Bad: file
fs.appendFileSync('/var/log/app.log', 'User logged in\n');

JSON format structures your logs for aggregation tools (ELK, Datadog, CloudWatch).

dockerfile
# Don't redirect logs to files
CMD ["node", "index.js"]

# Not this: the exec form doesn't go through a shell, so ">" would be
# passed to node as a literal argument rather than redirecting anything
# CMD ["node", "index.js", ">", "/var/log/app.log"]

Orchestration and Production

Once your images are optimized, a few additional practices apply at runtime.

Limit resources to prevent a container from monopolizing the server:

bash
docker run --memory="512m" --cpus="0.5" myapp

Configure restart policies for resilience:

bash
docker run --restart=unless-stopped myapp

Use rolling updates with Docker Swarm or Kubernetes to deploy without service interruption.
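
With Docker Swarm, for example, a rolling update is a single command (service name and tag are placeholders):

bash
# Replaces the service's containers one by one with the new image
docker service update --image myapp:1.2.4 myservice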

Docker Swarm

If you are preparing for the DCA certification or want to dive deeper into Docker Swarm, Train With Docker offers practical scenarios with preconfigured clusters.

Conclusion

These 16 Docker best practices cover the essential aspects: build optimization, security, performance, and production-readiness. Applied systematically, they transform amateur Dockerfiles into professional configurations.

The most important thing: understand why each practice exists. Multi-stage builds reduce size because they separate build and runtime. Cache works by layer, so order matters. Non-root containers limit the impact of a compromise.

This article skims each practice without going into implementation details. To get your hands dirty, Train With Docker offers "Docker Best Practices" scenarios that let you practice each concept on preconfigured environments, without installing anything locally.
