Beyond docker run and docker build
Most developers know the basics. These 10 tips are the ones that separate "it works" Docker from "it works well in production" Docker.
1. Use Multi-Stage Builds
Multi-stage builds drastically reduce final image size by separating build tools from runtime:
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Runtime — only what's needed
FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json .
USER node
EXPOSE 3000
CMD ["node", "dist/index.js"]
A Node.js image that weighs around 900MB with build tools included can drop to roughly 150MB in the runtime stage.
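You can compare the stages yourself with --target (a sketch, assuming the Dockerfile above sits in the current directory and the image is tagged myapp):

```shell
# Build only the first stage, then the full image, and compare sizes
docker build --target builder -t myapp:builder .
docker build -t myapp:runtime .
docker images myapp   # the SIZE column shows the difference
```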
2. Order Dockerfile Layers for Cache Efficiency
Docker caches layers. A changed layer invalidates all subsequent layers. Put things that change rarely at the top:
# ✅ Good order: dependencies before source code
COPY package*.json ./ # Changes rarely
RUN npm ci # Cached unless package.json changes
COPY . . # Changes every build — but deps are already cached
# ❌ Bad order: source code before dependencies
COPY . . # Changes every build
RUN npm ci # Reinstalls every time!
3. Use .dockerignore
Tell Docker what not to send to the build context:
node_modules
.git
.next
dist
*.log
.env
.env.local
README.md
Without this, Docker sends your entire node_modules (often hundreds of MB) to the daemon on every build.
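To estimate how much the exclusions save before a build, you can approximate the context size with du (a rough sketch; GNU du assumed, and --exclude only mimics what .dockerignore will skip):

```shell
# Context size with everything included
du -sh .
# Context size with the heaviest .dockerignore entries excluded
du -sh --exclude=node_modules --exclude=.git .
```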
4. Run as Non-Root User
By default, Docker containers run as root. This is a security risk:
# Create a non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# Switch to it
USER nextjs
The official node images (including node:alpine) already ship with a non-root node user:
USER node # exists in node:alpine already
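Note that the addgroup/adduser flags above are the BusyBox variants found in Alpine images. On Debian-based images (node:20-slim, etc.) the equivalent looks like this (a sketch; the appuser name is illustrative):

```dockerfile
# Debian/Ubuntu equivalent of the Alpine commands above
RUN groupadd --gid 1001 nodejs \
    && useradd --uid 1001 --gid nodejs --shell /bin/false appuser
USER appuser
```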
5. Add Health Checks
A health check lets Docker track whether your container is actually serving traffic; the status appears in docker ps and docker inspect. (Plain Docker only records the status; restarting unhealthy containers is the job of an orchestrator such as Swarm or Kubernetes.)
# Note: curl is not included in node:alpine by default; install it,
# or use wget -q --spider instead (BusyBox wget ships with Alpine)
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1
// Add a health endpoint to your app
app.get('/health', (req, res) => {
res.json({ status: 'ok', uptime: process.uptime() })
})
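Once the container is running, you can read the health status Docker records (a sketch; replace mycontainer with your container's name):

```shell
# "starting", "healthy", or "unhealthy" appears in the STATUS column
docker ps
# Full health-check history for one container
docker inspect --format '{{json .State.Health}}' mycontainer
```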
6. Pin Image Versions (With SHA)
# ❌ Bad: "latest" can change unexpectedly
FROM node:latest
# ✅ Better: pin the version tag
FROM node:20-alpine
# ✅ Best: pin to exact digest (immutable)
FROM node:20-alpine@sha256:abcdef1234...
Using a digest guarantees the exact same base image on every build, even if someone later repoints the tag.
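To find the digest for a tag you already use (a sketch; the digest printed will differ per platform and release):

```shell
# Pulling prints the digest, and it is also stored locally afterwards
docker pull node:20-alpine
docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
```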
7. Use Docker BuildKit
BuildKit is Docker's next-gen build engine — enabled by default in Docker 23+, but worth knowing:
# Enable BuildKit (older Docker versions)
DOCKER_BUILDKIT=1 docker build .
# Or set in Docker Desktop settings
BuildKit gives you:
- Parallel layer building
- Better caching
- Secrets mounting (no secrets in image layers)
# Mount secrets during build without storing in image
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install
docker build --secret id=npmrc,src=$HOME/.npmrc .
8. Limit Resource Usage
# Limit CPU and memory to prevent one container from starving others
docker run \
--memory="512m" \
--cpus="0.5" \
myapp
In Docker Compose:
services:
api:
image: myapp
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
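To verify the limits are actually enforced on a running container (a sketch):

```shell
# The MEM USAGE / LIMIT column should show 512MiB, not the host total
docker stats --no-stream
```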
9. Use --init for Proper Signal Handling
When Node.js runs as PID 1, it doesn't get the default signal handlers an ordinary process has, so SIGTERM can be silently ignored and zombie child processes aren't reaped. Use --init so Docker inserts a minimal init process as PID 1:
docker run --init myapp
Or in Dockerfile:
# tini is a minimal init system (on Alpine: apk add --no-cache tini)
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "index.js"]
This ensures graceful shutdowns work correctly.
10. Clean Up Aggressively
# Remove stopped containers
docker container prune
# Remove unused images
docker image prune -a
# Remove unused volumes
docker volume prune
# Nuclear option: remove everything unused
docker system prune -a --volumes
# Check what's using space
docker system df
Docker's disk usage can grow silently into tens of gigabytes. Schedule docker system prune regularly.
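One way to schedule this is a cron entry on the host (a sketch; adjust the retention window to taste, and note that -f skips the confirmation prompt):

```shell
# crontab -e: every Sunday at 03:00, prune unused data older than 72h
0 3 * * 0 docker system prune -af --filter "until=72h"
```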
Key Takeaways
- Multi-stage builds — split build and runtime, cut image size 5-10x
- Layer ordering — dependencies before source code for cache hits
- .dockerignore — never send node_modules to the build context
- Non-root user — security baseline for all production containers
- Health checks — enable automatic restart of unhealthy containers
- Resource limits — prevent runaway containers from starving the host