Complete Docker Tutorial: A Beginner's Guide to Containerization in 2025

Complete step-by-step Docker tutorial covering everything from installation to production deployment. Perfect for beginners wanting to master containerization.

Punit Kumar
Senior DevOps Engineer
15 min read
#docker #containers #tutorial #devops #docker-compose #deployment #microservices

Docker has revolutionized how we build, ship, and run applications. Whether you're a developer looking to streamline your workflow or a DevOps engineer architecting scalable systems, this comprehensive Docker tutorial will take you from zero to production-ready containerization.

What is Docker and Why Use It?

Understanding Containerization

Docker is a containerization platform that packages applications with their dependencies into lightweight, portable containers. Unlike virtual machines, containers share the host OS kernel, making them incredibly efficient.
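
A quick way to see this kernel sharing for yourself (once Docker is installed, as covered below): a container reports the host's kernel, because it has none of its own.

# Prints the kernel release of the Docker host (or of Docker Desktop's Linux VM
# on macOS/Windows) -- the Alpine container ships no kernel of its own
docker run --rm alpine uname -r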

Key Benefits:

  • Consistency: "It works on my machine" becomes "It works everywhere"
  • Scalability: Easy horizontal scaling of applications
  • Resource Efficiency: Lower overhead than VMs
  • Rapid Deployment: Start containers in seconds
  • DevOps Integration: Seamless CI/CD pipeline integration

Docker vs Virtual Machines

| Feature        | Docker Containers | Virtual Machines |
| -------------- | ----------------- | ---------------- |
| Resource Usage | Lightweight (MBs) | Heavy (GBs)      |
| Startup Time   | Seconds           | Minutes          |
| Isolation      | Process-level     | Hardware-level   |
| Portability    | Excellent         | Limited          |
| Performance    | Near-native       | Overhead         |

Docker Installation Guide

Install Docker on Windows

  1. Download Docker Desktop

    # Visit https://www.docker.com/products/docker-desktop
    # Download and run the installer
    
  2. Enable WSL 2 Backend (Recommended)

    # Enable WSL feature
    dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
    
    # Enable Virtual Machine Platform
    dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
    
  3. Verify Installation

    docker --version
    docker run hello-world
    

Install Docker on macOS

# Using Homebrew
brew install --cask docker

# Or download from Docker website
# https://www.docker.com/products/docker-desktop

Install Docker on Linux (Ubuntu/Debian)

# Update package index
sudo apt-get update

# Install required packages
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

# Add user to docker group
sudo usermod -aG docker $USER
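
# Log out and back in (or run "newgrp docker") for the group change to take effect,
# then verify the installation
docker run hello-world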

Docker Fundamentals

Understanding Docker Images

Docker Images are read-only templates used to create containers. Think of them as blueprints for your applications.

# Pull an image from Docker Hub
docker pull nginx:latest

# List local images
docker images

# Search for images
docker search nodejs

Working with Docker Containers

Containers are running instances of Docker images.

# Run a container
docker run -d --name my-nginx -p 8080:80 nginx:latest

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop my-nginx

# Remove a container
docker rm my-nginx

Essential Docker Commands

Container Management

# Run container interactively
docker run -it ubuntu:20.04 /bin/bash

# Execute command in running container
docker exec -it container_name /bin/bash

# View container logs
docker logs container_name

# Monitor container stats
docker stats container_name

Image Management

# Build image from Dockerfile
docker build -t my-app:1.0 .

# Tag an image
docker tag my-app:1.0 username/my-app:latest

# Push image to registry
docker push username/my-app:latest

# Remove image
docker rmi image_name

Creating Your First Dockerfile

Basic Dockerfile Structure

# Use official Node.js runtime as base image
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Expose port
EXPOSE 3000

# Define default command
CMD ["npm", "start"]
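
With this Dockerfile saved in your project root, you can build and run the image (my-node-app below is just an example tag):

# Build an image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run it in the background and publish the exposed port
docker run -d -p 3000:3000 --name my-node-app my-node-app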

Dockerfile Best Practices

1. Use Multi-Stage Builds

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["npm", "start"]

2. Optimize Layer Caching

# ✅ Good - Dependencies change less frequently
COPY package*.json ./
RUN npm install

# Copy source code last
COPY . .

# ❌ Bad - Invalidates cache on every code change
COPY . .
RUN npm install

3. Use .dockerignore

node_modules
npm-debug.log
.git
.gitignore
README.md
.nyc_output
coverage
.env

Docker Compose for Multi-Container Applications

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications from a single YAML configuration file. Modern Docker installations ship Compose v2 as a CLI plugin invoked as docker compose; the older standalone docker-compose command used below works the same way.

Basic docker-compose.yml

version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
    depends_on:
      - database
      - redis

  database:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Docker Compose Commands

# Start all services
docker-compose up

# Start in background
docker-compose up -d

# Stop all services
docker-compose down

# View logs
docker-compose logs

# Scale a service
docker-compose up --scale web=3

# Rebuild images
docker-compose build

Advanced Docker Concepts

Docker Networking

Default Networks

# List networks
docker network ls

# Inspect network
docker network inspect bridge

# Create custom network
docker network create my-network

# Run container on specific network
docker run --network my-network nginx

Container Communication

# docker-compose.yml
version: '3.8'

services:
  frontend:
    build: ./frontend
    networks:
      - app-network

  backend:
    build: ./backend
    networks:
      - app-network
      - db-network

  database:
    image: postgres:13
    networks:
      - db-network

networks:
  app-network:
  db-network:
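
Services that share a network can reach each other by service name, because Docker provides built-in DNS on user-defined networks; services on different networks cannot. A quick way to verify this once the stack is running (assuming the images include a shell and ping, as Alpine-based images do):

# backend and database share db-network, so the name resolves
docker-compose exec backend ping -c 1 database

# frontend is only on app-network, so it cannot reach the database
docker-compose exec frontend ping -c 1 database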

Docker Volumes for Data Persistence

Types of Volumes

  1. Named Volumes (Recommended)

# Create named volume
docker volume create my-data

# Use in container
docker run -v my-data:/data nginx

  2. Bind Mounts

# Mount host directory
docker run -v /host/path:/container/path nginx

  3. tmpfs Mounts (In-memory storage)

docker run --tmpfs /tmp nginx
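
To see which volumes exist and where a named volume lives on the host:

# List volumes and show the host path (Mountpoint) of a named volume
docker volume ls
docker volume inspect my-data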

Docker Security Best Practices

1. Run as Non-Root User

FROM node:18-alpine

# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

# Switch to non-root user
USER nextjs

COPY --chown=nextjs:nodejs . .
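
You can confirm the resulting container no longer runs as root (assuming the image is tagged my-app):

# Should report uid=1001, not uid=0 (root)
docker run --rm my-app id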

2. Use Official Images

# ✅ Good - Official image
FROM node:18-alpine

# ❌ Avoid - Unknown source
FROM random-user/node:latest

3. Scan for Vulnerabilities

# The legacy "docker scan" command has been retired; use Docker Scout instead
docker scout cves my-app:latest

# Get a quick summary of an image's vulnerability posture
docker scout quickview my-app:latest

Production Deployment Strategies

Container Orchestration with Kubernetes

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
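
Deploying the manifest and checking the rollout uses standard kubectl commands (assuming kubectl is configured against a cluster):

kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
kubectl get pods -l app=my-app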

Docker Swarm for Simple Orchestration

# Initialize swarm
docker swarm init

# Deploy stack
docker stack deploy -c docker-compose.yml my-app

# Scale service
docker service scale my-app_web=5

# Update service
docker service update --image my-app:v2 my-app_web

Health Checks and Monitoring

# Add a health check to the Dockerfile (the image must include curl for this check to work)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

# docker-compose.yml health check
version: '3.8'
services:
  web:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
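
Once a health check is defined, docker ps shows the health state alongside the container status, and you can query it directly:

# Prints starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' container_name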

Performance Optimization

Image Size Optimization

1. Choose Minimal Base Images

# ✅ Alpine-based image - well under 200MB (the Alpine base itself is only ~5MB)
FROM node:18-alpine

# ❌ Full Debian-based image - close to 1GB
FROM node:18

2. Multi-stage Builds

FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./
CMD ["node", "server.js"]
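
To verify the savings, compare image sizes and per-layer cost (my-app is a placeholder tag):

# Show the final image size
docker images my-app

# Show how much each layer contributes
docker history my-app:latest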

Container Performance Tuning

# Limit memory and CPU
docker run -m 512m --cpus="1.5" my-app

# Set restart policy
docker run --restart=unless-stopped my-app

# Use specific restart conditions
docker run --restart=on-failure:3 my-app

Troubleshooting Common Issues

Debug Container Issues

# Check container logs
docker logs --tail 50 container_name

# Execute shell in running container
docker exec -it container_name /bin/sh

# Inspect container configuration
docker inspect container_name

# Check resource usage
docker stats container_name

Common Error Solutions

1. Port Already in Use

# Find process using port
sudo lsof -i :8080

# Kill process
sudo kill -9 PID

2. Permission Denied

# Fix file permissions
sudo chown -R $USER:$USER /path/to/files

# Add user to docker group
sudo usermod -aG docker $USER

3. Out of Disk Space

# Remove unused containers
docker container prune

# Remove unused images
docker image prune

# Remove everything unused
docker system prune -a
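
To see what is actually consuming space before (or after) pruning:

# Summarize disk usage by images, containers, local volumes, and build cache
docker system df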

Real-World Example: Full-Stack Application

Project Structure

my-fullstack-app/
├── frontend/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── backend/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── database/
│   └── init.sql
├── nginx.conf
└── docker-compose.yml

Complete docker-compose.yml

version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:8000
    depends_on:
      - backend

  backend:
    build: ./backend
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:password@database:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - database
      - redis

  database:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql

  redis:
    image: redis:6-alpine
    volumes:
      - redis_data:/data

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - frontend
      - backend

volumes:
  postgres_data:
  redis_data:
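
With this file at the project root, the whole stack builds and starts with one command:

# Build all images and start every service in the background
docker-compose up -d --build

# Check service status and follow the backend logs
docker-compose ps
docker-compose logs -f backend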

Docker in CI/CD Pipelines

GitHub Actions Example

# .github/workflows/docker.yml
name: Docker Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: username/my-app:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

Conclusion

Docker has transformed modern application development and deployment. By mastering these concepts and best practices, you'll be able to:

  • Containerize any application with confidence
  • Build efficient, secure Docker images
  • Orchestrate multi-container applications with Docker Compose
  • Deploy to production with proper monitoring and scaling
  • Integrate Docker into CI/CD pipelines

Next Steps

  1. Practice: Start containerizing your existing projects
  2. Learn Kubernetes: For advanced orchestration
  3. Explore Docker Security: Implement security scanning
  4. Monitor Performance: Use APM tools with containers
  5. Study Production Patterns: Microservices, service mesh

Ready to implement Docker in your project? Contact me for personalized Docker consulting and accelerate your containerization journey.
