Containers and Docker Recap

Before diving into AWS container services, let's revisit the fundamentals of containers and Docker. This lesson provides a focused refresher on the concepts you'll need as we explore how AWS runs containers at scale.


What Is a Container?

A container is a lightweight, portable unit that packages an application together with everything it needs to run — code, runtime, libraries, system tools, and configuration files. Unlike virtual machines, containers share the host operating system's kernel, making them significantly smaller and faster to start.

+--------------------------------------------+
|              Container Stack               |
|--------------------------------------------|
|    App A     |    App B     |    App C     |
|  Bins/Libs   |  Bins/Libs   |  Bins/Libs   |
|--------------------------------------------|
|         Container Runtime (Docker)         |
|--------------------------------------------|
|             Host OS / Hardware             |
+--------------------------------------------+

Key Properties of Containers

| Property    | Description |
| ----------- | ----------- |
| Isolated    | Each container has its own filesystem, network stack, and process tree |
| Portable    | A container image built on your laptop runs identically in the cloud |
| Ephemeral   | Containers are designed to be created and destroyed quickly |
| Immutable   | Images are read-only; changes create new images rather than modifying existing ones |
| Lightweight | Containers share the host kernel and typically weigh megabytes, not gigabytes |

Docker Architecture Refresher

Docker uses a client-server architecture:

+------------------+          +-------------------+
|   Docker CLI     |  REST    |   Docker Daemon   |
|   (docker)       | -------> |   (dockerd)       |
+------------------+   API    +-------------------+
                                     |
                           +---------+---------+
                           |                   |
                     +----------+        +----------+
                     | Images   |        |Containers|
                     +----------+        +----------+
                           |
                     +----------+
                     | Registry |
                     +----------+
  • Docker Daemon (dockerd) — the background service that builds, runs, and manages containers
  • Docker CLI — the command-line tool that sends REST API calls to the daemon
  • Docker Registry — a service that stores and distributes container images (Docker Hub is the default public registry)

Images and Layers

A Docker image is built from a Dockerfile — a text file containing step-by-step instructions:

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Each instruction creates a layer. Layers are cached and shared between images, which speeds up builds and reduces storage. This is why the Dockerfile above copies package*.json and runs npm ci before copying the rest of the source: the dependency layers are rebuilt only when the package files change, not on every code edit.

Image Naming Convention

registry/repository:tag

# Examples:
docker.io/library/nginx:1.25       # Docker Hub official image
123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:v1.2.3   # AWS ECR image
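The three components can be pulled apart with plain shell parameter expansion — a minimal sketch, assuming the reference includes an explicit tag and the registry host has no port (in practice the registry and tag are often omitted, defaulting to docker.io and latest):

```shell
ref="123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:v1.2.3"

tag="${ref##*:}"         # everything after the last ':'   -> v1.2.3
rest="${ref%:*}"         # drop the tag                    -> registry/repository
registry="${rest%%/*}"   # first path segment              -> 123456789012.dkr.ecr.eu-west-2.amazonaws.com
repository="${rest#*/}"  # everything after the registry   -> my-app

echo "registry=$registry repository=$repository tag=$tag"
```

The same expansion works on Docker Hub references, where the registry segment is docker.io and the repository includes a namespace such as library/nginx.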

Essential Docker Commands

| Command | Purpose |
| ------- | ------- |
| docker build -t my-app:v1 . | Build an image from a Dockerfile |
| docker run -d -p 8080:3000 my-app:v1 | Run a container in detached mode with port mapping |
| docker ps | List running containers |
| docker logs <container> | View container stdout/stderr |
| docker exec -it <container> /bin/sh | Open a shell inside a running container |
| docker stop <container> | Gracefully stop a container (SIGTERM, then SIGKILL after a grace period) |
| docker rm <container> | Remove a stopped container |
| docker images | List local images |
| docker push <image> | Push an image to a registry |
| docker pull <image> | Pull an image from a registry |

Volumes and Networking

Volumes

Containers are ephemeral — when they are removed, any data written inside the container is lost. Volumes provide persistent storage:

# Named volume
docker run -v my-data:/app/data my-app:v1

# Bind mount (host directory)
docker run -v /host/path:/container/path my-app:v1

Container Networking

Docker provides several network drivers:

| Driver | Description |
| ------ | ----------- |
| bridge | Default — containers on a user-defined bridge can communicate by name (the default bridge provides only IP-based connectivity) |
| host | Container shares the host's network stack directly |
| none | No networking — complete isolation |
| overlay | Multi-host networking for Docker Swarm or distributed setups |
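Name-based discovery is what Docker Compose relies on: it attaches every service to a user-defined bridge network automatically, so services resolve each other by service name. The same thing can be declared explicitly — a sketch, with illustrative service, image, and network names:

```yaml
services:
  web:
    image: my-app:v1          # hypothetical application image
    networks: [app-net]
  db:
    image: postgres:16-alpine
    networks: [app-net]

networks:
  app-net:
    driver: bridge            # user-defined bridge => built-in DNS by service name
```

Here the web service can reach the database at the hostname db, with no IP addresses hard-coded anywhere.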

Multi-Container Applications with Docker Compose

Docker Compose lets you define and run multi-container applications using a YAML file:

services:
  web:
    build: .
    ports:
      - "8080:3000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb

volumes:
  pgdata:

Run everything with a single command:

docker compose up -d
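One caveat: depends_on as written only waits for the db container to start, not for Postgres to accept connections. A healthcheck closes that gap — a sketch, relying on pg_isready, which ships in the official postgres image:

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: pass
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, docker compose up -d holds back the web service until the database reports healthy.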

Why AWS for Containers?

Running Docker on a single machine works for development, but production workloads need:

  • Orchestration — scheduling containers across multiple hosts
  • Auto-scaling — adding or removing containers based on demand
  • Load balancing — distributing traffic across container instances
  • Service discovery — containers finding and communicating with each other
  • High availability — restarting failed containers and distributing across availability zones
  • Security — network isolation, IAM integration, secrets management
  • Monitoring — centralised logging and metrics

AWS provides managed services that handle all of these concerns so you can focus on your application code rather than infrastructure management.


The AWS Container Ecosystem

AWS offers several container services, each targeting a different level of abstraction:

| Service | What It Does |
| ------- | ------------ |
| Amazon ECR | A managed container image registry — your private Docker Hub on AWS |
| Amazon ECS | A fully managed container orchestration service — runs and manages your containers |
| AWS Fargate | A serverless compute engine for ECS (and EKS) — no EC2 instances to manage |
| Amazon EKS | Managed Kubernetes — for teams already invested in the Kubernetes ecosystem |

Over the remaining lessons, we will explore each of these in depth.


Best Practices to Carry Forward

As you move into AWS container services, keep these Docker best practices in mind:

  • Use minimal base images (e.g. alpine, distroless) to reduce attack surface and image size
  • Use multi-stage builds to keep build tools and dev dependencies out of production images
  • Pin image versions — never use the latest tag in production
  • Run one process per container — this keeps containers simple and composable
  • Log to stdout/stderr — AWS container services integrate with CloudWatch Logs when you follow this pattern
  • Define health checks — in your Dockerfile or task definition, so orchestrators can monitor container health
  • Don't store secrets in images — inject them at runtime via environment variables, AWS Secrets Manager, or SSM Parameter Store
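Several of these practices can be combined in a single Dockerfile — a minimal sketch building on the Node example earlier in this lesson, assuming the project has a build script that emits a dist/ directory and the app exposes a /health endpoint (both names are illustrative):

```dockerfile
# Build stage: full toolchain, including dev dependencies
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build                # assumes a "build" script in package.json

# Runtime stage: only what production needs
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
# wget is available via busybox in alpine images
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
```

The final image contains neither the compiler toolchain nor dev dependencies, and the HEALTHCHECK lets an orchestrator restart the container if the app stops responding.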

Summary

  • Containers package an application with its dependencies into a portable, isolated unit.
  • Docker provides the tooling to build, run, and share container images.
  • Images are built from Dockerfiles in layers, each of which is cached for efficiency.
  • Volumes provide persistent storage; Docker networking connects containers together.
  • Docker Compose orchestrates multi-container applications locally.
  • Production workloads require orchestration, scaling, and high availability — which is exactly what AWS container services provide.
  • In the next lesson, we will start with Amazon ECR, the managed registry where you will store your container images.