Running containers in production requires a security-first mindset at every layer — from the images you build, through the runtime environment, to the network and access controls around your workloads. This lesson covers security and operational best practices for running containers on AWS.
Container security is not a single concern — it spans multiple layers:
+--------------------------------------------------+
| Application Code                                  |
|--------------------------------------------------|
| Container Image                                   |
|--------------------------------------------------|
| Container Runtime                                 |
|--------------------------------------------------|
| Host / Compute                                    |
|--------------------------------------------------|
| Network                                           |
|--------------------------------------------------|
| Access Control (IAM)                              |
|--------------------------------------------------|
| Data Protection                                   |
+--------------------------------------------------+
Let's address each layer.
The fewer packages in your base image, the smaller the attack surface:
| Base Image | Size | Packages |
|---|---|---|
| ubuntu:22.04 | ~77 MB | Hundreds of packages |
| alpine:3.19 | ~7 MB | Minimal BusyBox utilities |
| gcr.io/distroless/static | ~2 MB | No shell, no package manager |
Recommendation: Use alpine or distroless base images for production workloads.
Keep build tools out of your production image:
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Remove devDependencies so only runtime packages are copied into the production stage
RUN npm prune --omit=dev
# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
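Building and running the result locally (the image tag my-app is just an example):
# Build the image; both stages are executed, but only the final stage ends up in the tag
docker build -t my-app .
# Run it, mapping the exposed port to the host
docker run --rm -p 3000:3000 my-app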
Never run containers as the root user:
# Create a non-root user (Alpine syntax; Debian-based images use groupadd/useradd)
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser
USER appuser
Pin your base images to an exact version so builds are reproducible:
# Bad — unpredictable
FROM node:latest
# Good — reproducible
FROM node:20.11.1-alpine3.19
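For fully reproducible pulls you can additionally pin the image digest; the digest below is a placeholder you would look up for the tag, for example with docker buildx imagetools inspect:
# Best (immutable): a digest identifies exactly one image, even if the tag is re-pushed
FROM node:20.11.1-alpine3.19@sha256:<digest-for-this-tag>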
Use ECR image scanning to detect known vulnerabilities:
# Enable scan on push
aws ecr put-image-scanning-configuration \
    --repository-name my-app \
    --image-scanning-configuration scanOnPush=true
For enhanced scanning, backed by Amazon Inspector, switch the registry scan type to ENHANCED:
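A minimal sketch; the continuous scan frequency and the wildcard repository filter are just one possible rule set:
# Switch the registry from basic to enhanced (Inspector-backed) scanning
aws ecr put-registry-scanning-configuration \
    --scan-type ENHANCED \
    --rules '[{"scanFrequency":"CONTINUOUS_SCAN","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'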
Use Sigstore or AWS Signer to sign container images and verify them before deployment. This prevents tampered images from running in your cluster.
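As a concrete example with cosign (the Sigstore CLI), assuming you have generated a cosign key pair and already pushed the image; the repository URI is illustrative:
# Sign the pushed image with your private key
cosign sign --key cosign.key 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:1.2.3
# Verify the signature (e.g. in a deploy pipeline) before the image runs
cosign verify --key cosign.pub 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:1.2.3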
Prevent containers from writing to their filesystem:
"containerDefinitions": [
{
"name": "app",
"readonlyRootFilesystem": true,
"mountPoints": [
{ "sourceVolume": "tmp", "containerPath": "/tmp" }
]
}
],
"volumes": [
{ "name": "tmp" }
]
This blocks attackers from dropping malicious files into the container filesystem. Use volumes for any directories that need write access (e.g. /tmp).
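On EKS the same control lives in the container's securityContext, with an emptyDir volume for any paths that must stay writable; a sketch (image name illustrative):
containers:
  - name: app
    image: my-app:1.2.3
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
      - name: tmp
        mountPath: /tmp
volumes:
  - name: tmp
    emptyDir: {}   # writable scratch space, removed when the Pod is deleted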
By default, containers run with a set of Linux capabilities. Drop all unnecessary capabilities:
"linuxParameters": {
"capabilities": {
"drop": ["ALL"],
"add": ["NET_BIND_SERVICE"]
}
}
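The Kubernetes equivalent is set in the container's securityContext:
securityContext:
  capabilities:
    drop: ["ALL"]                # start from zero capabilities
    add: ["NET_BIND_SERVICE"]    # only needed if the app binds a port below 1024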
Set CPU and memory limits to prevent a single container from consuming all available resources:
For ECS task definitions:
"cpu": 256,
"memory": 512,
"memoryReservation": 256
For EKS Pod specs:
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
Implement health check endpoints in your application and configure them in your task definition or Kubernetes deployment:
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
}
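Note that the curl-based command assumes curl is installed in the image, which Alpine and distroless images usually omit. On EKS you would use a liveness or readiness probe instead; an httpGet probe needs no extra tooling inside the container. A sketch against the same /health endpoint:
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 60   # roughly equivalent to startPeriod
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3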
Every container should have only the permissions it needs — nothing more.
| Role | Used By | Purpose |
|---|---|---|
| Execution role | ECS agent | Pull images from ECR, send logs to CloudWatch, retrieve secrets |
| Task role | Application code | Access AWS services (S3, DynamoDB, SQS, etc.) |
A task role policy should grant only the specific actions and resources the application needs, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
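Both roles are referenced at the top level of the ECS task definition; the role names below are placeholders:
"taskRoleArn": "arn:aws:iam::123456789012:role/my-app-task-role",
"executionRoleArn": "arn:aws:iam::123456789012:role/my-app-execution-role"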
IRSA (IAM Roles for Service Accounts) maps Kubernetes service accounts to IAM roles, giving individual Pods fine-grained AWS permissions:
eksctl create iamserviceaccount \
    --name my-app-sa \
    --namespace default \
    --cluster my-cluster \
    --attach-policy-arn arn:aws:iam::123456789012:policy/MyAppPolicy \
    --approve
Then reference the service account in your Pod spec:
spec:
  serviceAccountName: my-app-sa
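Behind the scenes, eksctl annotates the service account with the ARN of the role it created; the resulting object looks roughly like this (role name illustrative):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: default
  annotations:
    # This annotation is what ties Pods using the service account to the IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-irsa-role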
Never bake secrets into container images or pass them as plain-text environment variables in task definitions.
Use AWS Secrets Manager to store and rotate secrets securely:
aws secretsmanager create-secret \
    --name my-app/database-url \
    --secret-string "postgres://user:pass@host:5432/db"
Reference in ECS task definitions:
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:eu-west-2:123456789012:secret:my-app/database-url"
}
]
For less sensitive configuration, use SSM Parameter Store:
aws ssm put-parameter \
    --name "/my-app/api-endpoint" \
    --type SecureString \
    --value "https://api.example.com"
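Parameters are referenced from an ECS task definition the same way as Secrets Manager secrets, via the parameter ARN in valueFrom (the environment variable name is your choice):
"secrets": [
  {
    "name": "API_ENDPOINT",
    "valueFrom": "arn:aws:ssm:eu-west-2:123456789012:parameter/my-app/api-endpoint"
  }
]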
For EKS, use the AWS Secrets Manager CSI driver to mount secrets as files:
volumeMounts:
  - name: secrets
    mountPath: /mnt/secrets
    readOnly: true
volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: my-secret-provider
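The secretProviderClass name points at a SecretProviderClass object you define separately; a minimal sketch for the secret created earlier, assuming the AWS provider for the CSI driver (ASCP) is installed in the cluster:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-secret-provider
spec:
  provider: aws
  parameters:
    # Each object names a Secrets Manager secret (or SSM parameter) to mount
    objects: |
      - objectName: "my-app/database-url"
        objectType: "secretsmanager"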
Run workloads in private subnets, expose them only through a load balancer in public subnets, and keep data stores in isolated subnets:
VPC (10.0.0.0/16)
├── Public Subnets
│   └── ALB, NAT Gateway
├── Private Subnets
│   └── ECS Tasks / EKS Pods
└── Isolated Subnets
    └── RDS, ElastiCache