Networking is one of the most critical — and most complex — aspects of running Kubernetes in production. AKS offers multiple networking models, service types, and ingress options. This lesson covers how traffic flows in and out of your cluster, how pods communicate, and how to expose services to the internet securely.
AKS supports two primary network plugins, Azure CNI and kubenet, with Azure CNI available in both traditional and overlay modes.
With traditional Azure CNI, every pod gets an IP address from the Azure virtual network subnet. Pods are first-class citizens on the VNet.
| Aspect | Detail |
|---|---|
| Pod IPs | Assigned from the VNet subnet |
| Pod-to-VM communication | Direct — pods and VMs are on the same network |
| IP consumption | High — requires enough IPs for all pods + nodes |
| Performance | Best — no overlay encapsulation |
| Best for | Production clusters that need VNet integration |
```bash
az aks create \
  --resource-group rg-aks \
  --name my-aks-cluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/.../subnets/aks-subnet \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10
```
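To see why traditional Azure CNI is IP-hungry, a quick back-of-the-envelope calculation helps. Each node reserves IPs for its full pod allocation up front, plus one IP for the node itself. The node count and max-pods value below are illustrative (30 is the Azure CNI default):

```python
# Rough VNet IP budget for traditional Azure CNI (illustrative numbers).
# Each node consumes 1 IP for itself plus one IP per pod it can schedule,
# reserved up front regardless of how many pods are actually running.

def required_ips(nodes: int, max_pods_per_node: int) -> int:
    """IPs consumed from the VNet subnet by nodes and their pod allocations."""
    return nodes * (1 + max_pods_per_node)

# A 10-node cluster with the Azure CNI default of 30 pods per node:
print(required_ips(10, 30))  # 310
```

310 IPs will not fit in a /24 (Azure reserves 5 addresses per subnet, leaving 251 usable), so a subnet of /23 or larger is needed even for this modest cluster. With Azure CNI Overlay the same cluster needs only 10 VNet IPs, one per node.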
With Azure CNI Overlay, pods get IPs from a private overlay network (a CIDR you define); only node IPs come from the VNet subnet. This dramatically reduces IP consumption.
```bash
az aks create \
  --resource-group rg-aks \
  --name my-aks-cluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```
| Aspect | Azure CNI | Azure CNI Overlay |
|---|---|---|
| Pod IP source | VNet subnet | Overlay CIDR |
| VNet IP consumption | High | Low (nodes only) |
| Pod-to-VNet routing | Direct | NAT through node |
| Max pods per node | Limited by subnet | Up to 250 |
Kubenet, the legacy plugin, assigns pod IPs from a separate CIDR and relies on Azure route tables for pod routing. It is not recommended for new clusters; use Azure CNI Overlay instead.
Services provide stable network endpoints for pods:
ClusterIP, the default service type, is accessible only within the cluster and is used for internal service-to-service communication.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```
Other pods access this service at `backend-api.default.svc.cluster.local`, or simply `backend-api` within the same namespace.
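A consumer only needs that DNS name. As a sketch, a hypothetical client Deployment (all names and the image here are illustrative, not part of the lesson's manifests) could be pointed at the service through an environment variable:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-client          # hypothetical client workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-client
  template:
    metadata:
      labels:
        app: web-client
    spec:
      containers:
        - name: web-client
          image: nginx:1.27  # placeholder image
          env:
            # The short name resolves within the same namespace; use the
            # FQDN backend-api.default.svc.cluster.local across namespaces.
            - name: BACKEND_URL
              value: "http://backend-api"
```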
A LoadBalancer service creates an Azure Load Balancer with a public (or internal) IP:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"  # internal LB
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 443
      targetPort: 8443
```
NodePort exposes the service on a static port on every node. It is rarely used directly; LoadBalancer and Ingress are preferred.
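For completeness, a minimal NodePort manifest looks like the ClusterIP example with one extra field. The port values below are illustrative; if `nodePort` is omitted, Kubernetes allocates one from the 30000-32767 range automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # optional; must fall within 30000-32767
```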
An Ingress controller manages HTTP/HTTPS routing for multiple services behind a single external IP address. Instead of one LoadBalancer per service, you have one LoadBalancer for the Ingress controller, which routes traffic based on hostname and path rules.
The most popular option is the NGINX Ingress Controller:
```bash
# Install via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress \
  --create-namespace
```
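With the controller running, routing rules are declared as Ingress resources. As a sketch, the following fans traffic for one hostname out to the two services used earlier in this lesson (the hostname is an assumption, and the service ports match the `backend-api` and `frontend` manifests above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress          # illustrative name
spec:
  ingressClassName: nginx    # matches the controller installed via Helm
  rules:
    - host: app.example.com  # placeholder hostname
      http:
        paths:
          - path: /api       # /api/* goes to the internal backend service
            pathType: Prefix
            backend:
              service:
                name: backend-api
                port:
                  number: 80
          - path: /          # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 443
```

Both services can now sit behind the controller's single load balancer IP, with the more specific `/api` prefix taking precedence over `/`.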