The previous lesson introduced the architecture of Azure Event Hubs. This lesson goes deeper into three critical areas: how partitions affect throughput and ordering, how consumer groups enable independent processing, and how Event Hubs Capture provides automatic archival to Azure Storage. Mastering these concepts is essential for building efficient, scalable streaming pipelines.
Partitions are the core scalability mechanism in Event Hubs. Every event published to an event hub is appended to exactly one partition.
The number of partitions is set when you create the event hub and cannot be changed after creation (on Standard and Basic tiers). On Premium and Dedicated tiers, partition count can be increased but not decreased.
Rules of thumb for choosing partition count:

- Match the partition count to the maximum number of parallel consumers you expect; within a consumer group, each partition can be read by at most one active consumer.
- On the Standard tier, adding partitions does not add throughput — throughput units are shared across all partitions.
- Because the count can never be decreased, start modest and grow only on tiers that allow increases.
Events are assigned to partitions in one of three ways:
| Method | How It Works | Use Case |
|---|---|---|
| Round-robin | No partition key; events are distributed evenly | Maximum throughput, no ordering needed |
| Partition key | Hash of the key determines the partition | Ordering within a key (e.g., all events from one device) |
| Explicit partition ID | Sender specifies the exact partition | Rarely used; reduces flexibility |
Events are ordered within a partition but not across partitions. If you need ordering for related events, use a consistent partition key:
```javascript
const { EventHubProducerClient } = require('@azure/event-hubs');

// connectionString and eventHubName come from your configuration
const producer = new EventHubProducerClient(connectionString, eventHubName);

// All events from device-01 go to the same partition
const batch = await producer.createBatch({ partitionKey: 'device-01' });
batch.tryAdd({ body: { timestamp: 1, reading: 22.5 } });
batch.tryAdd({ body: { timestamp: 2, reading: 22.7 } });
await producer.sendBatch(batch);
```
Each partition has throughput limits per tier:
| Tier | Ingress per TU/PU | Egress per TU/PU |
|---|---|---|
| Standard | 1 MB/s or 1,000 events/s (per TU, shared across partitions) | 2 MB/s (per TU) |
| Premium | 1 MB/s per PU per partition | 2 MB/s per PU per partition |
For Standard tier, throughput units are shared across all partitions. If you have 4 partitions and 1 TU, the aggregate ingress is still 1 MB/s.
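Those limits translate directly into a sizing calculation. The helper below is an illustrative sketch (not part of any SDK) using the Standard-tier ingress limits from the table above:

```javascript
// Estimate Standard-tier throughput units (TUs) needed for a workload.
// Each TU allows 1 MB/s OR 1,000 events/s of ingress, whichever is hit first.
// (Egress at 2 MB/s per TU would need a similar check for read-heavy workloads.)
function requiredTUs(ingressMBps, eventsPerSec) {
  const byBytes = Math.ceil(ingressMBps / 1);     // 1 MB/s per TU
  const byCount = Math.ceil(eventsPerSec / 1000); // 1,000 events/s per TU
  return Math.max(byBytes, byCount);
}

console.log(requiredTUs(3.5, 2500)); // 4 — the byte rate is the binding limit
```

Note that the answer is the same whether the hub has 1 partition or 32: on Standard, TUs are a namespace-level budget, not a per-partition one.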
A hot partition occurs when one partition receives disproportionately more traffic than others, usually because of a skewed partition key. Monitor partition-level metrics and choose partition keys with high cardinality to distribute load evenly.
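One way to spot skew before it becomes a hot partition is to measure what share of sampled traffic the single hottest key receives. This is an illustrative helper, not an SDK feature:

```javascript
// Fraction of sampled events carried by the hottest partition key.
// Values near 1/numKeys suggest even load; values near 1.0 indicate skew.
function hottestKeyShare(partitionKeys) {
  const counts = new Map();
  for (const key of partitionKeys) {
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return Math.max(...counts.values()) / partitionKeys.length;
}

// One chatty device dominating the sample signals a likely hot partition
const sample = ['device-01', 'device-01', 'device-01', 'device-01', 'device-02', 'device-03'];
console.log(hottestKeyShare(sample)); // 4 of 6 events share one key
```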
Consumer groups enable multiple independent consumers to read the same event stream without interference.
Each consumer group maintains its own offset per partition — a pointer to the last event it read:
```
Partition 0: [ e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 ]
                           ^        ^
                           |        |
                analytics: 5        alerting: 8
```
The analytics consumer group is at offset 5; the alerting consumer group is at offset 8. They read independently.
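This independent-offset behavior can be modeled in a few lines. The snippet below is a toy in-memory model for illustration, not the SDK's actual checkpointing API:

```javascript
// Toy model: one partition's event log with a separate read pointer per group
const partition = ['e1', 'e2', 'e3', 'e4', 'e5', 'e6', 'e7', 'e8', 'e9', 'e10'];
const offsets = { analytics: 0, alerting: 0 };

function read(group, count) {
  const events = partition.slice(offsets[group], offsets[group] + count);
  offsets[group] += events.length; // only this group's pointer advances
  return events;
}

read('analytics', 5); // analytics consumes e1..e5
read('alerting', 8);  // alerting consumes e1..e8, unaffected by analytics
console.log(offsets); // { analytics: 5, alerting: 8 }
```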
Consumer group limits by tier:

| Tier | Max Consumer Groups |
|---|---|
| Basic | 1 ($Default only) |
| Standard | 20 |
| Premium | 100 |
| Dedicated | 1,000 |
Common patterns for consumer groups:

- One consumer group per downstream application (for example, separate groups for analytics, alerting, and archival), so each reads at its own pace.
- Reserve $Default for ad-hoc inspection and debugging rather than production consumers.
- Never share one consumer group between unrelated applications; they would compete for partitions and overwrite each other's checkpoints.