Amazon ElastiCache is a fully managed in-memory data store and cache service. It supports two popular open-source engines — Redis and Memcached — and delivers sub-millisecond latency for real-time applications. ElastiCache sits between your application and your database, dramatically reducing read load and improving response times.
Databases are optimised for durability, not speed. Even a fast database query takes 1–10 milliseconds. For data that is read frequently but changes infrequently, an in-memory cache reduces latency to microseconds and lowers the load (and cost) on your primary database.
| Metric | Without Cache | With ElastiCache |
|---|---|---|
| Read latency | 1–10 ms | 50–500 μs |
| Database load | High | Reduced by 80–90% |
| Cost at scale | Higher DB instance size | Smaller DB + cache nodes |
| Application complexity | Lower | Slightly higher (cache invalidation logic) |

| Feature | Redis | Memcached |
|---|---|---|
| Data structures | Strings, lists, sets, sorted sets, hashes, bitmaps, HyperLogLog, streams | Simple key-value (strings) |
| Persistence | Yes (RDB snapshots, AOF logs) | No |
| Replication | Yes (primary + up to 5 replicas) | No |
| Multi-AZ failover | Yes (automatic) | No |
| Pub/Sub | Yes | No |
| Lua scripting | Yes | No |
| Threading model | Single-threaded command execution (I/O threads in Redis 6+) | Multi-threaded |
| Max item size | 512 MB | 1 MB (default), up to 50 MB |
| Cluster mode | Yes (data sharding across nodes) | Yes (client-side sharding) |
| Use case | Complex data, pub/sub, leaderboards, session store | Simple caching, multi-threaded scaling |
Recommendation: Choose Redis unless you have a specific reason to prefer Memcached (e.g., you need multi-threaded scaling for simple key-value caching and don't need persistence or replication).
Cluster Mode Enabled — 3 Shards

```text
Shard 1: Primary (AZ-a) → Replica (AZ-b)
Shard 2: Primary (AZ-b) → Replica (AZ-c)
Shard 3: Primary (AZ-c) → Replica (AZ-a)

Hash slots:     0–5460  → Shard 1
             5461–10922 → Shard 2
            10923–16383 → Shard 3
```
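The key-to-shard mapping above can be reproduced in a few lines: Redis Cluster hashes each key with CRC16 (XMODEM variant) and takes the result modulo 16384 to pick a hash slot. A minimal sketch in pure Python (no Redis client needed; function names are illustrative):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16 (XMODEM variant, polynomial 0x1021), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots.

    Redis Cluster supports "hash tags": if the key contains a {...}
    section with non-empty content, only that substring is hashed, so
    related keys can be forced onto the same slot (and shard).
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Keys sharing a `{...}` hash tag (e.g. `user:{42}:profile` and `user:{42}:cart`) land on the same slot, which is how multi-key operations remain possible in cluster mode.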
This strategy is called lazy loading (cache-aside). The application checks the cache first. On a cache miss, it reads from the database, stores the result in the cache, and returns it.
```text
App → Cache (miss) → Database → write to cache → return to app
App → Cache (hit)  → return to app
```
Pros: only requested data is cached, so the cache stays relatively small. Cons: a cache miss incurs extra latency (three round trips), and cached data can go stale if the database changes before the entry expires.
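This read path, often called lazy loading or cache-aside, can be sketched with plain dictionaries standing in for the cache and the database. In production the cache calls would be a Redis client's GET/SETEX and the dictionary lookup a real query; the names here are illustrative:

```python
import time

cache = {}                              # stand-in for Redis: key -> (value, expires_at)
database = {"user:1": {"name": "Ada"}}  # stand-in for the primary database
TTL_SECONDS = 300                       # a TTL caps how stale a cached entry can get

def get(key):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                 # cache hit: one fast round trip
    value = database.get(key)           # cache miss: fall through to the DB
    if value is not None:
        # populate the cache so the next read is a hit
        cache[key] = (value, time.time() + TTL_SECONDS)
    return value
```

Note the staleness trade-off: if the database row changes, readers keep seeing the cached copy until the TTL expires.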
With the write-through strategy, every write to the database is also written to the cache.
```text
App → write to Cache AND Database simultaneously
```
Pros: Cache is always up to date; no stale reads. Cons: Write penalty (two writes per operation); unused data may fill the cache.
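A write-through sketch under the same stand-in assumptions (dicts in place of a Redis client and a DB driver; in a real service these would be two separate network calls):

```python
cache = {}
database = {}

def put(key, value):
    database[key] = value   # durable write first
    cache[key] = value      # then keep the cache in sync

def get(key):
    if key in cache:
        return cache[key]   # always fresh: every write also updated the cache
    return database.get(key)
```

Ordering is a design choice: writing the database first means a crash between the two writes leaves the cache merely stale (recoverable via a TTL) rather than the database missing acknowledged data.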
With the write-behind (write-back) strategy, writes go to the cache first; the caching layer asynchronously flushes changes to the database.
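ElastiCache does not implement this flushing for you; write-behind is typically built at the application layer. It can be sketched with a background thread draining a queue of dirty entries (all names illustrative; a production version needs batching, retries, and crash-safety that this sketch omits):

```python
import queue
import threading

cache = {}
database = {}
dirty = queue.Queue()        # changes waiting to be persisted

def put(key, value):
    cache[key] = value       # fast path: write only the cache
    dirty.put((key, value))  # enqueue the change for async persistence

def flusher():
    # Background worker: drain the queue into the database.
    while True:
        key, value = dirty.get()
        database[key] = value
        dirty.task_done()

threading.Thread(target=flusher, daemon=True).start()
```

The pro is that writes complete at cache speed; the con is a durability window: data acknowledged to the client exists only in memory until the flusher persists it, so a node failure before the flush loses those writes.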