
Redis vs Memcached: Which Cache Should You Use?

A hands-on comparison of Redis and Memcached covering performance benchmarks, data structures, persistence, managed service pricing, and when to pick each cache for your workload.

Abhishek Patel · 13 min read


The Cache Decision That Shapes Your Architecture

Every backend developer eventually faces the same question: Redis or Memcached? Both are in-memory key-value stores. Both are fast. Both have been battle-tested for over a decade in production at massive scale. But they're fundamentally different tools with different philosophies, and picking the wrong one creates technical debt that's expensive to unwind.

I've run both in production since 2014 -- from single-node Memcached powering a PHP app to 60-node Redis Cluster handling 2 million ops/sec for a fintech platform. The right choice depends on your data model, persistence requirements, and operational complexity tolerance. Here's how to decide.

What Is In-Memory Caching?

Definition: In-memory caching stores frequently accessed data in RAM instead of reading it from disk-based databases on every request. By keeping hot data in memory, caches reduce response times from milliseconds (database round-trip) to microseconds (memory read), and dramatically lower load on your primary datastore. Redis and Memcached are the two dominant open-source in-memory caching systems.

Caching isn't optional at scale. A PostgreSQL query that takes 15ms under light load might take 200ms at 10,000 concurrent connections. Put a cache in front of it, and 95% of those requests return in under 1ms. The question isn't whether to cache -- it's which cache fits your workload.

Redis vs Memcached: Feature Comparison

This table covers the differences that actually matter in production. I've left out trivia and focused on what drives architectural decisions.

| Feature | Redis 7.4 (2024) | Memcached 1.6.x |
| --- | --- | --- |
| Data structures | Strings, hashes, lists, sets, sorted sets, streams, bitmaps, HyperLogLog, geospatial indexes | Strings only (key-value blobs) |
| Max value size | 512 MB | 1 MB (default, configurable) |
| Persistence | RDB snapshots, AOF log, or both | None -- pure cache |
| Replication | Built-in primary-replica replication | None (client-side sharding only) |
| Clustering | Redis Cluster (automatic sharding across nodes) | Client-side consistent hashing |
| Threading model | Single-threaded command execution, multi-threaded I/O (since 6.0) | Multi-threaded from the start |
| Pub/Sub | Built-in pub/sub and Streams | Not available |
| Lua scripting | Full Lua scripting, Functions API (7.0+) | Not available |
| TTL granularity | Per-key, millisecond precision | Per-key, second precision |
| Memory efficiency | Higher overhead per key (metadata, pointers) | Slab allocator -- more memory-efficient for uniform-size values |
| Protocol | RESP2 and RESP3 (RESP3 since 6.0) | ASCII and binary protocols |
| TLS | Native TLS support (since 6.0) | TLS support (since 1.5.13) |

The feature gap is enormous. Redis is a data structure server that can act as a cache. Memcached is a cache, period. That simplicity is either a limitation or a feature, depending on your needs.

Performance: Raw Throughput Numbers

Performance benchmarks are tricky because both systems are so fast that the bottleneck is almost always the network, not the cache. That said, here are realistic numbers from my own benchmarking on c6g.xlarge instances (4 vCPU, 8 GB RAM) running Amazon Linux 2023:

| Operation | Redis 7.4 | Memcached 1.6.29 |
| --- | --- | --- |
| GET (1 KB values) | ~310,000 ops/sec | ~350,000 ops/sec |
| SET (1 KB values) | ~290,000 ops/sec | ~330,000 ops/sec |
| GET (pipeline 50) | ~1.8M ops/sec | N/A (no pipelining) |
| Multi-GET (100 keys) | ~85,000 batches/sec | ~95,000 batches/sec |
| P99 latency | ~0.3ms | ~0.2ms |

Memcached wins on raw single-key throughput by 10-15% thanks to its multi-threaded architecture. It fully uses all CPU cores for request processing, while Redis processes commands on a single thread (though Redis 6+ offloads I/O to threads). For simple GET/SET workloads with uniform value sizes, Memcached squeezes out more ops per dollar.

But Redis's pipelining capability changes the math. Pipelining batches multiple commands into a single round-trip, which pushes Redis well past Memcached on aggregate throughput. If your application can batch operations, Redis is faster in practice.

Pro tip: Enable Redis's io-threads configuration on machines with 4+ cores. Set io-threads 4 and io-threads-do-reads yes in redis.conf. This offloads socket reads and writes to background threads and typically improves throughput by 50-100% for network-bound workloads. It's been stable since Redis 6.2.

Code Examples: Redis vs Memcached Commands

Here's how common caching operations differ between the two systems.

Basic Key-Value Operations

# Redis -- SET and GET with TTL
redis-cli SET user:1001:profile '{"name":"Alice","plan":"pro"}' EX 3600
redis-cli GET user:1001:profile

# Memcached -- set and get with TTL. The command and data block must arrive in
# one session, and the byte count must match the payload (29 bytes here).
printf 'set user:1001:profile 0 3600 29\r\n{"name":"Alice","plan":"pro"}\r\n' | nc localhost 11211
printf 'get user:1001:profile\r\n' | nc localhost 11211

Atomic Counters

# Redis -- atomic increment with expiry
redis-cli SET rate:api:192.168.1.1 0 EX 60
redis-cli INCR rate:api:192.168.1.1

# Memcached -- atomic increment (must initialize the counter first; the set
# command and its data block are sent together in one session)
printf 'set rate:api:192.168.1.1 0 60 1\r\n0\r\n' | nc localhost 11211
printf 'incr rate:api:192.168.1.1 1\r\n' | nc localhost 11211

Data Structures (Redis Only)

# Sorted set for leaderboard
redis-cli ZADD leaderboard 9500 "player:42"
redis-cli ZADD leaderboard 8700 "player:17"
redis-cli ZADD leaderboard 9800 "player:99"
redis-cli ZREVRANGE leaderboard 0 9 WITHSCORES

# Hash for structured objects (no serialization needed)
redis-cli HSET user:1001 name "Alice" plan "pro" logins 47
redis-cli HINCRBY user:1001 logins 1
redis-cli HGET user:1001 plan

# Stream for event log
redis-cli XADD events '*' type "purchase" user_id "1001" amount "49.99"
redis-cli XADD events '*' type "signup" user_id "1002" source "organic"
redis-cli XRANGE events - + COUNT 10

This is the fundamental divide. Memcached gives you a fast key-value store. Redis gives you a programmable data structure server. If you need leaderboards, queues, pub/sub, or session storage with field-level access, Redis is the only choice. If you're caching serialized JSON blobs, Memcached is simpler and slightly faster.

Application-Level Caching (Node.js)

// Redis with ioredis
import Redis from 'ioredis';
const redis = new Redis({ host: 'cache.internal', port: 6379 });

// Cache-aside pattern
async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);

  const user = await db.users.findUnique({ where: { id } });
  await redis.set(`user:${id}`, JSON.stringify(user), 'EX', 3600);
  return user;
}

// Memcached with memjs
import memjs from 'memjs';
const mc = memjs.Client.create('cache.internal:11211');

async function getUser(id: string) {
  const { value } = await mc.get(`user:${id}`);
  if (value) return JSON.parse(value.toString());

  const user = await db.users.findUnique({ where: { id } });
  await mc.set(`user:${id}`, JSON.stringify(user), { expires: 3600 });
  return user;
}

When to Choose Redis

Redis is the right choice for the majority of new projects. Here's when it's clearly better:

  1. You need data structures beyond strings -- sorted sets for rankings, lists for queues, hashes for structured objects, streams for event logs. Memcached can't do any of this.
  2. You need persistence -- Redis can persist data to disk using RDB snapshots (point-in-time) or AOF (append-only file for durability). This makes Redis viable as a primary datastore for certain use cases, not just a cache.
  3. You need pub/sub or messaging -- Redis Pub/Sub and Streams replace the need for a separate message broker in many architectures.
  4. You need atomic transactions -- Redis MULTI/EXEC and Lua scripting let you execute complex operations atomically. Rate limiters, distributed locks, and inventory systems depend on this.
  5. You need replication and high availability -- Redis Sentinel provides automatic failover. Redis Cluster provides automatic sharding with replication. Memcached has neither.

When to Choose Memcached

  1. Simple cache-aside with uniform value sizes -- if you're only caching serialized objects (HTML fragments, query results, API responses), Memcached's slab allocator is more memory-efficient and its multi-threaded architecture delivers higher throughput per node.
  2. You want minimal operational complexity -- Memcached has no persistence, no replication, no clustering logic. It's a hash table in RAM. There's nothing to configure, nothing to monitor beyond memory usage. This simplicity is valuable.
  3. You're scaling horizontally with consistent hashing -- Memcached nodes are truly independent. Adding or removing nodes is trivial. There's no cluster state, no rebalancing, no split-brain risk. Your client library handles the distribution.
  4. Memory efficiency is critical -- for workloads with millions of small keys, Memcached uses less memory per key than Redis. Redis stores additional metadata (type info, encoding, LRU data) for every key.

Warning: Don't pick Memcached "for simplicity" and then bolt on a separate message queue, a separate session store, and a separate pub/sub system. You'll end up operating four services instead of one Redis instance that handles all of it. Simplicity means fewer moving parts in total, not fewer features per component.

Managed Service Pricing (2026)

Running your own cache nodes is cheap but operationally expensive. Here's what the managed options cost for a production-grade setup (2 nodes, high availability, 13 GB combined memory):

| Service | Engine | Configuration | Monthly Cost |
| --- | --- | --- | --- |
| Amazon ElastiCache (Redis OSS) | Redis 7.1 | 2x cache.r7g.large (13 GB) | ~$390 |
| Amazon ElastiCache (Memcached) | Memcached 1.6 | 2x cache.r7g.large (13 GB) | ~$360 |
| Amazon MemoryDB | Redis-compatible | 2x db.r7g.large (13 GB) | ~$490 |
| Amazon ElastiCache Serverless | Redis/Memcached | Pay-per-use (est. moderate load) | ~$200-$600 |
| Google Memorystore (Redis) | Redis 7.2 | Standard tier, 13 GB | ~$410 |
| Google Memorystore (Memcached) | Memcached 1.6 | 2x 6.5 GB nodes | ~$320 |
| Azure Cache for Redis | Redis 6.0 | Premium P2 (13 GB) | ~$490 |
| Redis Cloud (Redis Ltd) | Redis 7.4 | Pro, 12 GB, multi-AZ | ~$370 |
| Upstash Redis | Redis-compatible | Pay-per-request, 10 GB | ~$60-$200 |
| Aiven for Redis | Redis 7.2 | Business-4, 13 GB | ~$380 |

Amazon MemoryDB is the most expensive but provides the strongest durability guarantee -- it's a Redis-compatible database with multi-AZ transaction log replication, not just a cache. If you need a durable Redis-compatible store, MemoryDB is worth the premium over ElastiCache. Upstash is compelling for low-to-medium traffic workloads because you pay per request, which can be dramatically cheaper than provisioning reserved nodes.

Pro tip: If you're on AWS, use ElastiCache Serverless for development and staging environments. You pay only for what you use, and there's no node management. For production, reserved nodes with 1-year commitment cut ElastiCache costs by 30-40%.

How to Set Up Redis Caching (Step by Step)

Here's a practical setup for a production Redis cache on a Linux server.

Step 1: Install Redis 7.4

# Ubuntu/Debian
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis-server

Step 2: Configure for Production

# /etc/redis/redis.conf -- key settings
maxmemory 4gb
maxmemory-policy allkeys-lru
io-threads 4
io-threads-do-reads yes
bind 10.0.0.5
protected-mode yes
requirepass YOUR_STRONG_PASSWORD
rename-command FLUSHALL ""
rename-command FLUSHDB ""

Step 3: Enable Persistence (Optional)

# RDB snapshot every 60 seconds if 1000+ keys changed
save 60 1000
# AOF for stronger durability
appendonly yes
appendfsync everysec

Step 4: Set Up Monitoring

# Key metrics to track
redis-cli INFO memory | grep used_memory_human
redis-cli INFO stats | grep keyspace_hits
redis-cli INFO stats | grep keyspace_misses
redis-cli INFO clients | grep connected_clients

# Calculate hit rate
# hit_rate = keyspace_hits / (keyspace_hits + keyspace_misses)
# Target: > 95% for a healthy cache

Step 5: Implement Cache-Aside in Your Application

The cache-aside pattern (also called lazy loading) is the most common caching strategy. Your application checks the cache first, falls back to the database on a miss, and populates the cache for subsequent requests. Set appropriate TTLs based on how stale the data can be -- 60 seconds for user profiles, 300 seconds for product catalogs, 3600 seconds for static configuration.

Frequently Asked Questions

Is Redis faster than Memcached?

For simple GET/SET operations with small values, Memcached is 10-15% faster per node due to its multi-threaded architecture. But Redis's pipelining, data structures, and Lua scripting often make it faster at the application level because you can do more work per round-trip. In practice, both deliver sub-millisecond latency, and the network is the bottleneck long before either engine hits its limit.

Can Redis replace Memcached as a drop-in swap?

For basic key-value caching, yes. Redis supports all the same operations -- GET, SET, DELETE, increment, TTL. You'll need to change your client library and connection code, but the caching logic stays the same. Many teams migrate from Memcached to Redis specifically to gain access to data structures and persistence without changing their caching patterns.

Does Memcached support persistence?

No, and that's by design. Memcached is a pure cache -- when it restarts, all data is gone. This is actually a feature for strict caching use cases because it eliminates the operational complexity of persistence (disk I/O, backup management, recovery procedures). If your cache is truly disposable and rebuilds from the primary database, Memcached's lack of persistence is a simplification, not a limitation.

Should I use Redis as my primary database?

For specific use cases, yes. Session stores, real-time leaderboards, rate limiters, job queues, and feature flags work well as Redis-primary workloads, especially with AOF persistence and replication. For general-purpose application data, no -- use PostgreSQL or another ACID-compliant database as your source of truth and Redis as a cache or secondary store. Amazon MemoryDB blurs this line by providing Redis compatibility with multi-AZ durability, making it viable as a primary store for more workloads.

How much memory do I need for my cache?

Start with the working set -- the data your application accesses frequently. For most web applications, 10-20% of your total database size covers the hot data. A 100 GB PostgreSQL database typically needs 10-20 GB of cache. Monitor your cache hit rate after deployment: if it's below 90%, you either need more memory or your access patterns aren't cache-friendly. Redis's INFO memory command shows exact usage and fragmentation ratio.

What happens when Redis runs out of memory?

It depends on your maxmemory-policy setting. The most common policy is allkeys-lru, which evicts the least recently used keys to make room for new ones. Other options include volatile-lru (only evict keys with TTL set), allkeys-lfu (evict least frequently used), and noeviction (return errors on writes when full). For caching workloads, allkeys-lru is almost always the right choice. Memcached uses LRU eviction per slab class by default.

Can I run both Redis and Memcached together?

You can, and some large-scale systems do. Facebook famously uses Memcached for simple key-value caching (profile data, feed fragments) and Redis for features requiring data structures (counters, rate limits). But for most teams, running two caching systems doubles operational complexity for marginal benefit. Pick one. If in doubt, pick Redis -- it handles both workloads well enough, and you avoid managing two different systems.

The Verdict: Redis Wins for Most Teams

If you're starting a new project in 2026, choose Redis. Its data structure support, persistence options, clustering, and ecosystem are unmatched. The slight performance advantage Memcached holds for simple GET/SET workloads doesn't justify giving up Redis's versatility. You'll almost certainly need sorted sets, pub/sub, or Lua scripting eventually, and migrating caches mid-project is painful.

Choose Memcached only if you have a large-scale, simple caching workload where memory efficiency is critical and you genuinely don't need data structures, persistence, or replication. That's a narrower use case than most teams think. Redis is the safer, more flexible default -- and with io-threads enabled in Redis 7.x, the raw performance gap has narrowed to the point where it rarely matters.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
