# Why NATS SubscribeSync Uses 512 KiB Per Subscription
A customer reported that NATS SubscribeSync() uses 0.5 MiB per subscription – 166x more than Redis SUBSCRIBE. For 10,000 user-specific subscriptions, that’s 5 GiB of memory just for subscription state.
The claim holds up empirically. Three approaches reduce memory by 94-99% with minimal code changes.
## The Claim
The customer had a server subscribing to per-user NATS subjects using conn.SubscribeSync(). Each subscription consumed ~512 KiB. The equivalent Redis pattern used ~3 KiB per subscription.
Both numbers are correct. The difference is architectural, not a bug.
## Why 512 KiB
NATS `SubscribeSync()` pre-allocates a 64k-slot Go channel buffer per subscription:[^1]

```go
// In SubscribeSync():
mch := make(chan *Msg, 65536) // 64k slots x 8 bytes per pointer = 512 KiB
```
This buffer exists for a reason: it provides high-performance, non-blocking message delivery. If a subscriber is slow, messages queue in the buffer rather than blocking the publisher or being dropped immediately. Each subscription is independently buffered, which means slow processing on one subscription doesn’t affect others.
Redis takes the opposite approach: minimal per-subscription state (~3 KiB hash table entry), no client-side buffering, immediate delivery to the socket. If the subscriber is slow, messages are lost or the client is disconnected.
The trade-off: NATS spends memory to protect against message loss during processing delays. Redis spends nothing and accepts the consequences.
## Measured Results
I tested three approaches with 1,000 subscriptions each:
| Approach | Memory/Sub | 10k Subs | Reduction |
|---|---|---|---|
| Default `SubscribeSync()` | 512 KiB | ~5 GiB | baseline |
| `SyncQueueLen(4096)` | 32 KiB | ~330 MiB | 94% |
| Async `Subscribe()` | ~1.5 KiB | ~15 MiB | 99.7% |
## The Fixes

### One-line fix: reduce the buffer
```go
nc, _ := nats.Connect(nats.DefaultURL,
	nats.SyncQueueLen(4096), // 4k slots instead of 64k
)
sub, _ := nc.SubscribeSync("subject")
defer sub.Unsubscribe()
// 32 KiB per subscription instead of 512 KiB
```
This trades buffer depth for memory. With 4,096 slots instead of 65,536, you can still absorb bursts of several thousand messages per subscription before back-pressure kicks in. For most workloads this is more than sufficient.
### Better fix: use async subscriptions
```go
nc, _ := nats.Connect(nats.DefaultURL)
nc.Subscribe("user."+userID, func(msg *nats.Msg) {
	processMessage(msg)
})
// ~1.5 KiB per subscription
```
Async subscriptions share the connection’s internal dispatch mechanism rather than allocating a per-subscription channel. The memory reduction is dramatic: 99.7% compared to default SubscribeSync().
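The shared-dispatch idea can be sketched as one channel feeding a map of handlers: each subscription costs only its map entry and closure, not a private 64k-slot buffer. This is a deliberate simplification of the concept, not nats.go's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

type msg struct {
	subject string
	data    string
}

// dispatcher fans messages from one shared channel out to per-subject
// handlers; a subscription is just a map entry.
type dispatcher struct {
	mu       sync.Mutex
	handlers map[string]func(msg)
	ch       chan msg
}

func newDispatcher() *dispatcher {
	d := &dispatcher{handlers: map[string]func(msg){}, ch: make(chan msg, 256)}
	go func() {
		for m := range d.ch {
			d.mu.Lock()
			h := d.handlers[m.subject]
			d.mu.Unlock()
			if h != nil {
				h(m)
			}
		}
	}()
	return d
}

func (d *dispatcher) Subscribe(subject string, h func(msg)) {
	d.mu.Lock()
	d.handlers[subject] = h
	d.mu.Unlock()
}

func main() {
	d := newDispatcher()
	done := make(chan string, 1)
	d.Subscribe("user.alice", func(m msg) { done <- m.data })
	d.ch <- msg{subject: "user.alice", data: "hello"}
	fmt.Println(<-done)
}
```

One buffer amortized over all subscriptions is why the per-subscription cost collapses to the handler bookkeeping.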
### Best fix: rethink the subscription model
```go
// Instead of 10,000 per-user subscriptions:
nc.Subscribe("user.*", func(msg *nats.Msg) {
	userID := extractUserID(msg.Subject)
	processMessage(userID, msg)
})
// Single subscription: ~1.5 KiB total
```
If all 10,000 user subjects share the same handler logic, a single wildcard subscription eliminates the problem entirely. Filter by subject in the handler.
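The `extractUserID` helper above is hypothetical. Since the NATS `*` wildcard matches exactly one subject token, for subjects of the form `user.<id>` it can be as simple as:

```go
package main

import (
	"fmt"
	"strings"
)

// extractUserID returns the token after the "user." prefix,
// e.g. "user.alice" -> "alice", or "" if the subject doesn't match.
// (Hypothetical helper; the original post does not show its body.)
func extractUserID(subject string) string {
	parts := strings.SplitN(subject, ".", 2)
	if len(parts) != 2 || parts[0] != "user" {
		return ""
	}
	return parts[1]
}

func main() {
	fmt.Println(extractUserID("user.alice")) // alice
}
```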
## When the default makes sense
The 64k buffer is not waste. It serves workloads where:
- Individual subscriptions receive high-volume bursts
- Processing latency varies and messages need to queue
- Slow consumers need protection from message loss
If your subscriptions are low-volume and high-cardinality (many subjects, few messages each), the default buffer is oversized. Tune it down or use async subscriptions.
## The broader lesson
This is a common pattern when comparing messaging systems: the per-operation resource cost reflects a design decision about where to absorb failure. NATS absorbs slow consumers with client-side buffering. Redis absorbs them by dropping messages. Neither is wrong – they serve different operational models.
The key is knowing which knobs to turn when the defaults don’t fit your workload.
---

[^1]: The `DefaultMaxChanLen` constant (64 * 1024 = 65,536) is defined in nats.go. The `SyncQueueLen` option overrides this per-connection.