
RabbitMQ 4.2 and SQL Filtering
Progress, but Mind the Gotchas
RabbitMQ 4.2 is coming, and Broadcom are understandably excited. Like most releases, it’s packed with tech promises that can be brilliant if you know how to implement them safely. We help teams cut through the marketing and work out what’s production-ready and what isn’t.
The headline feature: server-side SQL filtering for Streams. The broker itself can now filter messages before they hit your consumers. Combined with the existing Bloom filters, it promises huge bandwidth savings and the ability to chew through millions of messages per second without drowning your apps in irrelevant data.
At least, that’s the story. And parts of it are genuinely cool.
Let’s recap the upside before we go contrarian:
- Stage 1: Bloom filters (old but good): fast, cheap, probabilistic filtering at the chunk level. The broker can skip whole blocks of messages it knows don’t match.
- Stage 2: New SQL filters: fine-grained, per-message filtering on the broker using AMQP 1.0 semantics. Write a SQL-ish expression, and RabbitMQ only hands your client what you actually want.
- Together, they can take a 10 million message stream and reduce it to 10 relevant events in seconds. In Broadcom’s demo, combining Bloom + SQL got throughput close to 5 million messages/sec on a laptop.
For use cases like stock tickers or event streams where you only care about a tiny fraction of the feed, this is undeniably attractive.
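The two-stage idea is easy to see in miniature. Here is a minimal sketch in plain Python, with a toy Bloom filter standing in for RabbitMQ's chunk-level filter and an ordinary predicate standing in for the SQL expression. All names here are ours for illustration, not RabbitMQ APIs:

```python
import hashlib

class ToyBloom:
    """Tiny Bloom filter: k hashes over an m-bit field. False positives
    are possible, false negatives are not -- the same property the broker
    relies on when deciding whether a whole chunk can be skipped."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _hashes(self, value):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, value):
        for pos in self._hashes(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        return all(self.bits >> pos & 1 for pos in self._hashes(value))

def consume(chunks, wanted_filter_value, predicate):
    """Stage 1: skip chunks whose Bloom filter rules out the value.
    Stage 2: apply the fine-grained predicate (the role the SQL filter
    plays in 4.2) to each message in the surviving chunks."""
    for bloom, messages in chunks:
        if not bloom.might_contain(wanted_filter_value):
            continue  # whole chunk skipped: no per-message work at all
        yield from (m for m in messages if predicate(m))

def make_chunk(messages):
    bloom = ToyBloom()
    for m in messages:
        bloom.add(m["subject"])
    return bloom, messages

# Two chunks; only the second contains "orders" messages.
chunks = [
    make_chunk([{"subject": "metrics", "price": 1}] * 5),
    make_chunk([{"subject": "orders", "price": p} for p in (50, 150, 200)]),
]

hits = list(consume(chunks, "orders",
                    lambda m: m["subject"] == "orders" and m["price"] > 100))
print(hits)  # the two orders priced above 100
```

The design point to notice: stage 1 is cheap because it never touches individual messages, and stage 2 only runs on chunks that survive it. That is where the bandwidth and throughput wins come from.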
…But…
The marketing gloss leaves out some big practical realities:
- Protocol gap: SQL filters are only supported over AMQP 1.0, not the classic AMQP 0.9.1 that most RabbitMQ environments still run. And crucially, the dedicated Streams protocol, which RabbitMQ itself introduced for high-throughput streaming, does not support SQL filters either (at least today). So if you’re happily consuming Streams over the Streams protocol, this feature is a non-starter unless you re-tool for AMQP 1.0.
- Migration complexity: Mixing AMQP 0.9.1 and AMQP 1.0 safely is possible, but tricky. We’ve helped teams design clean boundaries and migration paths, which is the difference between a smooth rollout and a tangle of broken consumers.
- CPU trade-offs: Broker-side SQL shifts CPU load onto your RabbitMQ nodes. That’s fine, provided you size and tune your clusters correctly. We regularly benchmark and right-size clusters so teams don’t discover too late that their “free” filtering is actually choking throughput.
- Kafka comparisons are oversold: Broadcom are keen to say “we do filtering better than Kafka”, but Kafka deliberately pushes filtering out of the broker, to KSQL or to the consumers, rather than doing it inside. That architecture isn’t perfect, but it’s a conscious performance trade-off. RabbitMQ doing it all in-broker is elegant, but it can also make the broker itself the CPU bottleneck.
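That CPU point is worth making concrete: the broker evaluates the filter expression once per ingested message, so the cost scales with total ingress, not with the fraction your consumer actually receives. A rough, illustrative micro-benchmark in plain Python, with an ordinary predicate standing in for the SQL filter (the filter syntax in the comment is illustrative, not RabbitMQ's exact grammar; absolute timings are meaningless, the linear scaling is the point):

```python
import time

def predicate(msg):
    # Stand-in for a broker-side SQL filter along the lines of
    # "subject = 'orders' AND price > 100" (illustrative syntax only).
    return msg["subject"] == "orders" and msg["price"] > 100

def filter_cost(n_messages):
    """Evaluate the predicate over n_messages synthetic messages.
    Returns (matched_count, elapsed_seconds)."""
    msgs = [{"subject": "orders" if i % 100 == 0 else "metrics",
             "price": i % 500} for i in range(n_messages)]
    start = time.perf_counter()
    matched = sum(1 for m in msgs if predicate(m))
    elapsed = time.perf_counter() - start
    return matched, elapsed

small_matched, small_t = filter_cost(10_000)
large_matched, large_t = filter_cost(100_000)
print(f"10k msgs: {small_t:.4f}s matched={small_matched}; "
      f"100k msgs: {large_t:.4f}s matched={large_matched}")
# A ~1% match rate still costs CPU proportional to total ingress:
# every message was evaluated, matched or not.
```

Scale that thought experiment up to millions of messages per second across every consumer's filter, and it becomes clear why cluster sizing has to be part of the adoption plan.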
Where It Is Useful
If you’re already AMQP 1.0-enabled (some big financial organisations are), or you’re green-fielding a new streaming workload and don’t mind AMQP 1.0, this is a big win. For workloads where you know you’ll discard 99% of the data, Bloom+SQL is cleaner than rolling your own filtering farm downstream.
If you’re deep in classic AMQP 0.9.1 land, though, SQL filtering today is mostly a curiosity, and we don’t expect Broadcom to bring it to 0.9.1. If they ever add SQL filtering to the Streams protocol, it will be a game-changer for the majority of Rabbit users.
Our Take
RabbitMQ 4.2 moves the ball forward and shows the core team is serious about making Streams first-class. But be aware of the fine print before you go rewriting consumers or promising magic performance gains. This is great tech, with conditions.
As always, we recommend:
- Benchmark in your own environment: watch broker CPU and memory when enabling SQL filters.
- Plan protocol strategy early: don’t blindly mix AMQP 0.9.1 and 1.0 without a clear boundary.
- Keep an eye on Streams protocol support: if SQL filters arrive there, adoption calculus changes overnight.
Until then, celebrate the progress, but don’t assume this solves your filtering pain automatically.
“If you’re wondering whether SQL filtering belongs in your stack, or how to avoid protocol dead-ends and CPU bottlenecks, that’s exactly what we advise on. Benchmark before you commit; we’ll show you how.”
Elodié Magnier | RabbitMQ Support Engineer




