Kafka changed the industry by making event streaming practical at scale. Durable logs, ordering, fan-out, and backpressure turned event-driven systems from fragile prototypes into mainstream infrastructure.
Where things get messy is when teams push data processing into the streaming platform itself: Kafka Streams, ksqlDB, broker-side transforms. It starts as convenience and ends as operational coupling. Not because engineers are doing it wrong, but because the streaming layer and the processing layer solve different problems.
The evidence is public: vendor documentation, Kafka KIPs, and real migration stories.
1. Replay-based state recovery does not age well
Kafka Streams restores state by replaying changelog topics. Simple mechanism, but recovery time grows with state size.
"Kafka Streams restore[s] the corresponding state store by replaying the changelog topic."
Confluent: Stateful fault tolerance in Kafka Streams
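The mechanism can be sketched as a toy model (illustrative, not the Kafka Streams API): a key-value store is rebuilt by re-applying every changelog record from the beginning, so restore work scales with the retained changelog, not with the size of the final state.

```python
# Toy model of changelog-based state restoration (illustrative only,
# not the Kafka Streams API). Each changelog record is a (key, value)
# upsert; a record with value=None is a tombstone that deletes the key.

def restore_from_changelog(changelog):
    """Rebuild a key-value state store by replaying every record."""
    store = {}
    for key, value in changelog:
        if value is None:
            store.pop(key, None)   # tombstone: remove the key
        else:
            store[key] = value     # upsert: latest value wins
    return store

changelog = [("a", 1), ("b", 2), ("a", 3), ("b", None)]
state = restore_from_changelog(changelog)
# Restore cost is O(len(changelog)), even though the final state is tiny.
```

Compaction shortens the changelog but does not change the shape of the problem: recovery time still grows with the amount of history that must be replayed.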
This becomes a production problem when state is large and failures are not hypothetical.
"Kafka Streams lacks a checkpointing mechanism for quick restoration after total system failures, leading to long recovery times."
Volt Active Data: Top 3 Kafka Streams Challenges
Processing engines like Apache Flink instead use checkpoint-based recovery: state is periodically snapshotted to durable storage, and a restarted job resumes from the latest checkpoint rather than replaying history from the start. Published benchmarks routinely show recovery in seconds even with large state.
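The contrast can be sketched with the same toy model (a hypothetical sketch, not Flink's actual API): restore the last snapshot, then replay only the records past it, so recovery work is bounded by the checkpoint interval rather than by total history.

```python
# Toy model of checkpoint-based recovery (illustrative, not Flink's API).
# `snapshot` is the state as of `snapshot_offset` in the changelog; only
# the tail written after the snapshot needs to be replayed.

def restore_from_checkpoint(snapshot, snapshot_offset, changelog):
    """Start from the snapshot, then replay only the records past it."""
    store = dict(snapshot)
    for key, value in changelog[snapshot_offset:]:
        if value is None:
            store.pop(key, None)   # tombstone
        else:
            store[key] = value     # upsert
    return store
```

With frequent snapshots the replayed tail stays short no matter how large the state grows, which is why checkpoint-based engines recover in roughly constant time.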
2. Exactly-once semantics are narrower than most assume
Kafka's exactly-once guarantee is scoped to read-process-write cycles within Kafka; the read and process phases on their own are still at-least-once.
"Using transactions enables Exactly Once Semantics (EOS) ... (The read and process have at least once semantics)."
Spring Kafka: Exactly Once Semantics
Once your pipeline writes to databases, calls APIs, or touches external systems, you need idempotency and deduplication. Kafka transactions do not extend across system boundaries.
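A common mitigation is consumer-side deduplication: key every external write by a stable event ID and skip IDs that have already been applied. The sketch below is illustrative; `applied_ids` stands in for a persistent store (for example, a unique-keyed table) that survives restarts.

```python
# Sketch of idempotent external writes under at-least-once delivery.
# Names are illustrative; `applied_ids` must be durable in practice
# and updated atomically with the external write.

def apply_once(event, applied_ids, sink):
    """Perform the external side effect at most once per event ID."""
    event_id = event["id"]
    if event_id in applied_ids:
        return False                   # duplicate delivery: skip
    sink.append(event["payload"])      # the external write
    applied_ids.add(event_id)          # record it with the write
    return True

applied, sink = set(), []
events = [{"id": "e1", "payload": 10},
          {"id": "e1", "payload": 10}]  # same event redelivered
for e in events:
    apply_once(e, applied, sink)
# sink holds a single write despite two deliveries
```

The atomicity between the write and the dedup record is the hard part in real systems, which is exactly why this responsibility cannot be delegated to Kafka transactions.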
The scalability side is documented too. Before Kafka 2.5, exactly-once required one transactional producer per input partition.
"The simplest solution is to create a separate producer for every input partition ... This architecture does not scale well as the number of input partitions increases."
Apache Kafka KIP-447
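The scaling difference is easy to quantify (a simple model of the numbers, not actual Kafka code): before KIP-447, transactional producers scaled with input partitions; with consumer-group-aware fencing, one producer per consumer instance is enough.

```python
# Model of transactional producer counts before and after KIP-447
# (illustrative arithmetic, not Kafka client code).

def producers_needed(input_partitions, consumer_instances, kip_447):
    """Transactional producers required for exactly-once processing."""
    if kip_447:
        return consumer_instances   # one producer per processing instance
    return input_partitions         # one producer per input partition

# A 10-instance consumer group over 1,000 input partitions:
old = producers_needed(1000, 10, kip_447=False)   # 1,000 producers
new = producers_needed(1000, 10, kip_447=True)    # 10 producers
```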
3. ksqlDB migration stories show the operational limits
ksqlDB made stream processing approachable through SQL. The hard part is running it for the long term, when schema changes, resource contention, and pipeline lifecycle management become daily concerns.
Riskified published their migration story: schema evolution that required dropping and recreating streams, resource isolation issues on shared clusters, and the path to managed Flink.
"ksqlDB's approach to schema evolution didn't automatically incorporate newly added fields."
AWS Big Data Blog: Riskified's journey to Flink (May 2025)
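The failure mode is easy to illustrate (a Python stand-in for a stream's fixed projection, not ksqlDB itself): a stream declared over a set of fields keeps projecting only those fields, so a field added upstream later is silently dropped until the stream is dropped and recreated.

```python
# Stand-in for a stream with a schema fixed at creation time
# (illustrative, not ksqlDB). Field names here are hypothetical.

DECLARED_FIELDS = ("order_id", "amount")   # schema when the stream was created

def project(record):
    """Emit only the fields declared when the stream was created."""
    return {f: record[f] for f in DECLARED_FIELDS if f in record}

# A producer later starts sending `currency`; the projection drops it.
out = project({"order_id": "o1", "amount": 5, "currency": "EUR"})
# out == {"order_id": "o1", "amount": 5} — the new field never reaches consumers
```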
The same story highlights isolation as a platform feature rather than an application responsibility.
"Managed Flink provides true job isolation by running each streaming application in its dedicated cluster."
AWS Big Data Blog
4. Vendors draw boundaries around broker-side processing
Redpanda's Data Transforms documentation is explicit about what broker-side processing should and should not do:
"Transforms have no external access to disk or network resources ... Only single record transforms is supported ... For aggregations, joins, or complex transformations, consider using ... Apache Flink ... Up to 8 output topics are supported ... Transforms have at-least-once delivery."
Redpanda docs: Data Transforms
Confluent made a similar acknowledgment by acquiring Immerok to build a cloud-native Flink offering.
"Confluent signed a definitive agreement to acquire Immerok to accelerate the development of a cloud native Apache Flink offering."
Confluent press release (Jan 2023)
What this means in practice
Streaming platforms are built for durable transport: logs, ordering, fan-out, backpressure. They are not stateful compute engines with fast recovery, checkpoint coordination, and workload isolation.
When transport and processing share a platform, their scaling and recovery are coupled too: you cannot scale processing without scaling brokers, and compute costs get buried in messaging infrastructure.
The architecture that holds up under growth separates these concerns:
- Kafka, AutoMQ, or KafScale for transport
- Apache Flink for stateful stream processing
- Apache Wayang when pipelines need to run across multiple execution backends
"Apache Wayang aims at decoupling the business logic of data analytics applications from concrete data processing platforms, such as Apache Flink or Apache Spark."
Apache Wayang
Lightweight transformations inside the streaming layer still make sense: filtering, format normalization, simple enrichment. That work belongs close to transport when it stays simple.
Core business logic with state, joins, and external writes does not. That belongs in a processing engine designed to checkpoint state, isolate workloads, and recover predictably.
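That boundary can be made concrete (an illustrative sketch, not any specific transform API): the work that stays close to transport is a pure function from one record to zero or more records, with no state, no external I/O, and no cross-record joins.

```python
# Sketch of the kind of per-record work that belongs near transport
# (illustrative; field names are hypothetical).

def transform(record):
    """Stateless per-record work: filter, normalize, light enrichment."""
    if record.get("type") != "order":                 # filtering
        return []                                     # drop non-order events
    out = dict(record)
    out["country"] = out.get("country", "").upper()   # format normalization
    out["source"] = "checkout"                        # static enrichment
    return [out]
```

Anything that needs to remember earlier records, join two inputs, or call an external system fails this shape test and belongs in the processing engine.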
Where KafScale fits
We built KafScale around this boundary. A streaming platform should focus on transport and durability, running as stateless, schedulable infrastructure rather than embedded compute.
The design keeps brokers free of local state and avoids embedding processing into the broker fleet. Processing belongs in Apache Flink or wherever your workloads need it.
Architecture and docs: kafscale.io
Source: github.com/novatechflow/kafscale
Sources
- Confluent: Stateful fault tolerance in Kafka Streams
- Volt Active Data: Top 3 Kafka Streams Challenges
- Spring Kafka: Exactly Once Semantics
- Apache Kafka KIP-447
- AWS Big Data Blog: Riskified's journey to Flink (May 2025)
- Redpanda docs: Data Transforms
- Confluent: Immerok acquisition (Jan 2023)
- Apache Wayang
If you need help with distributed systems, backend engineering, or data platforms, check my Services.