
Memory consumption in Flume


Memory required by each Source or Sink

The heap memory used by a single event is dominated by the data in the event body, with some incremental usage from any headers. In general, a source or a sink will allocate roughly the size of the event body plus about 100 bytes for headers (this varies with the headers added along the way, e.g. by the txn agent). To estimate the total memory used by a single batch, multiply your average (or 90th-percentile) event size, plus some additional buffer, by the maximum batch size.

The memory required by each Sink is the memory needed for a single batch. The memory required by each Source is the memory needed for a single batch, multiplied by the number of clients simultaneously connected to the Source. Keep this in mind, and plan your event delivery according to your expected throughput.
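As a rough sketch, the arithmetic looks like this (all sizes here are illustrative assumptions, not recommendations):

```shell
# Rough per-batch and per-source heap estimate (example assumptions)
EVENT_BODY_BYTES=2048   # 90th-percentile event body size
HEADER_BYTES=100        # rough per-event header overhead
BATCH_SIZE=1000         # configured batch size of the sink/source
CLIENTS=10              # clients connected to the source simultaneously

BATCH_MEM=$(( (EVENT_BODY_BYTES + HEADER_BYTES) * BATCH_SIZE ))
SOURCE_MEM=$(( BATCH_MEM * CLIENTS ))

echo "Per batch (and per sink): ${BATCH_MEM} bytes"   # ~2.1 MB
echo "Per source:               ${SOURCE_MEM} bytes"  # ~21 MB
```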

Memory required by each File Channel

Under normal operation, each File Channel uses some heap memory and some direct memory. Give each File Channel roughly 30MB of heap memory for basic operational overhead. Each File Channel also needs an amount of direct memory roughly equal to 1MB + (capacity of channel * 8) bytes, because Flume stores the pending updates in a hash map.
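The direct-memory formula can be sketched as follows (the capacity is a placeholder; substitute your own channel configuration):

```shell
# Direct memory estimate for one file channel (capacity is an assumption)
CAPACITY=1000000                               # channel capacity in events
DIRECT_MEM=$(( 1024 * 1024 + CAPACITY * 8 ))   # 1MB + 8 bytes per event
echo "Direct memory: ${DIRECT_MEM} bytes"      # ~8.6 MB, plus ~30MB heap
```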

For fast replay without a checkpoint, the file channel can use up to (channel capacity * 32) bytes of heap memory. The amount of memory actually used depends on the number of events present in the log files being replayed. So, if the file channel holds 100 million events, the replay will require about 3.2 GB of heap memory. In order to enable fast checkpoint-less replay, you must set the configuration option use-fast-replay to true, i.e.:

agent.channels.ch-0.use-fast-replay = true

If that option is not explicitly enabled, then replay without a checkpoint will be slower, but it will use significantly less memory: on the order of normal operation of the file channel as specified above.
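The fast-replay heap requirement from above can be sketched the same way (the event count is the example from the text; the worst case is a full channel):

```shell
# Heap needed for checkpoint-less fast replay (worst case: channel full)
CAPACITY=100000000                        # 100 million events, as above
REPLAY_HEAP=$(( CAPACITY * 32 ))          # ~32 bytes of heap per event
echo "Fast replay heap: ${REPLAY_HEAP} bytes"   # ~3.2 GB
```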
Memory required by the Flume core itself

Add roughly 50MB to the total heap size for the Flume core itself. Finally, adding a healthy buffer to the calculated estimates is recommended. JVM memory usage in production can be monitored and graphed via JMX to get a better understanding of real-world memory allocation behavior under a particular workload; I wrote an article about JMX monitoring in the past.
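Putting the pieces together, a back-of-the-envelope total might look like this (all component counts and sizes are assumptions for illustration):

```shell
# Back-of-the-envelope total heap (all counts/sizes are example assumptions)
MB=$(( 1024 * 1024 ))
CORE=$(( 50 * MB ))              # Flume core
CHANNELS=$(( 2 * 30 * MB ))      # two file channels, ~30MB heap each
BATCH_MEM=2148000                # example per-batch estimate:
                                 # (body + headers) * batch size, ~2.1 MB
SINKS=$(( 2 * BATCH_MEM ))       # two sinks, one batch each
SOURCES=$(( 10 * BATCH_MEM ))    # one source with 10 concurrent clients
TOTAL=$(( CORE + CHANNELS + SINKS + SOURCES ))
echo "Estimated heap: $(( TOTAL / MB )) MB (add a healthy buffer on top)"
```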

Example memory settings

Flume reads its environment variables from flume-env.sh, which is disabled by default (it ships as $FLUME/conf/flume-env.sh.template). Simply rename the template to flume-env.sh and tweak the settings according to the requirements you have calculated. It is also a good idea to allocate the needed memory at startup, instead of letting the heap grow later, to avoid pauses when fresh memory has to be allocated.

An example of a larger memory configuration with GC tuning could look like this:

# sets minimum memory to 2GB, max to 16GB, max direct memory to 256MB
# also uses the parallel new and concurrent garbage collectors to reduce the likelihood of long stop-the-world GC pauses
JAVA_OPTS="-Xms2000m -Xmx16000m -Xss128k -XX:MaxDirectMemorySize=256m
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC"


Posted in the Flume Wiki too
