Reduce Filebeat CPU cost

Hi,

I use Filebeat to read a large volume of logs and then send the data directly to Elasticsearch.

In my case, I found that Filebeat consumes too much CPU. When sending 120,000 events/s, Filebeat used 320% CPU.

So I profiled Filebeat with net/http/pprof and found that a lot of time is spent in JSON serialization and in common.ConvertToGenericEvent(event). For simplicity, I replaced some of the JSON serialization with string concatenation and commented out common.ConvertToGenericEvent(event). After that, Filebeat could send 120,000 events/s using only 150% CPU.
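For reference, wiring up pprof in a Go program only takes a few lines; this is roughly how the profiling setup looks (a simplified sketch, not Filebeat's actual code):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers on the default mux
)

func main() {
	// Expose the profiling endpoints, then inspect a 30s CPU profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the actual workload runs here ...
	select {}
}
```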

Since the above modification isn't a general solution, I wonder if somebody is already working on this?

Right, the JSON serialization and ConvertToGenericEvent are slow because they use a lot of reflection. For now, we need ConvertToGenericEvent in order to support all the community Beats.

We might be able to restrict reflection to ConvertToGenericEvent, though, because after that step we know the events contain only a short list of possible types. If you want to open a pull request with your approach, we can discuss it in more detail.
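To sketch what I mean (illustration only, not the actual libbeat code): reflection would be confined to the normalization step, which folds all values into a short, fixed list of types that later stages can rely on:

```go
package main

import (
	"fmt"
	"reflect"
)

// normalizeValue sketches the "reflection only during normalization" idea:
// fold every numeric kind into int64/float64 once, so downstream code (like
// the encoder) only ever sees a fixed set of types. Illustration only;
// libbeat's actual normalization handles many more cases.
func normalizeValue(v interface{}) interface{} {
	switch rv := reflect.ValueOf(v); rv.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return rv.Int() // always int64 from here on
	case reflect.Uint8, reflect.Uint16, reflect.Uint32:
		return int64(rv.Uint())
	case reflect.Float32, reflect.Float64:
		return rv.Float() // always float64
	case reflect.String:
		return rv.String()
	default:
		return v
	}
}

func main() {
	fmt.Printf("%T %T\n", normalizeValue(int32(7)), normalizeValue(float32(1.5)))
	// prints: int64 float64
}
```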

In general, we postponed this optimization because we want to move away from map[string]interface{} as the type for events and use something that gives us more type safety. That will make this optimization easier and safer.

Yes, because I want my users to be able to just download an official Filebeat build whenever they need it. I think we need an elegant solution.
Would it be reasonable to add a switch for ConvertToGenericEvent? Users who don't need it could simply set it to false.
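Something like this is what I have in mind; the option name and types below are made up for illustration, not existing Filebeat settings:

```go
package main

import "fmt"

// Event stands in for libbeat's common.MapStr.
type Event map[string]interface{}

// Config sketch: a hypothetical "normalize_events" switch (invented name,
// not a real Filebeat option) that lets users opt out of normalization.
type Config struct {
	NormalizeEvents bool
}

// convertToGenericEvent stands in for common.ConvertToGenericEvent; the
// real function normalizes value types using reflection.
func convertToGenericEvent(e Event) Event { return e }

func publish(cfg Config, e Event) Event {
	if cfg.NormalizeEvents {
		// Only pay the reflection cost when the user asks for it.
		e = convertToGenericEvent(e)
	}
	return e
}

func main() {
	e := publish(Config{NormalizeEvents: false}, Event{"message": "hi"})
	fmt.Println(e)
}
```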

For JSON serialization, I have so far replaced the following with string concatenation (see the sketch below):
MarshalJSON() in libbeat/common/datetime.go
bulkMeta in libbeat/outputs/elasticsearch/client.go
but I don't have a better idea for event serialization, because events contain values of many different types.
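Here is roughly what my datetime change looks like (a simplified, self-contained sketch; the layout constant mirrors what libbeat uses, but the actual patch may differ):

```go
package main

import (
	"fmt"
	"time"
)

// TsLayout mirrors the timestamp layout used by libbeat (assumption: the
// exact constant name/value may differ between versions).
const TsLayout = "2006-01-02T15:04:05.000Z"

// Time wraps time.Time the way libbeat/common/datetime.go does.
type Time time.Time

// MarshalJSON builds the quoted timestamp directly instead of going through
// encoding/json and reflection; a sketch of the string-concat replacement,
// not the actual patch.
func (t Time) MarshalJSON() ([]byte, error) {
	buf := make([]byte, 0, len(TsLayout)+2)
	buf = append(buf, '"')
	buf = time.Time(t).UTC().AppendFormat(buf, TsLayout)
	buf = append(buf, '"')
	return buf, nil
}

func main() {
	b, _ := Time(time.Now()).MarshalJSON()
	fmt.Println(string(b))
}
```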

Do you have any ideas, or perhaps a more elegant solution?

We're aware of some potential improvements in event encoding, mostly reducing the need for reflection and thereby the amount of garbage being generated.

The Logstash output already uses a custom JSON encoder, which has been shown to be somewhat faster: https://github.com/elastic/beats/blob/master/libbeat/outputs/logstash/json.go

The Logstash encoder relies on the fact that all events are normalized when entering the pipeline. That is, we know all the potential types an event's values can have.

The solution in the Logstash output is basically a 'hack', but we might consider using it for the other outputs as well.
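The core of that approach looks roughly like this (a sketch, not the actual json.go; note that strconv.Quote is not fully JSON-correct string escaping, so a real encoder has to handle that properly):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// writeValue encodes one normalized event value into buf. The point: after
// normalization, values are assumed to be one of a short list of types, so
// a type switch replaces reflect-based dispatch. The real type set in
// libbeat may differ; this is illustration only.
func writeValue(buf *bytes.Buffer, v interface{}) error {
	switch v := v.(type) {
	case nil:
		buf.WriteString("null")
	case string:
		buf.WriteString(strconv.Quote(v)) // close enough for a sketch
	case bool:
		buf.WriteString(strconv.FormatBool(v))
	case int64:
		buf.WriteString(strconv.FormatInt(v, 10))
	case float64:
		buf.WriteString(strconv.FormatFloat(v, 'g', -1, 64))
	case map[string]interface{}:
		buf.WriteByte('{')
		first := true
		for k, field := range v {
			if !first {
				buf.WriteByte(',')
			}
			first = false
			buf.WriteString(strconv.Quote(k))
			buf.WriteByte(':')
			if err := writeValue(buf, field); err != nil {
				return err
			}
		}
		buf.WriteByte('}')
	default:
		return fmt.Errorf("unexpected type %T after normalization", v)
	}
	return nil
}

func main() {
	var buf bytes.Buffer
	event := map[string]interface{}{"message": "hello", "offset": int64(42)}
	if err := writeValue(&buf, event); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
}
```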

A better solution would try to generate even less garbage (event normalization and event generation in Filebeat currently allocate temporary events).
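For illustration, one common pattern for reducing garbage is reusing encode buffers via sync.Pool (a generic Go technique, not a description of the planned pipeline changes):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
	"sync"
)

// bufPool reuses encode buffers across events instead of allocating a fresh
// buffer (and thus garbage) per event.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func encodeEvent(msg string) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(buf)
	buf.Reset()

	buf.WriteString(`{"message":`)
	buf.WriteString(strconv.Quote(msg))
	buf.WriteByte('}')

	// Copy out, since the buffer is reused after Put.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}

func main() {
	fmt.Println(string(encodeEvent("hello")))
}
```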

Here are benchmarks from some early experiments. BenchmarkXJSONEnc uses an experimental (not yet finished) JSON encoder, which is beaten only by an optimized MessagePack encoder.
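If you want to run this kind of comparison yourself, a Go benchmark along these lines works; customEncode below is just a placeholder, not the experimental encoder itself:

```go
package encoder_test

import (
	"encoding/json"
	"testing"
)

var event = map[string]interface{}{
	"@timestamp": "2017-01-01T00:00:00.000Z",
	"message":    "benchmark me",
	"offset":     int64(42),
}

// Baseline: reflection-based encoding/json.
func BenchmarkStdJSONEnc(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(event); err != nil {
			b.Fatal(err)
		}
	}
}

// customEncode is a stand-in; swap in the encoder under test.
func customEncode(e map[string]interface{}) ([]byte, error) {
	return json.Marshal(e)
}

func BenchmarkXJSONEnc(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := customEncode(event); err != nil {
			b.Fatal(err)
		}
	}
}
```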

But don't expect improvements/changes anytime soon, as there is more groundwork to do in order to optimize/improve the publisher pipeline in Beats. The ultimate goal is to not allocate any temporary objects in Filebeat when no processors are configured.

That sounds great!

If some temporary solution could be provided in the meantime, that would be even better!
