Logstash input (kafka) add_field very slow

We are working towards very fast ingest into Elasticsearch and found an unexpected slow spot. We have LS pulling from kafka, which it can do very fast (5 consumer threads) - over 50k eps. However, if we add a single "add_field" to the kafka input, throughput drops by about 24k eps. This seems like a problem that could be addressed.

This testing is done with no filter statements and output to null. If we leave the add_field out of the kafka input statement, we can add lots of filter conditionals and complex grok statements (parsing up to 16 fields per message) and still outperform the configuration with that single add_field in the input plugin.

If we use "add_field" in the filter section instead, it hardly slows LS down at all.
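For reference, the two configurations being compared look roughly like this (topic name, field name, and values are placeholders, and option names may differ between LS 1.x and later versions):

```
# Slow: add_field inside the kafka input plugin
input {
  kafka {
    topic_id         => "mytopic"
    consumer_threads => 5
    add_field        => { "source_env" => "prod" }
  }
}
output { null {} }

# Fast: same static field added in the filter section instead
input {
  kafka {
    topic_id         => "mytopic"
    consumer_threads => 5
  }
}
filter {
  mutate {
    add_field => { "source_env" => "prod" }
  }
}
output { null {} }
```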

Is there a way we could have the performance hit of "add_field" looked into when used in the input section?

Thanks.

Just out of curiosity, what were the specs of the system you were benchmarking with?

Does the add_field do any %{foo} token replacement? I've seen that being a bottleneck previously.
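To illustrate the distinction being asked about: a static value is copied as-is, while a %{field} reference forces a sprintf-style lookup against each event. A sketch (field names are hypothetical):

```
# Static value - no token replacement needed
add_field => { "env" => "prod" }

# sprintf reference - "%{host}" must be resolved per event,
# which is the kind of interpolation that can become a bottleneck
add_field => { "origin" => "kafka-%{host}" }
```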

Ouch, this is an old thread.

It was a modern Linux box: 16 CPUs, plenty of memory.

But, he is right, it was a long time ago and I don't know if this is still
the case or not. That was with LS 1.x, and 2.x is current.
Thanks.