Packetbeat: Disable sending HTTP request parameters to Elasticsearch

Dear Team,

Could someone guide me on how to configure Packetbeat so that it does not push request (GET/POST) parameters to the Elasticsearch servers? These parameters make the payload huge.

Looking forward to your response.


What's the exact problem with the huge payload? Is the generated document/event too big, or is it the network usage?

At the moment one has to forward to Logstash in order to manipulate events before pushing them to Elasticsearch. With the 5.0 release (currently at alpha3) we're adding some filtering support for selectively removing fields from events; see the docs. In addition, Elasticsearch 5.0 will come with 'ingest node' support for setting up simple processing pipelines on ingest in Elasticsearch.
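To illustrate, a minimal sketch of what such a filter configuration might look like in `packetbeat.yml` with the 5.0 generic filtering. The field names here (`params`, `request`, `response`) are assumptions about where the HTTP parameters and payloads land in your events, so check an actual event before applying, and note that the exact configuration key and syntax may differ between the alpha releases and the final 5.0:

```yaml
# Sketch only: drop the (assumed) parameter and payload fields
# from HTTP transaction events before they are shipped.
processors:
  - drop_fields:
      when:
        equals:
          type: http
      fields: ["params", "request", "response"]
```

The `when` condition restricts the processor to HTTP events so that other protocol events are left untouched.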

Dear Steffen,

The documents are huge, and when that happens the storage grows drastically, on top of the network load.
I understand the data manipulation through Logstash, but I think a configuration option would be the easier route if possible, since it is more of a catch-or-drop decision.
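For what it's worth, the stable Packetbeat releases already expose a few HTTP options that can shrink events somewhat, although they omit or redact payloads rather than dropping the parameter fields entirely. A sketch, assuming the defaults shipped with the 1.x reference config (check the `packetbeat.yml` of your version for the exact option names):

```yaml
# Sketch only: trim HTTP events in stable (1.x) Packetbeat.
protocols:
  http:
    ports: [80, 8080]
    # Do not include the raw request/response payloads in events.
    send_request: false
    send_response: false
    # Redact the values of sensitive GET/POST parameters.
    hide_keywords: ["pass", "password", "passwd"]
```

This does not remove the parameter fields themselves, but it keeps raw bodies out of the events and redacts sensitive parameter values.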


Perhaps filters could help you here (some are only available in 5.0.0):

I think the generic filtering in Beats would help then (currently available in 5.0-alpha3). See the links posted by @ruflin and me.

I understand, but migrating will call for an impact analysis against the earlier version.
Has such an impact analysis been written down by someone, or is it again PoC-driven?


Generic filtering is quite a new feature. We're working on getting some automated benchmarks in general, but nothing will be available in the near future. Still, I'd expect filter performance to depend heavily on the actual event size and the conditions being applied.

If you don't use Logstash as the output, it seems the only way to modify fields in the current stable version is to change the source code and recompile it. That worked for me.

This topic was automatically closed after 21 days. New replies are no longer allowed.