Logstash HTTP Filter sending once per node

I currently have a Logstash pipeline with a JDBC input, an HTTP filter (to send data externally), and an Elasticsearch output. There are also two Logstash nodes running.

Everything is working as expected, except that an HTTP request is sent once per node, so each event triggers two external calls. The output would also index twice, but we are overwriting the document_id so only one document ends up in Elasticsearch.
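For context, the output-side deduplication looks roughly like this (a sketch only — the field names `id` and `updated_at`, the index name, and the hosts are placeholders, not the actual config):

```
filter {
  # Derive a deterministic ID from the source row so both nodes
  # produce the same document_id and overwrite the same document.
  fingerprint {
    source => ["id", "updated_at"]
    concatenate_sources => true
    method => "SHA256"
    target => "[@metadata][doc_id]"
  }
}

output {
  elasticsearch {
    hosts       => ["https://es.internal:9200"]
    index       => "my-index"
    document_id => "%{[@metadata][doc_id]}"
  }
}
```

This works for the Elasticsearch output because writes with the same ID are idempotent; the HTTP filter has no equivalent, which is the problem.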

My question is: is it possible, or what is the best practice, to ensure the HTTP filter fires only once per event?

I have tried various settings and workarounds with no luck. I'm hoping I am missing something short of running the pipeline on only a single node.


At least in theory you could configure the two Logstash instances so that the HTTP filters record their results in some shared external data store, and check that store before making the HTTP call, but I suspect that would be more expensive than just letting each node make the call.
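A rough sketch of that idea, using a ruby filter and Redis as the shared store — entirely hypothetical: it assumes the `redis` gem is available to Logstash's JRuby, that a `fingerprint` field already identifies the event, and the endpoint URL is a placeholder:

```
filter {
  ruby {
    init => "require 'redis'; @redis = Redis.new(host: 'redis.internal')"
    code => "
      # SET NX atomically claims the event; only the first node succeeds.
      key = 'http-sent:' + event.get('fingerprint').to_s
      claimed = @redis.set(key, '1', nx: true, ex: 3600)
      event.tag('http_already_sent') unless claimed
    "
  }
  if "http_already_sent" not in [tags] {
    http {
      url  => "https://receiver.example.com/ingest"
      verb => "POST"
    }
  }
}
```

As noted, this adds a network round trip per event, plus a new point of failure, so it may well cost more than the duplicate calls it saves.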

Sounds expensive considering the number of events that will be passing through! Unfortunately, we don't control the other side where the HTTP requests are heading, and I'm not sure they would appreciate receiving double the volume of logs.

It seems odd that the point of scaling Logstash is to ensure availability, yet the main option here is to run the pipeline on a single node. What happens if that node goes down?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.