Adding multiple unique ids to elapsed filter


(JC Miguel) #1

What I'd like to do is find out whether my 'merRequest' logs have a matching 'merResponse'. From the results, I want to be able to view each log's 'invoice_id', 'm_id' and 'm_name' so that I can create visualizations involving those three fields in Kibana.

I was wondering if there was a way in which I could add multiple unique_ids to my elapsed filter in my Logstash pipeline file. Below is what I've currently written.

elapsed {
	start_tag => "merRequest"
	end_tag => "merResponse"
	unique_id_field => "invoice_id"
}

However, what I'd like is something along the lines of what's written below.

elapsed {
	start_tag => "merRequest"
	end_tag => "merResponse"
	unique_id_field => "invoice_id", "m_id", "m_name"
}
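One workaround I've been considering (just a sketch, since the elapsed filter only accepts a single unique_id_field): build a composite key with a mutate filter before the elapsed filter runs. The 'transaction_key' field name below is something I've made up for illustration.

mutate {
	# assumed: all three fields exist on both the request and the response events
	add_field => { "transaction_key" => "%{invoice_id}_%{m_id}_%{m_name}" }
}
elapsed {
	start_tag => "merRequest"
	end_tag => "merResponse"
	unique_id_field => "transaction_key"
}

This would only pair events correctly if all three fields are present on both the 'merRequest' and 'merResponse' events; if any field is missing, the %{...} reference stays unresolved and the keys won't line up.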

Alternatively, it would help if someone could show me how to add the 'elapsed_expired_error' tag to the start log.


(Christian Dahlqvist) #2

What is the high level problem you are trying to solve?


(JC Miguel) #3

My apologies in advance if I've misinterpreted your question.

What I'm trying to do is find the information ('invoice_id', 'merchant_id', 'merchant_name') of dropped transactions. In my case, this refers to the 'merRequest' logs without a 'merResponse' log.

By using the elapsed filter, I'm currently only able to identify the 'invoice_id', which is the elapsed filter's unique_id_field.


(Christian Dahlqvist) #4

The reason I am asking is that this can sometimes be difficult or inefficient to do in the ingest pipeline, as these filters require all related events to pass through the same thread, which means it does not scale or perform very well. It can also be sensitive to Logstash restarts, as the data is held solely in memory. Sometimes it may be more efficient to periodically search for this once the data has been indexed into Elasticsearch and process it as a batch job.
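As a rough sketch of that batch approach (the index name 'transactions' and the use of the 'tags' field to distinguish event types are assumptions based on your description, and 'invoice_id' would need a keyword mapping to aggregate on), you could aggregate on invoice_id and keep only the buckets that never saw a 'merResponse':

GET transactions/_search
{
  "size": 0,
  "aggs": {
    "by_invoice": {
      "terms": { "field": "invoice_id", "size": 10000 },
      "aggs": {
        "responses": {
          "filter": { "term": { "tags": "merResponse" } }
        },
        "merchant_info": {
          "top_hits": { "size": 1, "_source": ["m_id", "m_name"] }
        },
        "dropped_only": {
          "bucket_selector": {
            "buckets_path": { "responseCount": "responses._count" },
            "script": "params.responseCount == 0"
          }
        }
      }
    }
  }
}

The surviving buckets would correspond to dropped transactions, with the top_hits sub-aggregation carrying the merchant fields you want for visualization.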


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.