Hello!
Here is my input config:
input {
  file {
    path => "C:\Users\GAUTSCPI\Documents\Elasticsearch\Logs\CDR-SBC\prod/*"
    start_position => "end"
    tags => ["cdr"]
    codec => plain {
      charset => "ISO-8859-1"
    }
  }
}
I set start_position to "end". If I am right, this means that Logstash will read my file from the bottom to the top.
Then, in the filter section, I use the aggregate filter:
aggregate {
  task_id => "%{ID appel1}"
  code => "
    map['Session number'] ||= 0
    map['Session number'] += 1
    event.set('Order of treatment', map['Session number'])
    if map['Session number'] == 1
      map['ID appel'] = event.get('Ingress Session ID')
      event.set('ID appel', map['ID appel'])
    else
      event.set('ID appel', map['ID appel'])
    end
  "
  push_map_as_event_on_timeout => false
  timeout_task_id_field => "ID appel"
  timeout_tags => ['_aggregatetimeout']
}
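To make the intent of the code block clearer, here is a minimal standalone Ruby sketch of the same counter logic, outside Logstash (the event hashes and session IDs are illustrative, not from my real data):

```ruby
# Simulate the aggregate filter: one map per task_id ("ID appel1"),
# counting the order in which events with the same id arrive.
maps = Hash.new { |h, k| h[k] = {} }

events = [
  { 'ID appel1' => 'A', 'Ingress Session ID' => 's1' },
  { 'ID appel1' => 'A', 'Ingress Session ID' => 's2' },
  { 'ID appel1' => 'B', 'Ingress Session ID' => 's3' },
]

events.each do |event|
  map = maps[event['ID appel1']]
  map['Session number'] ||= 0
  map['Session number'] += 1
  event['Order of treatment'] = map['Session number']
  # The first event of a task seeds 'ID appel'; later ones reuse it.
  map['ID appel'] = event['Ingress Session ID'] if map['Session number'] == 1
  event['ID appel'] = map['ID appel']
end
```

So the two 'A' events should come out with Order of treatment 1 and 2, and both should carry the Ingress Session ID of the first 'A' event.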
The field "Order of treatment" tells me which event reached the aggregate filter first, which came second, and so on. Because I set start_position to "end", the lowest event in the file should have "Order of treatment" set to 1, the next event with the same "ID appel1" should have "Order of treatment" set to 2, etc.
The problem is that sometimes this is the case and sometimes not: the lowest event is not always the first one through the aggregate filter.
Is this a parallelism problem? I thought there was only one thread working per file, so a higher line can't be processed before a lower one.
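In case it matters: the aggregate filter documentation says it only behaves deterministically with a single pipeline worker, since with several workers events sharing a task_id can reach the filter out of order. A sketch of how I understand that setting (in logstash.yml, or equivalently `-w 1` on the command line):

```yaml
# logstash.yml -- force a single filter worker thread so all events
# for a given task_id pass through the aggregate filter in order
pipeline.workers: 1
```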
I use Logstash 6.3.0 and the file input plugin 4.1.5.