I'm ingesting a log file with Filebeat and sending it to Elasticsearch through Logstash. My log file has the following pattern:
Timestamp - TransactionId - Header (Username)
Timestamp - TransactionId - Events (This line may repeat multiple times)
Timestamp - TransactionId - Footer
The timestamp doesn't include milliseconds, and sometimes the same timestamp repeats within a transaction. I would like to make sure the order I see in Kibana matches the order in the file.
Today I use the aggregate filter to distribute fields from my header to all log entries of a transaction. I've tried to create a field called "milliseconds" in the aggregation and increment it for each subsequent line that gets matched. My intention was to add this field to the timestamp later.
if [sync_type] == "start" {
  aggregate {
    task_id => "%{transaction_id}"
    code => "map['milliseconds'] = 0"
    map_action => "create"
  }
} else if [sync_type] == "end" {
  aggregate {
    task_id => "%{transaction_id}"
    code => "event.set('milliseconds', map['milliseconds'] + 1)"
    map_action => "update"
    end_of_task => true
    timeout => 120
  }
} else {
  aggregate {
    task_id => "%{transaction_id}"
    code => "event.set('milliseconds', map['milliseconds'] + 1)"
    map_action => "update"
  }
}
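The counting logic above can be simulated in plain Ruby (a minimal sketch: the aggregate map is mocked as a plain Hash outside of Logstash, and the names are illustrative, not Logstash internals):

```ruby
# Mock of the map created by map_action => "create".
map = { 'milliseconds' => 0 }

seen = []
3.times do
  # Mirrors event.set('milliseconds', map['milliseconds'] + 1):
  # the sum is computed and stored on the event, but map['milliseconds']
  # itself is never written back, so it stays 0 on every iteration.
  seen << map['milliseconds'] + 1
end

seen  # => [1, 1, 1]
```

This reproduces the behavior I'm seeing: every event reads 0 from the map and gets 0 + 1 = 1.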
It didn't work: every line ends up with the field "milliseconds" set to 1.
Any idea of how I can make this work?
Thanks.