How to combine 2 log entries to show in one line of Data Table based on a common field?

Hi Team,

My application log files are structured so that one entry has a 'Start Time' timestamp (with 'End Time' as NA) and a second entry has an 'End Time' timestamp (with 'Start Time' as NA). The two entries share a unique ID (transaction ID), and both are loaded into Elasticsearch using Logstash (with the required grok pattern).

Is it possible to combine them based on the common field and show them in a Data Table with both the Start Time and End Time fields filled?

For example (columns and log entries):
TransactionID | Method | Start Time | End Time

100 | ReadDatabase | Mon Feb 24 16:46:42 IST 2020 | NA
100 | ReadDatabase | NA | Mon Feb 24 16:47:44 IST 2020

I would like to show the data in the Data Table by combining both of them based on Transaction ID:

TransactionID | Method | Start Time | End Time

100 | ReadDatabase | Mon Feb 24 16:46:42 IST 2020 | Mon Feb 24 16:47:44 IST 2020

Any help will be highly appreciated. Thanks.

Hello @venkata.kodapaka

Have you tried aggregating based on TransactionID?

Hi @mattkime,

I couldn't figure out how to do that. Could you please help me with this?

Hey @venkata.kodapaka

I'd like to refer you to these posts on StackOverflow

And also this post on discuss (Logstash channel)

Usually, in such cases, you would want to aggregate your data in Logstash, rather than try to associate the two separate events in Kibana.
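As a rough sketch (the field names here are just placeholders for whatever your grok pattern extracts), the aggregate filter approach looks something like:

```
filter {
  # First event of the pair: remember its StartTime, keyed by the transaction ID
  if [EndTime] == "NA" {
    aggregate {
      task_id    => "%{TransactionID}"
      code       => "map['start'] = event.get('StartTime')"
      map_action => "create"
    }
  }
  # Second event of the pair: copy the remembered StartTime onto it
  if [StartTime] == "NA" {
    aggregate {
      task_id     => "%{TransactionID}"
      code        => "event.set('StartTime', map['start'])"
      map_action  => "update"
      end_of_task => true
      timeout     => 120
    }
  }
}
```

One thing to keep in mind: the aggregate filter depends on events for the same task_id being processed in order, so Logstash should run with a single worker (`pipeline.workers: 1`); otherwise some transactions may not merge reliably.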

Hi @Liza_Katz,

Thanks for the links. I've gone through all of them and tried to use the aggregate filter, but it seems to work for some log entries and not for others.

Below is the filter configuration I created for Logstash to load the log file data using aggregate.

filter {
  grok {
    match => { "message" => "%{GREEDYDATA:PreText} : For_Request_Dashboard-%{WORD:TransactionId}\|%{WORD:Class}\|%{WORD:Method}\|%{USERNAME:User}\|%{HOSTNAME:Host}\|%{DATA:StartTime}\|%{DATA:EndTime}\|%{WORD:IsError}\|%{NUMBER:TotalTimeTakenInMilliseconds:int}" }
    remove_field => [ "message" ]
  }
  if [EndTime] == "NA" {
    aggregate {
      task_id    => "%{TransactionId}"
      code       => "map['StartTimeNew'] = event.get('StartTime')"
      map_action => "create"
    }
  }
  if [StartTime] == "NA" {
    aggregate {
      task_id     => "%{TransactionId}"
      code        => "event.set('StartTime', map['StartTimeNew'])"
      map_action  => "update"
      end_of_task => true
      timeout     => 120
    }
  }
}
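For reference, the merge I intend the two aggregate blocks to perform can be sketched in plain Ruby (hand-made events, with a plain hash standing in for the filter's per-task_id map):

```ruby
# The first event (EndTime == "NA") stashes its StartTime under the
# transaction ID; the second event (StartTime == "NA") reads it back,
# so the closing event ends up carrying both timestamps.
events = [
  { "TransactionId" => "100", "StartTime" => "Mon Feb 24 16:46:42 IST 2020", "EndTime" => "NA" },
  { "TransactionId" => "100", "StartTime" => "NA", "EndTime" => "Mon Feb 24 16:46:44 IST 2020" }
]

maps = {} # stands in for the aggregate filter's per-task_id map

events.each do |event|
  id = event["TransactionId"]
  if event["EndTime"] == "NA"
    maps[id] = { "StartTimeNew" => event["StartTime"] }    # map_action => "create"
  elsif event["StartTime"] == "NA" && maps.key?(id)
    event["StartTime"] = maps[id]["StartTimeNew"]          # map_action => "update"
    maps.delete(id)                                        # end_of_task => true
  end
end

puts events.last["StartTime"]  # => "Mon Feb 24 16:46:42 IST 2020"
```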

Sample logs for reference:

2020-02-24 16:46:44.402 INFO 9780 --- [http-nio-8080-exec-2] c.v.s.SpringBootTestForLogsApplication : For_Request_Dashboard-100|DatabaseUtils|read|venakta.kodapaka|L0418L|Mon Feb 24 16:46:42 IST 2020|NA|NA|0
2020-02-24 16:46:44.402 INFO 9780 --- [http-nio-8080-exec-2] c.v.s.SpringBootTestForLogsApplication : For_Request_Dashboard-100|DatabaseUtils|read|venakta.kodapaka|L0418L|NA|Mon Feb 24 16:46:44 IST 2020|false|2001

Is anything wrong in the configuration? Or is it expected to behave like that (correct some of the time, incorrect other times)?

Please help me on this.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.