Find number of matching pairs, display in pie chart

Hi,

Say I have a table of records like the one below:
(screenshot: sample table of records)

Is it possible to use a Kibana visualization to find and display the total number of matching pairs where FundA is followed by DD (within the same SessionID), and the total number of pairs where FundA is not followed by DD?

In the above sample data, there are two pairs where FundA is followed by DD, and six pairs where FundA is not followed by DD. When displayed in a pie chart, it should look something like this:

(screenshot: expected pie chart)

I think I have to transform the data into a new index with a single record per session per user, and at the same time add a new field that indicates the type of matching pair for the pie chart.
The problem I am facing now is how to loop through the data and find the matching pairs. Any ideas?
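One approach I considered is the aggregate filter, but I am not sure it is the right way to go. An untested sketch (it assumes the events for a session arrive contiguously and in order, that FundA and DD are FormId values, and that Logstash runs with a single pipeline worker, which the aggregate filter requires):

```
filter {
  aggregate {
    task_id => "%{SessionId}"
    # collect the FormId values of the session, in arrival order
    code => "
      map['forms'] ||= []
      map['forms'] << event.get('FormId')
    "
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "SessionId"
    timeout => 120
    # when the session is flushed, mark whether DD directly follows FundA
    timeout_code => "
      forms = event.get('forms')
      pair = forms.each_cons(2).any? { |a, b| a == 'FundA' && b == 'DD' }
      event.set('pattern', pair ? 'FundA-DD' : 'FundA-other')
    "
  }
}
```

The `pattern` field could then feed the terms aggregation of the pie chart directly, without a second lookup pass.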

Currently I have the following config, which concatenates the FormId values together, grouped by SessionId:

```
input {
  file {
    path => "/usr/share/logstash/data/ML/t_process_log_202005_test.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => "|"
    #skip_header => "true"
    #skip_empty_rows => "true"
    columns => ["ApplicationId","UserId","ProcessId","FormId","Action","Status","Reference","TranRef","IP","CreationDate","SessionId"]
    skip_empty_columns => "true"
  }
  mutate {
    # strip line-ending characters from the raw message
    gsub => ["message","\r\n","", "message","\r","", "message","\n",""]
  }
  if ![message] {
    drop { }
  }
  mutate {
    add_field => { "Data_Source" => "Web1" }
  }
  date {
    match => ["CreationDate","ISO8601"]
    timezone => "Asia/Singapore"
    target => "CreationDate"
  }
  mutate {
    add_field => { "keyfield" => "%{UserId}%{SessionId}" }
  }
}

output {
  elasticsearch {
    hosts => "https://myhost:9200"
    index => "table_transform"
    user => "elastic"
    password => "password"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/etc/elasticsearch/certs/cert1.crt"
    document_id => "%{keyfield}"
    doc_as_upsert => true
    script => 'ctx._source.FormId += " %{[FormId]}"'
    action => "update"
  }
  stdout {}
}
```

Then I tried to add the following elasticsearch filter plugin to read from the 'table_transform' index. The idea is to check whether the current record is already found in 'table_transform' and, if found, add a new field that indicates the pattern. This new field is needed because (FundA,II) and (FundA,DD) belong to the same pattern, and the pattern value will be used as the terms aggregation for the pie chart.

```
elasticsearch {
  hosts => ["https://myhost:9200"]
  index => "table_transform"
  user => "elastic"
  password => "password"
  ca_file => "/etc/elasticsearch/certs/cert1.cer"
  query => "keyfield:%{[keyfield]}"
  fields => { "formidlist" => "new_FormId" }
}
```

However, I receive the following error when I run Logstash:

```
[0] "_elasticsearch_lookup_failure"
```

I assumed that Logstash would read one record, insert it, then read the next record and insert it, and so on, but somehow Logstash doesn't work the way I imagined it would.
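From what I found in the docs, Logstash by default runs several worker threads and processes events in batches, so a later event's lookup can run before an earlier event's upsert has been indexed. If I understand correctly, forcing single-threaded, per-event processing (trading away throughput) would look like this in `logstash.yml`:

```
# logstash.yml - assumption: trade throughput for ordering, so each
# event reaches the output before the next event's filters run
pipeline.workers: 1
pipeline.batch.size: 1
```

Even then, a freshly upserted document only becomes searchable after the next index refresh (one second by default), so a lookup issued immediately after an upsert can still miss it.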

How should I modify my config? Or how does Logstash actually process events?
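One thing I noticed while re-reading the elasticsearch filter docs: the `fields` option copies a field of the matched document into a field on the current event, and my documents in `table_transform` only carry `FormId` (there is no `formidlist` field in the index). The filter also tags events with `_elasticsearch_lookup_failure` when the query itself fails. So perhaps the lookup should look more like this (untested guess):

```
elasticsearch {
  hosts => ["https://myhost:9200"]
  index => "table_transform"
  user => "elastic"
  password => "password"
  ca_file => "/etc/elasticsearch/certs/cert1.cer"
  # keyfield values must be safe for query-string syntax, otherwise the
  # lookup fails and the event is tagged _elasticsearch_lookup_failure
  query => "keyfield:%{[keyfield]}"
  # copy FormId from the matched document into formidlist on this event
  fields => { "FormId" => "formidlist" }
}
```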

@Marius_Dragomir can I get your eyes on this please? thanks!

It's a bit out of scope for Kibana; this is a transforms/Logstash team question.
The best advice I can give is to look at this tutorial on data transforms: https://www.elastic.co/guide/en/elasticsearch/reference/current/ecommerce-transforms.html. From an initial look, I would group the data based on SessionID.
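As a rough, untested sketch of such a transform, using the field names from your Logstash config (the `scripted_metric` here just collects the FormId values per session so you can classify the sequences afterwards):

```
PUT _transform/session_pairs
{
  "source": { "index": "table_transform" },
  "dest":   { "index": "session_pairs" },
  "pivot": {
    "group_by": {
      "SessionId": { "terms": { "field": "SessionId.keyword" } }
    },
    "aggregations": {
      "forms": {
        "scripted_metric": {
          "init_script":    "state.forms = []",
          "map_script":     "state.forms.add(doc['FormId.keyword'].value)",
          "combine_script": "return state.forms",
          "reduce_script":  "def all = []; for (s in states) { all.addAll(s); } return all;"
        }
      }
    }
  }
}
```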

Thanks for the advice.
At first it was a Kibana question, but I figured Kibana probably cannot do that on the fly, so I thought I needed to pre-process (transform) the data, and thus turned my focus to Logstash.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.