Hi All,
I need help to parse data separated by ';' from a txt file without key-value pairs, using grok. Kindly advise.
Please provide an example of what the data looks like.
The data looks like below
2018-03-21;ABC;envint76;deploy_step;;Deploy OK;;;
2018-03-21;WXYZ;envint725;custom_tests;ITE;nbrun_1;1;8;
2018-03-21;DEF;envint76;sanity_check;ITE;nbrun_1;0;30;
As the fields seem to be separated by ;
I would recommend using the dissect filter rather than grok.
Thanks for your input. Since I am completely new to Logstash, I need assistance with the config.
I have tried to create it as below. I am not sure about the %{msg} mentioned in the filter (should I include it?).
input {
  file {
    path => "/home/envdev80/tmp/stats_SF.txt"
    start_position => "beginning"
  }
}
filter {
  dissect {
    mapping => {
      "message" => "%{Date};%{Team};%{Environment};%{TypeTests};%{runVersion};%{nbrun_sanityCount};%{NbErrors},%{total_count}: %{msg}"
    }
  }
}
Please advise.
This part does not seem to match the example messages you have shown. Based on the example data I would expect it to look something like this:
"message" => "%{Date};%{Team};%{Environment};%{TypeTests};%{runVersion};%{nbrun_sanityCount};%{NbErrors};%{total_count};%{msg}"
Thanks, I am able to parse the message now, but any update to the data in the source file stats_SF.txt is not reflected on the Kibana side. Please advise on how to automatically load the updated data into Kibana.
What do you mean by this? Is new data being appended but not processed?
Yes, new data is being appended but not processed automatically.
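A common cause is the file input's read-position tracking: start_position => "beginning" only applies the first time a file is seen, after which the sincedb records how far Logstash has read. A minimal sketch of a file input tuned for tailing appended data (the sincedb_path value here is an assumption, not from the original config):

```
input {
  file {
    path => "/home/envdev80/tmp/stats_SF.txt"
    # Only used the first time the file is discovered;
    # afterwards the sincedb position wins.
    start_position => "beginning"
    # How often (in seconds) to check the file for new lines.
    stat_interval => 1
    # Where read positions are stored; delete this file to
    # force a full re-read of the input from the start.
    sincedb_path => "/home/envdev80/tmp/stats_SF.sincedb"
  }
}
```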
I am able to process the available data now. I have another query: is it possible to split a value ("Deploy OK") that was parsed using the dissect filter? I mean I need to split "Deploy" and "OK", separated by a space, into two values for further processing.
All processing does not have to be done by a single filter - you can add any number of filters you need to further process the extracted fields.
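As a concrete sketch (assuming the nbrun_sanityCount field name from the dissect mapping earlier in the thread), a mutate filter after the dissect can split the value on the space:

```
filter {
  mutate {
    # Turns "Deploy OK" into the array ["Deploy", "OK"], so the
    # parts are addressable as [nbrun_sanityCount][0] and
    # [nbrun_sanityCount][1] in later filters.
    split => { "nbrun_sanityCount" => " " }
  }
}
```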
Thanks. Regarding automatic processing of the data in the text file: unless I reload the index/config file, the data is not processed automatically. Please advise how to make the data available in Kibana at run time. FYI, there are currently multiple config files and we are trying to load them all.
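If the multiple config files are meant to run independently, one option is to declare them as separate pipelines in config/pipelines.yml; the ids and paths below are illustrative assumptions:

```
# config/pipelines.yml - one entry per pipeline
- pipeline.id: stats_sf
  path.config: "/etc/logstash/conf.d/stats_sf.conf"
- pipeline.id: other_pipeline
  path.config: "/etc/logstash/conf.d/other.conf"
```

Note that if a single path.config points at a directory, Logstash concatenates all the files there into one pipeline, so every event flows through every filter unless you guard them with conditionals.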
Can you please help me with a sample filter to achieve the condition below:
if TypeTests=xyz, I need to check the value of nbrun_sanityCount and split it on the space (e.g. "Build OK").
Currently the filter is as below.
filter {
  dissect {
    mapping => {
      "message" => "%{Date};%{Team};%{Environment};%{TypeTests};%{runVersion};%{nbrun_sanityCount};%{NbErrors};%{total_count};%{msg}"
    }
    # periodic_flush is a common filter option, so it belongs at the
    # dissect level, not inside the mapping hash.
    periodic_flush => true
  }
}
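For the conditional split described above, one way is to wrap a mutate filter in a conditional after the dissect; this sketch assumes the field names from the mapping and the literal value "xyz" stands in for the real TypeTests value:

```
filter {
  if [TypeTests] == "xyz" {
    mutate {
      # Turns "Build OK" into the array ["Build", "OK"];
      # the parts are then [nbrun_sanityCount][0] and [1].
      split => { "nbrun_sanityCount" => " " }
    }
  }
}
```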
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.