Hi,
Can someone help with a grok pattern for the below log line?
message#IE-7.0||trident/4.0#message
I tried the below pattern:
%{WORD:THREADNAME}#%{HOSTNAME:IE}||trident/%{JAVACLASS}#{%WORD:MESSAGE}
but it didn't work. Please help.
Regards,
Muthu
The | character has a special meaning and must be escaped. Otherwise it should work, even though using HOSTNAME and JAVACLASS doesn't really make sense.
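As a sketch of that advice, the pattern with the pipes escaped might look like the following. Note the NUMBER pattern and the TRIDENT_VERSION field name are assumptions here (the original used JAVACLASS, which in current grok pattern definitions would not match a bare number like 4.0):

```
filter {
  grok {
    # \| escapes the pipe, which is otherwise the regex alternation operator
    match => {
      "message" => "%{WORD:THREADNAME}#%{HOSTNAME:IE}\|\|trident/%{NUMBER:TRIDENT_VERSION}#%{WORD:MESSAGE}"
    }
  }
}
```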
Hi All,
My CSV file has one date field. By default Kibana is treating it as a string; how do I change the type of the field? I'm not able to apply a grok filter.
Thanks in advance
You can use a date filter to convert a field into an ISO8601 format that ES recognizes as a date, or you can set the index's mappings to recognize your particular date format as a date.
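As a hedged sketch of the second option, an index mapping that tells Elasticsearch to parse the Date column with a custom format might look like this (the index name and the MM/dd/yyyy format are assumptions taken from the thread; older Elasticsearch versions with mapping types nest properties under a type name instead):

```
PUT cronjobtrend
{
  "mappings": {
    "properties": {
      "Date": {
        "type": "date",
        "format": "MM/dd/yyyy"
      }
    }
  }
}
```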
Below is my csv Data
JOB | STATUS | RESULT | STARTTIME | ENDTIME | ACTIVE | TRIGGERTIME | CUSTOMSTATUS | REGION | Date |
---|---|---|---|---|---|---|---|---|---|
Solr Indexed Property(es) | ABORTED | ERROR | Mon Jun 18 11:54:58 GMT 2018 | null | TRUE | [] | FAILED_ABORTED_JOB | EU | 11/13/2018 |
Solr Indexed Property(en_GB) | FINISHED | FAILURE | Mon Jul 02 15:50:18 GMT 2018 | Mon Jul 02 15:50:24 GMT 2018 | TRUE | [] | FAILED_ABORTED_JOB | EU | 11/13/2018 |
sk-tokoAutomatedReturnJob | FINISHED | FAILURE | Tue Jul 03 02:30:02 GMT 2018 | Tue Jul 03 02:30:02 GMT 2018 | TRUE | [Tue Jul 03 03:00:00 GMT 2018] | FAILED_ABORTED_JOB | EU | 11/13/2018 |
delete-esIndex-cronJob | ABORTED | FAILURE | Tue Jun 19 09:10:31 GMT 2018 | null | TRUE | [Tue Jul 03 02:47:15 GMT 2018] | FAILED_ABORTED_JOB | EU | 11/13/2018 |
update-beIndex-cronJob | ABORTED | FAILURE | Thu Jun 28 14:48:02 GMT 2018 | null | TRUE | [Tue Jul 03 02:46:51 GMT 2018] | FAILED_ABORTED_JOB | EU | 11/13/2018 |
and my config file is in the below format:
input {
  file {
    path => "/ELK/data/cronjobtrend.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
filter {
  csv {
    separator => ","
    columns => ["JOB","STATUS","RESULT","STARTTIME","ENDTIME","ACTIVE","TRIGGERTIME","CUSTOMSTATUS","REGION","Date"]
  }
}
output {
  elasticsearch {
    index => "cronjobtrend"
    #document_type => "cronjobtrend"
  }
}
Where should I add the grok filter for the Date column in the CSV, and can you please provide the syntax?
You don't need a grok filter, you need a date filter. Two of them, in fact: one that processes STARTTIME and one that processes ENDTIME. The date filter documentation lists how the patterns for parsing timestamps are built up. You probably need something like EEE MMM dd HH:mm:ss 'GMT' yyyy, assuming that the timezone is always GMT.
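Following that advice, a sketch of the two date filters could look like this (the field names come from the csv columns above; overwriting each field in place via target is an assumption, and null ENDTIME values would produce a _dateparsefailure tag that may need handling):

```
filter {
  date {
    match  => ["STARTTIME", "EEE MMM dd HH:mm:ss 'GMT' yyyy"]
    target => "STARTTIME"
  }
  date {
    match  => ["ENDTIME", "EEE MMM dd HH:mm:ss 'GMT' yyyy"]
    target => "ENDTIME"
  }
}
```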
Yes, the time is always GMT.
Does the below one look fine?
input {
  file {
    path => "/ELK/data/cronjobtrend.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
filter {
  csv {
    separator => ","
    columns => ["JOB","STATUS","RESULT","STARTTIME","ENDTIME","ACTIVE","TRIGGERTIME","CUSTOMSTATUS","REGION","Date"]
  }
  date {
    match  => ["STARTTIME", "EEE MMM dd HH:mm:ss 'GMT' yyyy"]
    target => "Date"
  }
}
output {
  elasticsearch {
    index => "cronjobtrend"
    #document_type => "cronjobtrend"
  }
}
Thanks magnusbaeck, it's working.
Can you give the format for the Date column?
Hi Magnusbaeck,
Can you provide me the format for the date 11/13/2018?
My config file is in the below format:
input {
  file {
    path => "/ELK/data/cronjobtrend.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
filter {
  csv {
    separator => ","
    columns => ["JOB","STATUS","RESULT","STARTTIME","ENDTIME","ACTIVE","TRIGGERTIME","CUSTOMSTATUS","REGION","Date"]
  }
  date {
    match  => ["Date", "mm/dd/YYYY"]
    target => "Date"
  }
}
output {
  elasticsearch {
    index => "cronjobtrend_date"
    #document_type => "cronjobtrend_date"
  }
}
Check the date filter documentation. "mm" does not mean month.
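In the Joda-Time patterns the date filter uses, mm is minute-of-hour and MM is month-of-year. Note also that yyyy (calendar year) is normally what you want rather than YYYY (week-based year, which can give surprising results around year boundaries). A corrected filter for this column might be:

```
date {
  match  => ["Date", "MM/dd/yyyy"]
  target => "Date"
}
```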
input {
  file {
    path => "/ELK/data/cronjobtrend.csv"
    start_position => "beginning"
    sincedb_path => "NUL"
    ignore_older => 0
  }
}
filter {
  csv {
    separator => ","
    columns => ["JOB","STATUS","RESULT","STARTTIME","ENDTIME","ACTIVE","TRIGGERTIME","CUSTOMSTATUS","REGION","Date"]
  }
  date {
    match  => ["Date", "MM/dd/YYYY"]
    target => "Date"
  }
}
output {
  elasticsearch {
    index => "cronjobtrend_date"
    #document_type => "cronjobtrend_date"
  }
}
Even though I tried the above one, the "Date" column is still not being taken as a date.
Please show an example document stored in ES. Copy/paste the raw JSON from the JSON tab in Kibana's Discover view.
It's working fine now.
Thanks magnusbaeck
Try this (with the pipes escaped):
%{WORD:THREADNAME}#%{HOSTNAME:IE}\|\|trident/%{JAVACLASS}#%{WORD:MESSAGE}
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.
© 2020. All Rights Reserved - Elasticsearch