Hi all, I'm a beginner with ELK.
I'm trying to extract fields from the log message event in Logstash.
After Logstash extracts fields from the message, is there any way to get back the original message after grokking?
Unless you explicitly overwrite the original message it won't be lost.
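For example, a grok filter that writes its captures into new fields leaves `message` untouched. A minimal sketch (the pattern and field names here are illustrative, not from your config):

```
filter {
  grok {
    # Captures go into new fields (log_level, rest); the "message" field
    # itself is only replaced if you add: overwrite => ["message"]
    match => { "message" => "%{WORD:log_level} %{GREEDYDATA:rest}" }
  }
}
```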
Hi magnusbaeck,
Thank you for the reply.
I don't use the overwrite option in the grok filter.
I can see the original message in the JSON tab, but I don't see it in the Table tab.
What should I do to see the original raw data in the Table tab? Why can't the original message be seen there?
So what does the message field contain, if not the original message? You haven't provided us with an original message nor your configuration, so it's nearly impossible to give specific help.
Okay, here it is.
Logstash config:
input {
  cloudwatch_logs {
    log_group => [ "TEST" ]
    access_key_id => "blablabla"
    secret_access_key => "blablabla"
    region => "ap-southeast-1"
    start_position => "end"
    interval => 60
  }
}
filter {
  # Field references use one bracket pair per path segment,
  # not a dotted name inside a single pair.
  if [cloudwatch_logs][log_stream] == 'TESTTEST' {
    grok {
      # patterns_dir => ["/patterns/patterns1", "/patterns/patterns2"]
      break_on_match => true
      # grok custom patterns
      match => {
        # match => { "message" => "%{SYSLOGBASE} %{POSTFIX_QUEUEID:queue_id}: %{GREEDYDATA:syslog_message}" }
        "message" => [
          # here is my grok...
          "(?<threadno>\d+)\s{1,}%{WORD:log_level}\s{0,}:\s{0,}PaymentV3UI.Controllers.PaymentController\s{0,}:\s{0,}(?<session_id>\w+)\s{0,}:::\s{0,}Payment\sRequest\s:\s<PaymentRequest><version>(?<version>[0-9.]{0,})</version><merchantID>(?<mid>.*?)</merchantID><uniqueTransactionCode>(?<invoice_id>.*?)\</uniqueTransactionCode\><desc>(?<desc>.*?)</desc><amt>(?<amount>[0-9.]{1,})</amt><currencyCode>(?<currency_code>.*?)</currencyCode><pan>(?<pan>.*?)</pan>.*<statementDescriptor>(?<statement_descriptor>.*?)</statementDescriptor><statementDescriptorVersion>"
          # others...
        ]
      }
    }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    # cluster => "logstash"
    # protocol => "http"
    # index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # document_type => "%{[@metadata][type]}"
    # ssl => false
    flush_size => 512
    # index => "production-logs-%{+YYYY.MM.dd}"
    timeout => 100
  }
  stdout {
    codec => rubydebug
  }
}
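The grok pattern above relies on named capture groups like `(?<threadno>\d+)`. As a rough sketch of what that kind of extraction does (a simplified stand-in pattern and a made-up sample line, not your real log data), note that matching only copies text into new fields and leaves the source string intact:

```python
import re

# Simplified stand-in for the grok pattern above; the real pattern
# has many more capture groups. Sample line is hypothetical.
pattern = re.compile(
    r"(?P<threadno>\d+)\s+(?P<log_level>\w+)\s*:\s*"
    r"PaymentV3UI\.Controllers\.PaymentController\s*:\s*(?P<session_id>\w+)"
)

message = "42 INFO : PaymentV3UI.Controllers.PaymentController : abc123"
fields = pattern.search(message).groupdict()

print(fields["threadno"])   # → 42
print(fields["log_level"])  # → INFO
print(message)              # the original string is untouched
```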
(1) and (3) look identical to me. If (2) really shows the exact same document, it looks like a display bug in Kibana.
Yes, it shows the exact same document, magnusbaeck. So if it's a bug in Kibana, could you guide me on what I should fix?
I have a lot of logs like that. Could it be because of the capture-group parts of the regex, e.g. (?<threadno>\d+), <desc>(?<desc>.*?)</desc>, etc.?
Report it as a bug in the Kibana GitHub project. Make sure you include all the necessary evidence; for example, export the raw JSON document from ES.
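A sketch of how that export might look (hypothetical index name and document ID; substitute your own, and attach the pretty-printed JSON to the bug report):

```
curl -s 'http://localhost:9200/your-index/_search?q=_id:YOUR_DOC_ID&pretty'
```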
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.