Hello,
I am new to the ELK stack. I have noticed that certain rows of my CSV file that exceed about 250 characters are having issues being visualized in Kibana after importing them with Logstash.
Is there a way to solve this?
What, exactly, do you mean by "having issues being visualized"?
Thank you for the quick response!
How do I solve this?
Start by taking Elasticsearch out of the equation and use a simple stdout { codec => rubydebug } output. Are you getting all expected events?
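Something along these lines would do as a test pipeline; the path is just a placeholder, and the rubydebug codec prints every event to the console so you can check that nothing is lost:

input {
  file {
    path => "/path/to/your/file.csv"   # placeholder; on Windows use forward slashes, e.g. "C:/data/file.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"        # use "NUL" on Windows
  }
}
output {
  stdout { codec => rubydebug }
}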
Yes, I'm getting all the expected events in the shell running Logstash.
Can you show us the raw input (especially the longest line) and configuration you are using to consume it?
Raw input (longest line):
WebContainer : 4 - 2018-01-10 18:00:00.168 INFO c.i.g.s.s.e.l.SPLogger:63 - com.ida.gov.sg.sp.eai.interceptor.SingpassInterceptor | | UnAuthenticated URL list | [authnlogin, common, eservicelogout, tamoperationhandler, eservloginpage, loginpage, gettasconnection, errorpage]
Config file:
input {
  file {
    path => "C:\data\sui_testing.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["logs"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs_testing"
  }
}
I think this is related to the fact that message.keyword is not indexed for values longer than 256 characters, but someone who actually understands Elasticsearch would be in a better position to explain that. I just do Logstash. Magnus?
It's most likely the ignore_above option in the mapping.
It's not clear why the OP wants to aggregate over the whole message field in the first place. That's probably a mistake, and it's possible that the problem disappears if the line is correctly parsed into fields.
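For what it's worth, "parsed into fields" could look something like the sketch below: a grok filter applied to the raw message field (in place of the csv filter), assuming every line follows the format of the sample posted above. The field names are just illustrative.

filter {
  grok {
    # split the line into thread, timestamp, loglevel, logger and the actual message
    match => {
      "message" => "%{DATA:thread} - %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}%{SPACE}%{NOTSPACE:logger} - %{GREEDYDATA:logmessage}"
    }
  }
  date {
    # use the timestamp from the log line as the event's @timestamp
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}

With something like that in place you would aggregate on loglevel, logger and so on rather than on the whole message.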
OK, thank you for the feedback.
Thanks for the solution. Just a follow-up to this:
Is it possible to use Logstash to parse a text file consisting of a large number of logs?
If so, should the parsing be done within the configuration file?
Is it possible to use Logstash to parse a text file consisting of a large number of logs?
Yes, of course.
If so, should the parsing be done within the configuration file?
You could use an Elasticsearch ingest pipeline, but apart from that I'm not sure where the parsing would otherwise take place.
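As a sketch of what that looks like, building on the config and the grok example earlier in this thread (path and index name taken from your config, adjusted for Windows), the parsing simply goes in the filter section of the same configuration file:

input {
  file {
    path => "C:/data/sui_testing.csv"   # forward slashes work best on Windows
    start_position => "beginning"
    sincedb_path => "NUL"               # Windows equivalent of /dev/null
  }
}
filter {
  # all parsing happens here, inside the filter section
  grok {
    match => {
      "message" => "%{DATA:thread} - %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel}%{SPACE}%{NOTSPACE:logger} - %{GREEDYDATA:logmessage}"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs_testing"
  }
}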
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.