Hi,
How can we help you? 30 minutes is definitely a long time, but we have no idea about your configuration. Where are you running Logstash? How much memory? Which version? Can you share some debug logs and details of the host's memory and other resources?
I am using CentOS 7 Linux.
filebeat-5.1.1, logstash-5.1.1, elasticsearch-5.1.1, and kibana-5.1.1-x86_64 are all installed on a single CentOS instance.
At present I am using only Filebeat and Logstash, and I want to save the output of Logstash to a CSV file.
CentOS details:

CPU op-mode(s): 32-bit, 64-bit
CPU(s):         2
CPU MHz:        2333.000

Output of free -h:

       total  used  free  shared  buff/cache
Mem:    3.4G  2.6G  687M    7.3M        219M
Swap:   3.6G  411M  3.2G
Sample text from logstash-plain.log:

starting server on port :5443
starting pipeline
pipeline main started
successfully started Logstash API
opening file { path => "/out/csvdata.csv" }
It is difficult to track the bottleneck here.
I would try upgrading to the newest version, 5.4.1, if that is not too much work.
Then check routing, the connection to the port, and so on. I don't think it is slow due to server performance; it is more likely a settings problem.
I am using the file output plugin to save data to a CSV file:
file {
  path => "D:/file.csv"
  codec => line { format => "%{field1}, %{field2}, %{field3}" }
}
This works fine if I am using a single grok filter.
In my scenario there are 3 grok filters, and each grok gives me one field:
I get field1 through the 1st grok, field2 through the 2nd, and field3 through the 3rd. Now, when I use the same file output to save the data to a CSV file, the data is not formatted properly, and the literal %{field} text also appears multiple times.
So my question is: how can I save data from multiple grok filters into a single CSV line?
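One way to approach this is to capture all three values in a single grok filter and only write events that actually contain every field, so unresolved %{...} references never reach the file. This is a minimal sketch, assuming all three values appear on the same log line; the WORD patterns and field names are placeholders to adapt to your format:

filter {
  grok {
    # Capture all three values in one pass; replace WORD with
    # patterns that match your actual log format.
    match => { "message" => "%{WORD:field1} %{WORD:field2} %{WORD:field3}" }
  }
}

output {
  # Only write events where every field was extracted, so the
  # line codec never emits a literal, unresolved %{field} token.
  if [field1] and [field2] and [field3] {
    file {
      path => "/out/csvdata.csv"
      codec => line { format => "%{field1},%{field2},%{field3}" }
    }
  }
}

If the three values come from different log lines, the fields will never coexist on one event, and you would need something like the aggregate filter to merge them before writing.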
I tried using a csv filter after the grok filters to save the output to a CSV file, but it gives me a ClassCastException: StringBiValue cannot be cast to java.lang.String.
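Note that the csv filter is meant for parsing incoming CSV data, not for writing it; for writing there is a dedicated csv output plugin, which may avoid this path entirely. A minimal sketch (the path and field names are placeholders):

output {
  csv {
    # Write one CSV row per event, taking values from these fields.
    path   => "/out/csvdata.csv"
    fields => ["field1", "field2", "field3"]
  }
}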