Hello, I have been configuring Logstash and need help with tuning. I set the Elasticsearch JVM heap to 8 GB and the Logstash JVM heap to 6 GB.
My server has 4 CPUs with 4 cores each at 2.2 GHz, and 16 GB of RAM.
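For reference, this is roughly how I set the heaps (assuming the standard jvm.options files from a package install):

# /etc/elasticsearch/jvm.options (assumed default location)
-Xms8g
-Xmx8g

# /etc/logstash/jvm.options (assumed default location)
-Xms6g
-Xmx6g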
I set the number of Logstash pipeline workers to 16, the batch size to 8000, and the batch delay to 50 ms.
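In logstash.yml (rather than command-line flags) that corresponds roughly to:

pipeline.workers: 16
pipeline.batch.size: 8000
pipeline.batch.delay: 50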
Ingesting 300K events took about 3.45 minutes. I noticed the rate jumps to about 4800 events/s and then falls back to 500-900 events/s, like in this picture:
And here are the stats from unix and metrics:
Can I keep the performance around 4800 events/s, or even higher? I am running Logstash and Elasticsearch on the same node. Here is my Logstash conf:
input {
  file {
    path => "/data/elasticsearch/data/*_0004_OCC.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["100 column headers"]
  }
  mutate {
    add_field => {
      "Calldatetime" => "%{calldate} %{calltime}"
    }
  }
  date {
    match => ["Calldatetime", "YYYYMd HHmmss", "YYYYMdd HHmmss", "YYYYMMd HHmmss", "YYYYMMdd HHmmss"]
    target => "Calldatetime"
    locale => "en"
    timezone => "UTC"
  }
}
output {
  stdout {
    codec => dots