Logstash JVM heap and GC issues

Hi, I've got an issue with Logstash's JVM heap steadily increasing until all of the space is used up.

It seems that each garbage collection reclaims less and less space until eventually the heap is exhausted.
Here is a picture of what's going on.
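
(The picture is a heap graph from monitoring; I believe the same numbers can be pulled straight from Logstash's node stats API, assuming the default API binding on localhost:9600, e.g.:)

curl -XGET 'http://localhost:9600/_node/stats/jvm?pretty'

# Fields I'm watching (names as of my version, may differ slightly):
#   jvm.mem.heap_used_percent                        - climbs steadily towards 100
#   jvm.gc.collectors.old.collection_count           - old-generation collections
#   jvm.gc.collectors.old.collection_time_in_millis  - keeps growing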

This is a small test lab environment to familiarize myself with the ins and outs of the ELK stack.

Here are my configuration files for reference.

Pipe 1

input {
  beats {
    port => 5043
  }
}

output {
  elasticsearch {
    hosts => ["http://10.1.1.22:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Pipe 2

input {
  udp {
    port => 9996
    codec => netflow {
      versions => [5, 9]
    }
    type => netflow
    tags => ["port_9996"]
  }
  udp {
    port => 9995
    codec => netflow {
      versions => [5, 9]
    }
    type => netflow
    tags => ["port_9995"]
  }
}

output {
  if "port_9996" in [tags] {
    elasticsearch {
      hosts => ["10.1.1.22:9200"]
      index => "logstash-netflow-9996-%{+YYYY.MM.dd}"
    }
  } else if "port_9995" in [tags] {
    elasticsearch {
      hosts => ["10.1.1.22:9200"]
      index => "logstash-netflow-9995-%{+YYYY.MM.dd}"
    }
  }
}

Pipe 3

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.1.1.22:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

All components are on the latest version, running on OpenJDK 1.8, with X-Pack basic installed.

Logstash is running on its own VM, with Elasticsearch and Kibana on another.

Any help is greatly appreciated!

I found what seems to be the cause of the memory leak.

The pipeline I've set up for Winlogbeat seems to be what is causing the issue.

The NetFlow and Metricbeat pipelines work fine.
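
For what it's worth, the way I'd try to isolate it further (not something I've done yet) is to run only the Winlogbeat pipe with a throwaway output, to see whether the heap still fills up without the elasticsearch output in play. Something like:

input {
  beats {
    # whichever beats port Winlogbeat ships to (5043 or 5044 in my case)
    port => 5044
  }
}

output {
  # dots codec just prints one dot per event, so events keep flowing
  # without anything being buffered towards Elasticsearch
  stdout { codec => dots }
}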

Winlogbeat causes Logstash to have trouble freeing memory during garbage collection. Perhaps there is some kind of optimization or configuration change I can make to fix the garbage collection behaviour? Some ideas I'm considering are sketched below.
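
These values are guesses for my small lab VM rather than recommendations: a fixed, slightly larger heap plus a heap dump on out-of-memory in jvm.options, and fewer in-flight events in logstash.yml.

# jvm.options (min = max so the heap doesn't resize; dump the heap if it blows up)
-Xms2g
-Xmx2g
-XX:+HeapDumpOnOutOfMemoryError

# logstash.yml (fewer workers and smaller batches = fewer events held in memory at once)
pipeline.workers: 2
pipeline.batch.size: 50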
