java.lang.StackOverflowError After Successfully Starting API Endpoint

Hello,

I am getting a java.lang.StackOverflowError after the Logstash API endpoint successfully starts. I cannot see any errors on the Elasticsearch side.

I am running Logstash, Elasticsearch, and Kibana, all on version 7.3.0.

jvm.options is set to 10GB for both the initial and maximum heap size, and Logstash doesn't seem to be using all of it. CPU usage also sits very high while it's first starting. Sometimes, after a while, the error resolves itself and Logstash starts normally, but I cannot link that to anything I've done.
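For reference, the heap settings in jvm.options look like this (assuming the standard -Xms/-Xmx flags; the rest of the file is unchanged from the defaults):

```
## Initial and maximum heap size, set equal to avoid heap resizing pauses
-Xms10g
-Xmx10g
```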

I'm running two pipelines, one is Elastiflow, the other is receiving syslog data via Filebeat.

[2019-09-12T14:10:15,774][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"elastiflow", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x558d6eb0@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38 run>"}
[2019-09-12T14:10:16,064][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"elastiflow"}
[2019-09-12T14:10:16,277][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:6343"}
[2019-09-12T14:10:16,499][WARN ][logstash.inputs.udp ] Unable to set receive_buffer_bytes to desired size. Requested 33554432 but obtained 212992 bytes.
[2019-09-12T14:10:16,507][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:6343", :receive_buffer_bytes=>"212992", :queue_size=>"4096"}
[2019-09-12T14:10:17,200][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-09-12T14:10:31,803][ERROR][org.logstash.Logstash ] java.lang.StackOverflowError
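Regarding the receive_buffer_bytes warning above: on Linux that usually means the kernel's maximum socket receive buffer (net.core.rmem_max) is smaller than what the UDP input requested. I don't know if it's related to the StackOverflowError, but it can be raised like this (run as root; 33554432 matches the requested size):

```shell
# Raise the kernel's max UDP receive buffer to 32MB (takes effect immediately,
# but is lost on reboot)
sysctl -w net.core.rmem_max=33554432

# Persist the setting across reboots
echo 'net.core.rmem_max=33554432' > /etc/sysctl.d/99-logstash.conf
sysctl --system
```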

Any help would be appreciated and I can upload further details if required.

Can you set log.level to debug?
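In case it helps, debug logging can be enabled either persistently via logstash.yml or at runtime through the Logstash logging API (port 9600 assumes the default API settings):

```shell
# Persistent: add "log.level: debug" to logstash.yml, then restart Logstash.

# Or change it at runtime through the logging API, no restart needed:
curl -XPUT 'localhost:9600/_node/logging?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"logger.logstash": "DEBUG"}'
```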

@wangqinghuan

I've copied as much as I can into pastebin here: https://pastebin.com/hcb5hsLj

I've anonymised some of the data as it is network traffic data. My initial assumption was that the port was being flooded, but even after turning off polling on the device that is sending the data, the issue persisted.

It will also sometimes randomly stay stable until the service is stopped.