Unfortunately I'm not sure whether an upgrade or a filter modification caused this, but Logstash is unable to bulk ingest a set of log files. This set has been bulk ingested many times before; now Logstash just appears to stall, pinning all available threads at 100% CPU without producing any output to Elasticsearch (which I've confirmed by checking Elasticsearch's active task list) and without writing anything to logstash-plain.log.
Removing all my configuration fixes it, and I'm now trying to determine which section is causing the problem. It seems wrong that Logstash would completely lock up rather than produce an error message.
My config uses only core filters, two community filters, and small bits of inline Ruby to do maths. The stripped-down version is only 419 lines. A sketch of the kind of Ruby block I mean is below.
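To give an idea of the shape of the config (this is a minimal, hypothetical sketch, not my actual filter section; the field names are made up), the Ruby bits are just small arithmetic on event fields via the standard ruby filter:

```
filter {
  # Illustrative only: inline Ruby doing simple maths on numeric fields,
  # similar in shape to what my real config does.
  ruby {
    code => "
      bytes    = event.get('bytes').to_f
      duration = event.get('duration').to_f
      event.set('throughput', duration > 0 ? bytes / duration : 0.0)
    "
  }
}
```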
Hoping that someone else has encountered this recently and can advise. Thanks.