Logstash "idle" CPU is 100%

I've found several similar posts about Logstash using "100%" CPU, but none seem to be answered unless the cause was a bad config.

I think our several Logstash systems are functioning correctly, but each uses at least a full core (hence 100%) or more of CPU. I think it's "normal", but others want answers because they see big numbers in top and other monitoring tools.

Is it normal for an idle but actively listening Logstash to consume a full core?

We are running 6.8.2 on current CentOS Linux. There are few if any Logstash log messages, none point to performance issues, and sometimes we go days without a log message :-) We have 6 "main" Logstash servers processing about 7500 events/sec (total) that show an average of 180% CPU. CPU usage in the Kibana Logstash monitoring appears to be normalized, dividing the percentage across total cores, so it shows in the 4-6% range (32 cores, 180/32 ≈ 5.6%, so the math works).
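The normalization above can be sanity-checked with a quick calculation (the numbers are from our setup; the assumption is simply that Kibana divides the process CPU reported by top by the core count):

```python
# top reports CPU as a percentage of one core, summed across cores,
# so a multi-threaded process can show well over 100%.
cores = 32
process_cpu_pct = 180  # average from top across the "main" servers

# Kibana's monitoring appears to normalize by total cores:
normalized = process_cpu_pct / cores
print(f"{normalized:.1f}%")  # → 5.6%, within the 4-6% range Kibana shows
```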

We have some other small Logstash servers for events that don't fit elsewhere; these process far fewer events, maybe 0-100/sec. The lowest CPU usage on them is still 103%.

Any ideas?

If it is idle, no. On a t2.micro server, a Logstash instance that is tailing a file but not receiving new events averages less than 5% of a core. At 100 events per second it would really depend on what the pipeline looks like. It is easy to write very expensive non-matching grok filters, geoip lookups are costly, and so on.
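To illustrate the grok point with a hypothetical filter (not from the poster's config): an unanchored pattern forces grok to retry the match at every offset of a line that doesn't match, while anchoring with `^` lets it fail fast.

```
filter {
  grok {
    # Expensive on non-matching lines: without an anchor, grok retries
    # the pattern starting at every position in the string.
    # match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }

    # Anchored version fails fast when the line doesn't start with a timestamp:
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```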

Well, these use multiple pipelines listening for network traffic. The filters can be expensive, but they shouldn't be at low volume.