CPU usage 100% when Elasticsearch is down (Act 2)

This is a continuation of "CPU usage 100% when Elasticsearch is down", which was automatically closed while I was on vacation.

The issue was this:
"I noticed that when Elasticsearch can't receive any more data, the Beats sending data to it go up to 100% CPU usage.
For example, if the disk is full on the Elasticsearch server.
After clearing up some space and "resetting" Elasticsearch so it could receive data again, the CPU usage of the Beats went back to normal.

It is never OK to use that much CPU and put the other processes running on the same machine at risk.
Better to lose data than to have the services stop working!

Is this a bug, or can I configure the Beats in a better way?"

Here is the filebeat config:

```
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
.....

setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.0.2.15:5601"
......

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.0.2.15:9200"]
```

I don't have any logs left from that time, but it should be easy for you to reproduce: just make sure the Beat can't reach Elasticsearch and the CPU usage will climb to 100%.
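At the config level, pointing the output at an address where nothing is listening should be enough to trigger it (the address below is just a placeholder), then watch the filebeat process in top:

```
# Reproduction sketch: nothing listens on this port (placeholder address),
# so every publish attempt fails and the reconnect loop spins.
output.elasticsearch:
  hosts: ["127.0.0.1:9999"]
```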

/Fredrik

Actually, I noticed that Logstash does the same: CPU sits at 200-300% constantly as long as Elasticsearch is unavailable.

/Fredrik

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.