Logstash - systemd-journald suppressed messages


Every time I restart my Logstash I see the message below:

Currently I have a CPU problem and I am trying to fix it.

I saw a person on the forum talk about changing the RateLimitBurst parameter in /etc/systemd/journald.conf.

I changed this parameter to 0 (unlimited), and the problem with this message and the CPU overload was indeed resolved. However, I saw that my log ingest decreased by 50%.
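For reference, the edit is just this (a sketch showing only the changed key; RateLimitBurst=0 disables journald's rate limiting entirely):

```
# /etc/systemd/journald.conf (sketch; only the changed setting shown)
[Journal]
# 0 = no limit on messages accepted per rate-limit interval
RateLimitBurst=0
```

After editing, journald has to be restarted with `systemctl restart systemd-journald` for the change to take effect.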

How do I find a balance between log ingest and the rate limit burst? Is there a best practice?

P.S.: I have an endpoint which sends 100,000 messages every 30 seconds. When I stop this pipeline in Logstash, my CPU usage normalizes.

Could you help me with this issue?

It looks like the logstash.service is generating a lot of logs, which is not common unless something is not working right.

Do you have debug logging enabled? Are you getting errors in the Logstash log file?

Also, any reason to give 64 GB of heap to Logstash? This seems to be too much memory for just Logstash; typically you would not need to give it more than 8 GB.

Can you share your logstash.yml and pipelines.yml files, and your Logstash logs if you have WARN or ERROR lines?

Hi @leandrojmp, thanks for answering my post.

What's a debug log? I use only logstash-plain.log.

My environment has 16 pipelines which receive 9k e/s on average. The proxy pipeline, which I think is the root of the problem, sends 3k e/s to Logstash.

This pipeline uses the UDP protocol. Is there a Logstash limitation on traffic?

I ask because in this pipeline, when I use TCP there is a delay in log processing, and when I use UDP some logs are lost.

During the last week I ran one test:

  • Changed pipeline.workers from 8 to 32. I have 32 CPU cores.
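In logstash.yml that change is just (a sketch showing only the setting I changed):

```
# logstash.yml (sketch; only the changed setting shown)
pipeline.workers: 32
```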

The result was the same as the journald test above: 50% of the traffic is lost or delayed.

Logstash file:

Pipeline file:

If I block the proxy pipeline, the CPU usage decreases by 50%.

Thank you.

I mean, if you have the debug log level enabled in Logstash.

This is expected. If your host is having performance issues, it may impact the connections and UDP logs will be lost; depending on the source, TCP connections may be retried by the sender.
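If the drops are happening at the input itself, the udp input plugin also exposes a few tuning options that are worth experimenting with (a sketch; the port is the one from this thread, and the values are illustrative, not recommendations):

```
input {
  udp {
    port => 10514
    # number of threads reading packets off the socket
    workers => 4
    # size of the in-memory packet queue between reader and filters
    queue_size => 20000
    # ask the kernel for a larger socket receive buffer (bytes)
    receive_buffer_bytes => 16777216
  }
}
```

You would still need to confirm with your own load whether the loss is at the socket, in the pipeline, or upstream.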

Please share those pipelines as plain text, not as screenshots.

As I mentioned, if one of your pipelines has errors, like a grok filter that is not working right, this could generate a lot of log errors and increase the load.

Hi @leandrojmp ,

I can't copy plain text here because the virtual environment blocks transferring information out.

There are two pipelines using the same port number but different protocols: Proxy-UDP using 10514/UDP and Proxy-TCP using 10514/TCP.

Can this produce any issue?

Grok can be resource intensive and can lead to high CPU usage if you have a lot of messages that do not match your grok pattern.

This could be your issue, since you said that disabling this pipeline decreases the CPU usage.

Personally, I avoid using grok unless there is no alternative. You can try to improve your grok with some small changes; I recommend that you read this blog post.

For example, your grok is not anchored. The first thing I would do is anchor it: just add a ^ at the start of your patterns, something like this:
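Since the actual pattern is only visible in the screenshot, here is a hypothetical illustration (the pattern and field names are made up; the only point is the leading ^, which stops grok from retrying the match at every position in the line when it fails):

```
filter {
  grok {
    # hypothetical pattern - note the leading ^ anchor
    match => { "message" => "^%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:proxyname} %{GREEDYDATA:msgfield}" }
  }
}
```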


Also, it seems that your message is a CEF message, so it may be easier to use the cef codec in your input instead of the grok filter.
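A sketch of what that could look like on the UDP input from this thread (port taken from the thread; whether the codec parses your messages correctly is something you need to verify):

```
input {
  udp {
    port => 10514
    # parse CEF at the input instead of using grok in the filter stage
    codec => cef
  }
}
```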

But you will need to test it.

Just to add to Leandro's points:

  • grok with GREEDYDATA consumes a lot of time
  • there is also the kv plugin for key=value pairs, for instance usrName=something; however, you need to partially parse first with grok or dissect, something like:

    dissect {
      mapping => { "message" => "%{timestamp} %{proxyname} %{processo}: %{msgfield}" }
    }

    kv {
      source => "msgfield"
      field_split => "|"
    }
Of course, try the CEF codec first.

@Rios @leandrojmp

Sorry for the delay in answering.

I've reviewed a few topics on parsing, and CPU use decreased by 50%.

Before this review the CPU reached 60%; now it is only 25%.

I will try using kv to improve this pipeline.

Thank you.


AFAIK, you will get the best performance with the dissect plugin.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.