Increase in container memory when pipelines reload in Logstash

We have a service whose certificates are renewed every half hour. Whenever a certificate renewal happens and the change is detected in the certificates, Logstash automatically reloads all the pipelines. During these reloads the container memory increases and the container gets OOM killed. Is this a bug in Logstash, that memory increases when the pipelines reload?
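For reference, the automatic reload is driven by the standard reload settings, which can sit in logstash.yml or be passed as command-line flags; a minimal sketch (the interval shown here is illustrative, not necessarily our exact value):

  config.reload.automatic: true
  config.reload.interval: 3s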

Only two pipelines are running, i.e. the logstash pipeline and the search engine pipeline.

cc : @Badger

@leandrojmp, could you leave your thoughts here?

Hello,

Sorry, I do not use containers; I prefer to use VMs and have never had this issue.

How much memory are you giving to Logstash? Try increasing it in jvm.options.
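For example, the heap size is set with the -Xms/-Xmx lines in config/jvm.options; the 4g values below are just an example, use whatever fits your workload:

  -Xms4g
  -Xmx4g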

Hi @leandrojmp, we have given 6 GB of memory in the Kubernetes resources and a JVM heap of around 4 GB. During normal behaviour (without reloads) the memory stabilizes after some time, but when pipeline reloads happen the memory gradually increases. At some point the container gets OOM killed.
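Roughly, the relevant part of the pod spec looks like this (a sketch from memory, not the exact manifest):

  resources:
    requests:
      memory: "6Gi"
    limits:
      memory: "6Gi"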

Curious whether you see similar behavior for those pipelines when running LS on VMs/bare metal? My knee-jerk reaction is that if there is a memory leak from pipeline reloads, it has nothing to do with running LS on K8s.

Hi @Sunile_Manjee, @leandrojmp,
please have a look at the process output below, captured with the top command.
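This is roughly how it was captured (the pod name is a placeholder, and -o RES assumes procps top is available in the image):

  # list processes inside the Logstash pod, sorted by resident memory
  kubectl exec -it <logstash-pod> -- top -o RES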


The java process is using the most memory and is the main contributor to the container memory. This memory has been gradually increasing.

When I checked the memory mapping for this java process inside the container, the output was as below. The first address (highlighted), which is mapped to anon, has been increasing during the pipeline reloads in Logstash.


The output of pmap is huge, so I kept only a small snapshot of it. But it is mainly the RSS of that first address that has been changing (increasing) during reloads.
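For context, this is roughly how the mappings were pulled (the PID is illustrative and assumes pmap is available in the image):

  # RSS per mapping for the java process, largest mappings first
  pmap -x 1 | sort -nr -k3 | head -n 20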

Could you help me understand why the RSS keeps increasing and is never freed?

I do not think there is a memory leak in core Logstash when it reloads the pipelines, but there could be a memory leak in one of the inputs or outputs. What do the configurations of your pipelines look like?


Hi @Badger,
below is the configuration:

Data
====
pipelines.yml:
----
- pipeline.id: logstash
  queue.type: persisted
  queue.max_bytes: 1024mb
  path.config: "/opt/logstash/resource/logstash.conf"
- pipeline.id: opensearch
  queue.type: persisted
  queue.max_bytes: 1024mb
  path.config: "/opt/logstash/resource/searchengine.conf"

 

searchengine.conf:
----
input { pipeline { address => "searchengine_pipeline" }}
output {
  if [@metadata][LOGSTASH_OUTPUT_STDOUT_IS_ENABLED] == "true" {
    stdout { codec => rubydebug }
  }
  opensearch {
    hosts => ["${OPENSEARCH_HOSTS}"]
    index => "%{logplane}-%{+YYYY.MM.dd}"
    http_compression => true
    ssl => true
    cacert => "nnn"
    keystore => "eee"
    keystore_password => "%%KEYSTORE_PASS%%"
    ssl_certificate_verification => true
    manage_template => false
  }
}

 

logstash.conf:
----
input {
  beats {
    id => "abc"
    include_codec_tag => false
    port => 5044
    type => abc
    ssl_certificate => "mmm"
    ssl_key => "pppp"
    ssl_certificate_authorities => ["xxx", "yyy"]
    ssl => true
    client_inactivity_timeout => 300
    ssl_handshake_timeout => 10000
    ssl_verify_mode => "force_peer"
    ssl_supported_protocols => ["TLSv1.2", "TLSv1.3"]
  }
  http {
    port => 8080
    type => readiness
  }
}
filter {
  if [type] == "readiness" {
    drop {}
  }
  else if [type] == "http" {
    mutate {
      remove_field => [ "headers", "host"]
    }
  }
}
output {
  pipeline { send_to => "searchengine_pipeline" }
}

 

logstash.yml:
----
http.host: "0.0.0.0"
http.port: 9600
log.level: "info"
pipeline.workers: 2
pipeline.batch.size: 2048
pipeline.batch.delay: 50
path.logs: /opt/logstash/resource
pipeline.ecs_compatibility: disabled



Below are the plugins and their corresponding versions:
logstash-output-opensearch (1.2.0)
logstash-input-beats (6.4.1)
logstash-input-http (3.6.0)

OpenSearch/OpenDistro are AWS run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.

(This is an automated response from the Elastic bot.)

I tried to install the opensearch output and it trashed my Logstash install to the point where I had to reinstall it! So I have no way to try to reproduce this.

Since you are using the logstash-output-opensearch plugin, can you try to replicate the same issue sending the data to an Elasticsearch cluster?

If this is a bug, Elastic will only consider it a valid issue if you can replicate the same behavior using the Elasticsearch output.
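Something along these lines, keeping the rest of the pipeline as it is; the hosts variable name is just a placeholder and the SSL values are the same placeholders from your config:

  output {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOSTS}"]
      index => "%{logplane}-%{+YYYY.MM.dd}"
      http_compression => true
      ssl => true
      cacert => "nnn"
      keystore => "eee"
      keystore_password => "%%KEYSTORE_PASS%%"
      ssl_certificate_verification => true
      manage_template => false
    }
  }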

@leandrojmp, @Badger

Even with logstash-output-elasticsearch (10.8.6), the issue is still seen.
