Logstash restart - Exception in pipelineworker

Hello,

I am seeing Logstash restarts after this error:

[2019-08-05T14:12:13,259][ERROR][org.logstash.execution.WorkerLoop] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
java.lang.InterruptedException: null
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) ~[?:1.8.0_161]
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) ~[?:1.8.0_161]
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:413) ~[?:1.8.0_161]
    at org.logstash.common.LsQueueUtils.drain(LsQueueUtils.java:86) ~[logstash-core.jar:?]
    at org.logstash.common.LsQueueUtils.drain(LsQueueUtils.java:56) ~[logstash-core.jar:?]
    at org.logstash.ext.JrubyMemoryReadClientExt.readBatch(JrubyMemoryReadClientExt.java:61) ~[logstash-core.jar:?]
    at org.logstash.execution.WorkerLoop.run(WorkerLoop.java:60) [logstash-core.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:440) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:304) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:36) [jruby-complete-9.2.7.0.jar:?]
    at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$start_workers$2(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.Block.call(Block.java:124) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:295) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:274) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:270) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) [jruby-complete-9.2.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
[2019-08-05T14:12:13,516][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>java.lang.IllegalStateException: java.lang.InterruptedException, :backtrace=>["org.logstash.execution.WorkerLoop.run(org/logstash/execution/WorkerLoop.java:85)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:440)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:304)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:295)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:274)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:270)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}
[2019-08-05T14:12:13,847][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

I don't know what is causing these restarts.

Sincerely

Hi @jmilot,

This indicates a configuration error in the filter section of your pipeline.

Could you please share your configuration file for further investigation?

Regards,
Harsh Bajaj

My filter configuration:

filter {

  if "inwebo_access" in [tags] or "nginx-access" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{IPORHOST:nginx_source}(?:%{NOTSPACE})? - %{NOTSPACE:nginx_auth} \[%{HTTPDATE:nginx_timestamp}\]( *)\"%{WORD:nginx_method} %{URIPATHPARAM:nginx_uri} HTTP/%{NUMBER:nginx_httpversion}(( Host:)?%{IPORHOST:nginx_host_header})?\" %{NUMBER:nginx_status} %{NUMBER:nginx_bytes} %{QS:nginx_referrer} %{QS:nginx_agent} IP_FORWARDEE \"(%{DATA:nginx_ip_forwardee}|-)\" IP_PROXY_FORWARDEE \"%{IPORHOST:nginx_ip_proxy_forwardee}(, %{IPORHOST:nginx_ip_proxy_forwardee})?\""}
    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    geoip {
      source => "nginx_ip_proxy_forwardee"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }

  }

}
filter {

  if "inwebo_error" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} (?<nginx_timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:nginx_severity}\] %{POSINT:nginx_pid}\#%{NUMBER}: %{GREEDYDATA:nginx_errormessage}(, client: (?<nginx_client>%{IP}|%{HOSTNAME}))(, server: (?<nginx_server>%{IPORHOST}|\$servername))?(, request: %{QS:nginx_request})?(, upstream: \"%{URI:nginx_upstream}\")?(, referrer: \"%{URI:nginx_referrer}\")?(, host: %{QS:nginx_host})?$" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

}
filter {

  if "logs_system" in [tags] {

    grok {

      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: (\[%{LOGLEVEL:syslog_level}\]|)%{GREEDYDATA:syslog_message}" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

}
filter {

  if "nginx" in [tags] {

    grok {

      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} (?<nginx_timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:nginx_severity}\] %{POSINT:nginx_pid}#%{NUMBER}: %{GREEDYDATA:nginx_message}" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

}
filter {

  if "zimbra" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      patterns_files_glob => "zimbra"
      match => { "message" => [ "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: %{AMAVIS}", "%{MAILLOG}", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: %{GREEDYDATA:zimbra_message}"] }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    if [amavis_sender] =~ /.+/ {   # regex literal (not a quoted string) matches any non-empty value
        grok {
          match => { "amavis_sender" => "%{USERNAME:sender_username}@%{HOSTNAME:sender_domain}" }
          match => { "amavis_recipient" => "%{USERNAME:recipient_username}@%{HOSTNAME:recipient_domain}"  }

        }
    }
  }

}
filter {

  if "mailbox" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      patterns_files_glob => "zimbra"
      match => { "message" => [ "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{TIMESTAMP_ISO8601:mailbox_timestamp} %{LOGLEVEL:mailbox_loglevel}(%{SPACE})*\[%{DATA:mailbox_thread}\] \[name=%{EMAILADDRESS:email}(;aname=%{EMAILADDRESS:email_aname})?(;mid=%{NUMBER:mid})?(;oip=%{IPORHOST:mailbox_origin_host})?(;ip=%{IPORHOST:mailbox_source})?(;port=%{NUMBER:port})?(;ua=%{USERAGENT:mailbox_ua})?(;via=%{IPORHOST:audit_via}\(%{USERAGENT:audit_via_ua}\)(,%{IPORHOST:audit_via}\(%{USERAGENT:audit_via_ua}\))*)?(;soapId=%{USER:mailbox_soapid})?;\] %{WORD:mailbox_protocole} - %{GREEDYDATA:mailbox_message}", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{GREEDYDATA:mailbox_trace}"] }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

}
filter {

  if "audit" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{TIMESTAMP_ISO8601:audit_timestamp} %{LOGLEVEL:audit_loglevel}(%{SPACE})*\[%{DATA:audit_thread}\] \[(name=%{EMAILADDRESS:email};)?(aname=%{EMAILADDRESS:email_aname};)?(mid=%{NUMBER:mid};)?(ip=%{IPORHOST:audit_source}(, %{IPORHOST:audit_source})*;)?(oip=%{IPORHOST:audit_origin_host}(, %{IPORHOST:audit_origin_host})*;)?(port=%{NUMBER:port};)?(via=%{IPORHOST:audit_via}\(%{AUDIT_USERAGENT:audit_via_ua}\);)?(ua=%{AUDIT_USERAGENT:audit_ua};)?(soapId=%{USER:audit_soapid};)?\] %{WORD:audit_category} - (cmd=%{WORD:audit_action})?(%{AUDIT_DETAILS}|%{GREEDYDATA:audit_divers})" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    geoip {
      source => "audit_origin_host"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }


  }

}

Sincerely

Hi @jmilot,

You have added multiple filter sections in your configuration.

Example with your configuration:

filter {
  if <condition> { ... }
}
filter {
  if <condition> { ... }
}

But it should actually be structured like this:

filter {
  if <condition> { ... }
  if <condition> { ... }
}

You don't need multiple filter sections in your configuration. Just add one filter section and include all of the conditionals in it, as in the sketch below.
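For example, here is a minimal sketch built from two of the conditionals in your own configuration (the second grok pattern is shortened here for illustration; yours would stay exactly as it is):

filter {

  if "logs_system" in [tags] {
    grok {
      # syslog envelope pattern, copied from your config
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: (\[%{LOGLEVEL:syslog_level}\]|)%{GREEDYDATA:syslog_message}" }
    }
    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }
  }

  if "nginx" in [tags] {
    grok {
      # shortened version of your nginx pattern, for illustration only
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{GREEDYDATA:nginx_message}" }
    }
    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }
  }

}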

Regards,
Harsh Bajaj


Hi,

I updated the configuration:

filter {

  if "inwebo_access" in [tags] or "nginx-access" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{IPORHOST:nginx_source}(?:%{NOTSPACE})? - %{NOTSPACE:nginx_auth} \[%{HTTPDATE:nginx_timestamp}\]( *)\"%{WORD:nginx_method} %{URIPATHPARAM:nginx_uri} HTTP/%{NUMBER:nginx_httpversion}(( Host:)?%{IPORHOST:nginx_host_header})?\" %{NUMBER:nginx_status} %{NUMBER:nginx_bytes} %{QS:nginx_referrer} %{QS:nginx_agent} IP_FORWARDEE \"(%{DATA:nginx_ip_forwardee}|-)\" IP_PROXY_FORWARDEE \"%{IPORHOST:nginx_ip_proxy_forwardee}(, %{IPORHOST:nginx_ip_proxy_forwardee})?\""}
    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    geoip {
      source => "nginx_ip_proxy_forwardee"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }

  }

  if "inwebo_error" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} (?<nginx_timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:nginx_severity}\] %{POSINT:nginx_pid}\#%{NUMBER}: %{GREEDYDATA:nginx_errormessage}(, client: (?<nginx_client>%{IP}|%{HOSTNAME}))(, server: (?<nginx_server>%{IPORHOST}|\$servername))?(, request: %{QS:nginx_request})?(, upstream: \"%{URI:nginx_upstream}\")?(, referrer: \"%{URI:nginx_referrer}\")?(, host: %{QS:nginx_host})?$" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

  if "logs_system" in [tags] {

    grok {

      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: (\[%{LOGLEVEL:syslog_level}\]|)%{GREEDYDATA:syslog_message}" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

  if "nginx" in [tags] {

    grok {

      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} (?<nginx_timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:nginx_severity}\] %{POSINT:nginx_pid}#%{NUMBER}: %{GREEDYDATA:nginx_message}" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

  if "zimbra" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      patterns_files_glob => "zimbra"
      match => { "message" => [ "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: %{AMAVIS}", "%{MAILLOG}", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG}: %{GREEDYDATA:zimbra_message}"] }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    if [amavis_sender] =~ /.+/ {   # regex literal (not a quoted string) matches any non-empty value
        grok {
          match => { "amavis_sender" => "%{USERNAME:sender_username}@%{HOSTNAME:sender_domain}" }
          match => { "amavis_recipient" => "%{USERNAME:recipient_username}@%{HOSTNAME:recipient_domain}"  }

        }
    }
  }

  if "mailbox" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      patterns_files_glob => "zimbra"
      match => { "message" => [ "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{TIMESTAMP_ISO8601:mailbox_timestamp} %{LOGLEVEL:mailbox_loglevel}(%{SPACE})*\[%{DATA:mailbox_thread}\] \[name=%{EMAILADDRESS:email}(;aname=%{EMAILADDRESS:email_aname})?(;mid=%{NUMBER:mid})?(;oip=%{IPORHOST:mailbox_origin_host})?(;ip=%{IPORHOST:mailbox_source})?(;port=%{NUMBER:port})?(;ua=%{USERAGENT:mailbox_ua})?(;via=%{IPORHOST:audit_via}\(%{USERAGENT:audit_via_ua}\)(,%{IPORHOST:audit_via}\(%{USERAGENT:audit_via_ua}\))*)?(;soapId=%{USER:mailbox_soapid})?;\] %{WORD:mailbox_protocole} - %{GREEDYDATA:mailbox_message}", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{GREEDYDATA:mailbox_trace}"] }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

  }

  if "audit" in [tags] {

    grok {

      patterns_dir => ["/etc/logstash/patterns"]
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_source} %{SYSLOGPROG} %{TIMESTAMP_ISO8601:audit_timestamp} %{LOGLEVEL:audit_loglevel}(%{SPACE})*\[%{DATA:audit_thread}\] \[(name=%{EMAILADDRESS:email};)?(aname=%{EMAILADDRESS:email_aname};)?(mid=%{NUMBER:mid};)?(ip=%{IPORHOST:audit_source}(, %{IPORHOST:audit_source})*;)?(oip=%{IPORHOST:audit_origin_host}(, %{IPORHOST:audit_origin_host})*;)?(port=%{NUMBER:port};)?(via=%{IPORHOST:audit_via}\(%{AUDIT_USERAGENT:audit_via_ua}\);)?(ua=%{AUDIT_USERAGENT:audit_ua};)?(soapId=%{USER:audit_soapid};)?\] %{WORD:audit_category} - (cmd=%{WORD:audit_action})?(%{AUDIT_DETAILS}|%{GREEDYDATA:audit_divers})" }

    }

    mutate {
      remove_field => [ "agent", "host", "log", "ecs", "input" ]
    }

    geoip {
      source => "audit_origin_host"
      database => "/etc/logstash/geoip/GeoLite2-City.mmdb"
    }


  }

  mutate {
    add_field => { "logstash_server" => "dxprostashme03" }
  }

}

I have three Logstash servers, and I am getting this error:

[2019-08-08T15:40:32,003][ERROR][org.logstash.execution.WorkerLoop] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.
java.lang.InterruptedException: null
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1220) ~[?:1.8.0_161]
    at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:335) ~[?:1.8.0_161]
    at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:413) ~[?:1.8.0_161]
    at org.logstash.common.LsQueueUtils.drain(LsQueueUtils.java:86) ~[logstash-core.jar:?]
    at org.logstash.common.LsQueueUtils.drain(LsQueueUtils.java:56) ~[logstash-core.jar:?]
    at org.logstash.ext.JrubyMemoryReadClientExt.readBatch(JrubyMemoryReadClientExt.java:61) ~[logstash-core.jar:?]
    at org.logstash.execution.WorkerLoop.run(WorkerLoop.java:60) [logstash-core.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:440) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:304) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:36) [jruby-complete-9.2.7.0.jar:?]
    at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$start_workers$2(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.runtime.Block.call(Block.java:124) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:295) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:274) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:270) [jruby-complete-9.2.7.0.jar:?]
    at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) [jruby-complete-9.2.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
[2019-08-08T15:40:32,239][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>java.lang.IllegalStateException: java.lang.InterruptedException, :backtrace=>["org.logstash.execution.WorkerLoop.run(org/logstash/execution/WorkerLoop.java:85)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:440)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:304)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:235)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:295)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:274)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:270)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}
[2019-08-08T15:40:32,702][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing

Could this be a JVM problem? I have only 2 GB of RAM for the JVM and 4 vCPUs.
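For reference, the 2 GB heap is set in /etc/logstash/jvm.options (assuming the default package install layout):

-Xms2g
-Xmx2g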

I have activated monitoring on Logstash.

I see that no more than 500 MB of heap memory is ever used. Why? Any ideas?
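One way to double-check the heap figures, assuming the default HTTP API port (9600), is the node stats API:

curl -XGET 'http://localhost:9600/_node/stats/jvm?pretty'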

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.