Filebeat to Logstash - "client is not connected"

logstash gist

Maybe it's related to this?

But why does it run OK for 5 minutes?

Now I reset the bulk size to 256, restarted Filebeat and Logstash, and everything is back to normal.
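For reference, the bulk size in question is set in the Logstash output section of `filebeat.yml`; a minimal sketch (the host name here is a placeholder):

```yaml
# filebeat.yml — Logstash output (host is a placeholder)
output.logstash:
  hosts: ["my-logstash-host:5043"]
  # Maximum number of events to batch into a single Logstash request.
  # Lowering this back to 256 resolved the disconnects in this case.
  bulk_max_size: 256
```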

I don't see anything in the log provided so far that could put Logstash into a crashed state.

Any errors after the following lines?

[2018-01-26T16:19:50,886][TRACE][org.logstash.beats.BeatsParser] Transition, from: READ_HEADER, to: READ_FRAME_TYPE, requiring 1 bytes
[2018-01-26T16:19:50,886][TRACE][org.logstash.beats.BeatsParser] Transition, from: READ_FRAME_TYPE, to: READ_JSON_HEADER, requiring 8 bytes

Also, can you add your Logstash pipeline configuration? Make sure to remove any credentials.

No errors after that. The only error I see in Logstash is the cgroup-related one. Everything is running on Docker 13: Elasticsearch, Logstash, and Kibana.

logstash pipeline:

bash-4.2$ cat /usr/share/logstash/pipeline/logstash.conf 
input {
  beats {
    port => "5043"
    client_inactivity_timeout => 40000
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}"}
  }
  ip2location {
    source => "clientip"
    database => "/usr/share/logstash/ip2location.BIN"
  }
  geoip {
    source => "clientip"
  }
}


output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

That error should not impact your ingestion; it runs in another thread, and this logging comes from a rescue.

The point is, all these settings and this setup only work with a bulk size of 256 or lower. I have 3 ELK nodes, each with 32 GB RAM and 8 cores.
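As a rough sketch of why batch sizing interacts with heap (the values below are illustrative assumptions, not measured from this setup): Logstash's in-flight memory scales with `pipeline.workers` × `pipeline.batch.size` × average event size, on top of whatever each Beats connection is allowed to send per bulk request. These knobs live in `logstash.yml`:

```yaml
# logstash.yml — illustrative batch tuning (values are assumptions)
pipeline.workers: 8        # typically one per CPU core
pipeline.batch.size: 125   # events each worker collects per batch; larger needs more heap
pipeline.batch.delay: 50   # ms to wait before flushing an undersized batch
```

Cutting either the Filebeat bulk size or the Logstash batch size reduces peak heap pressure at the cost of throughput.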

How many clients have you connected to this Logstash instance?

I have 3 similar Logstash instances, all with the same config and specs, and three clients.

Each client sends to one ELK node, a one-to-one mapping.

All three ELK nodes are part of one cluster.

Kibana is installed on only one of them; Elasticsearch and Logstash are installed on all of them.
The Logstash ingest port is available on all of them.

I have 3 similar Logstash instances, all with the same config and specs, and three clients.

So you only have 3 Filebeat clients connected to that specific Logstash instance?
If this is correct, that number is pretty low and Logstash should not have any problem dealing with it.

Questions:

  1. Did you try removing the ip2location from the filter? Does this affect performance?

  2. What is the output of the following command: bin/logstash --version

  3. What is the output of the following command: bin/logstash-plugin list --verbose beats

Hi,

This time I got a new error in the log, which seems more informative:

	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
[2018-01-28T06:53:06,717][WARN ][io.netty.util.concurrent.SingleThreadEventExecutor] Unexpected exception from an event executor: 
java.lang.OutOfMemoryError: Java heap space
[2018-01-28T07:40:48,653][ERROR][logstash.filters.grok    ] Error while attempting to check/cancel excessively long grok patterns {:message=>"Java heap space", :class=>"Java::JavaLang::OutOfMemoryError", :backtrace=>[]}
Unhandled Java exception: java.lang.InternalError: BMH.reinvoke=Lambda(a0:L/SpeciesData<LL>,a1:L)=>{
    t2:L=BoundMethodHandle$Species_LL.argL1(a0:L);
    t3:L=BoundMethodHandle$Species_LL.argL0(a0:L);
    t4:V=MethodHandle.invokeBasic(t3:L,t2:L,a1:L);void}
Exception in thread "LogStash::Runner" java.lang.OutOfMemoryError: Java heap space

But I looked at my Logstash memory config; it's using 8 GB, which should be enough for one client, no?

bash-4.2$ ps -aef | grep logstash
logstash     1     0 99 13:09 ?        00:05:22 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Xmx8g -Xms8g -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb

Why does it get killed with OOM? It's a machine with 32 GB RAM, with 16 GB for the Elasticsearch heap and 8 GB configured for Logstash, which still leaves 8 GB for other stuff.
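One thing worth noting: `java.lang.OutOfMemoryError: Java heap space` means the JVM exhausted its own 8 GB heap, not that the kernel OOM killer intervened. A quick way to tell the two apart (standard Linux/JDK tools; PID 1 matches the `ps` output above, adjust for your container):

```shell
# The kernel OOM killer logs to the kernel ring buffer; no match here
# means the process died for another reason (e.g. a Java heap OOM):
dmesg | grep -i 'killed process'

# Sample JVM heap/GC utilization for the Logstash process every 5 s:
jstat -gcutil 1 5000

# -XX:+HeapDumpOnOutOfMemoryError is already set in the command line above,
# so a heap OOM writes a java_pid<PID>.hprof file that a heap analyzer
# (e.g. Eclipse MAT) can inspect to see what filled the heap.
```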

Is there a memory leak in Logstash 6?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.