Logstash only using 2 cores on a 4-core server

Logstash v6.4.3
workers: 8
heap size: 5 GB

Logstash is only using 2 cores out of my 4-core system.

My Logstash config file is:

input {
  beats {
    port => 9092
  }
}

#---------------------OUTPUT SECTION----------------------------------------
output {
  elasticsearch {
    hosts => ["10.0.0.12:5044", "10.0.0.16:5044", "10.0.0.14:5044"]
    manage_template => false
    validate_after_inactivity => 10000
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }

  stdout {
    codec => rubydebug
  }
}

How can I balance CPU core utilization?

Reading the README for this library, it is not so easy to set thread CPU affinity, and I don't know how Windows handles thread affinity.

What is the relation between Logstash and thread affinity?

Logstash uses JRuby and runs on the Java JVM. The JVM delegates mapping of threads to cores to the OS.

So how do I solve this issue?

I am using Windows Server 2012 R2.

We can't control that at the Logstash level. Maybe Windows is holding back two CPUs for other things, like being a server.

But the other cores are at less than 5% utilization.

Do you really have no filters at all? Just one input and one output?

Yes, I am not applying any filters to the Beats data; I want all the data from the Beat.

I believe both inputs and outputs are single-threaded. If you have one input and one output, then that only requires two threads, so it will only use two CPUs.
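For reference, the number of pipeline worker threads (the threads that run the filter stage in batches) is set in logstash.yml. A minimal sketch of the relevant 6.x settings; pipeline.workers matches the "workers 8" mentioned above, while the batch values shown are the shipped defaults, listed for illustration only:

```yaml
# logstash.yml -- pipeline threading settings (sketch, not a recommendation)
pipeline.workers: 8       # worker threads that execute the filter stage in parallel
pipeline.batch.size: 125  # events each worker collects before running filters
pipeline.batch.delay: 50  # ms to wait for a full batch before flushing
```

With no filters configured, raising pipeline.workers gives those extra threads nothing to do, which is consistent with only two busy cores.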

I am getting this error in Logstash sometimes. Any idea?

[2019-01-17T09:12:06,095][INFO ][org.logstash.beats.BeatsHandler] [local: 10.0.0.7:9092, remote: 70.99.118.61:50455] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2019-01-17T09:12:06,095][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.1.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	... 8 more
[2019-01-17T09:12:06,095][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.1.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	... 8 more
[2019-01-17T09:12:06,111][INFO ][org.logstash.beats.BeatsHandler] [local: 10.0.0.7:9092, remote: 70.99.118.61:50455] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
[2019-01-17T09:12:06,111][INFO ][org.logstash.beats.BeatsHandler] [local: 10.0.0.7:9092, remote: 70.99.118.61:54238] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2019-01-17T09:12:06,126][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.1.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	... 8 more
[2019-01-17T09:12:06,126][INFO ][org.logstash.beats.BeatsHandler] [local: 10.0.0.7:9092, remote: 70.99.118.61:50454] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
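A side note on those "Invalid Frame Type, received: 69/84" messages: the logged number appears to be the raw byte Logstash read where it expected a Beats protocol frame marker. Decoding the bytes as ASCII is a quick diagnostic (this snippet is my own sketch, not part of the thread's configs) showing that plain text, not the Beats protocol, is arriving on the port:

```python
# The Beats input logs the unexpected byte as a decimal number.
# Decoding 69 and 84 as ASCII characters shows plain text on the wire,
# which suggests a non-Beats client (e.g. something speaking HTTP or raw text)
# is connecting to the Beats port.
for value in (69, 84):
    print(value, "->", chr(value))
# 69 -> E
# 84 -> T
```

That would fit a client that expects a different service on port 9092, which is where the later port discussion in this thread ends up.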

Can you show us the beat and logstash configurations? Are there errors in the beat logs that correspond to the Invalid Frame errors (e.g. connection reset by peer)?

Which Beat? I will show Filebeat, is that OK? The Logstash conf is as above.

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log
  document_type: iis

  enabled: false

#============================= Filebeat modules ===============================

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
  
setup.template.name: "filebeat-%{[beat.version]}-*"
setup.template.fields: "fields.yml"
setup.template.pattern: "filebeat-%{[beat.version]}-*"
setup.template.overwrite: true

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["23.111.116.13:9092"]

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  hosts: ["localhost:5044"]
#  index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

#================================ Logging =====================================

logging.level: info
logging.to_files: true
logging.files:
  path: ${path.config}/logs
  name: filebeat
  keepfiles: 10
  permissions: 0644

I find it impossible to reconcile the IP addresses in that logfile line with the configurations that you are posting. Your beats input appears to be listening on port 9092.

Sorry, I edited the config; I copied the wrong config before. My Logstash port is 9092, and the Elasticsearch port is 5044.

Are you able to log in to 70.99.118.61 and use lsof to find which process has port 50455 open?

Is there any other way? That public IP is used by more than 5 PCs, so I don't know which PC to check.

Is there any other way to solve the issue?

Well, I would guess there is something configured to expect 9092 to be Elasticsearch, so you could switch your configuration around to have Elasticsearch on 9092 and use 5044 for Beats. This would have the added advantage of being consistent with the expectations of the rest of the planet.
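Concretely, the suggested swap would look something like this in the Logstash pipeline config (a sketch only; it assumes the Elasticsearch nodes and the Filebeat output are reconfigured to match):

```
input {
  beats {
    port => 5044   # the conventional Beats port
  }
}

output {
  elasticsearch {
    # assumes the Elasticsearch cluster is moved to listen on 9092
    hosts => ["10.0.0.12:9092", "10.0.0.16:9092", "10.0.0.14:9092"]
    manage_template => false
    validate_after_inactivity => 10000
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

The Filebeat side would then point at the new Beats port, e.g. output.logstash hosts: ["23.111.116.13:5044"].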

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.