I installed the ELK stack on my server and am now trying to connect Filebeat to Logstash.
However, I am having trouble with Logstash.
In my Logstash log I see:
[WARN ] 2019-05-10 16:59:52.930 [nioEventLoopGroup-5-4] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:405) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:372) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:236) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-6.0.0.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
... 10 more
In my Filebeat log:
2019-05-10T17:03:33.781+0900 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://localhost:5044)): Get http://localhost:5044: read tcp [::1]:39404->[::1]:5044: read: connection reset by peer
Invalid Frame Type means there is a protocol mismatch. Sometimes this is because one end expects to use SSL but the other does not. In your case I think it is because you have configured an elasticsearch output in filebeat.yml (so it uses HTTP) but pointed it to logstash (which expects the beats protocol).
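Concretely, the Filebeat error above (elasticsearch(http://localhost:5044)) is consistent with an output section like this; a sketch of the misconfiguration, with the host and port only as placeholders:

# filebeat.yml (broken) -- the elasticsearch output speaks HTTP,
# but port 5044 here is Logstash's beats input, not Elasticsearch
output.elasticsearch:
  hosts: ["http://localhost:5044"]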
I noticed a similar warning on my Logstash instance and decided I would post here instead of opening a new topic. I am using ELK 7.0.1:
[2019-05-13T08:35:59,723][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: -3
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:405) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:372) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:236) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: -3
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-6.0.0.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
... 10 more
My logstash pipeline configuration is:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch.mydomain:9200"
    user => "logstash_internal"
    password => "mysecret"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    manage_template => true
  }
}
I can see logs in Kibana. However, I was just wondering whether this warning is serious or whether I can ignore it. I do not want to lose information in Kibana in the long run.
Hard to say. You need to review the data in Kibana and see if it appears to match the data you expect filebeat to be ingesting.
Something is connecting to logstash that is not sending valid lumberjack messages. If your logstash is exposed to the internet then other folks will be talking to it, guaranteed. If you are on a corporate intranet then you may well have internal security folks running port scanners to see which machines are running which services. That could also cause this error. Or it could be a misconfigured filebeat on one of your servers that is dropping all of its data. Only you can tell.
It really helped me.
In my filebeat.yml, I had disabled output.logstash and enabled output.elasticsearch, which caused the error.
I fixed it by enabling output.logstash and disabling output.elasticsearch.
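For reference, a minimal sketch of the working output section, with a placeholder host (adjust to wherever Logstash actually runs):

# filebeat.yml -- send events to Logstash over the beats protocol
output.logstash:
  hosts: ["localhost:5044"]

# Keep the elasticsearch output disabled; Filebeat only allows one
# output to be enabled at a time
#output.elasticsearch:
#  hosts: ["localhost:9200"]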
Thank you