This issue looks similar to a few others I've seen, but the difference is that I've already verified my configs do NOT have the problems those reports had:
- SSL is turned off explicitly on both Logstash AND filebeat/metricbeat
- I am definitely using the Logstash output in filebeat/metricbeat and did not leave it set to the default Elasticsearch output.
- There aren't any health checks hitting the port; I've turned off the Kubernetes liveness/readiness probes for now.
- I'm running version 7.3.2 of Logstash, Elasticsearch, Filebeat, and Metricbeat.
Logstash Config
input {
  beats {
    port => 7998
  }
}
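For reference, since the snippet above elides the rest of the input, this is a minimal sketch of what "SSL explicitly off" looks like on the Logstash side (the `ssl` option of the beats input; the port matches my setup):

```
input {
  beats {
    port => 7998
    ssl  => false   # SSL explicitly disabled on the Logstash side
  }
}
```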
...
Filebeat/Metricbeat config
I would like to note two things:
- Both filebeat and metricbeat were working intermittently even while these errors were being logged, but it's on and off; mostly off now. Results are the same for both (they break or work at the same time).
- I'm using Terraform to template the configs, so the filebeat and metricbeat configs definitely match.
filebeat.yml:
----
logging:
  json: true
filebeat.inputs:
- type: docker
  containers.ids:
  - '*'
output.logstash:
  hosts:
  - 'logstash:7998'
  ssl:
    enabled: false
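Since the metricbeat config comes from the same Terraform template, its output section is identical; for completeness, it looks like this (same values, assuming the same template renders both):

```yaml
output.logstash:
  hosts:
  - 'logstash:7998'
  ssl:
    enabled: false
```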
Error
About every 10 seconds Logstash outputs this:
[2019-09-27T00:27:19,807][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:7998, remote: undefined] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2019-09-27T00:27:19,807][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-6.0.0.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
... 8 more
Occasionally I'm also seeing Invalid Frame Type, received: 84.
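For what it's worth, the reported frame-type values decode to printable ASCII, which (if I'm reading the Beats/Lumberjack framing right) suggests Logstash is receiving plain text on the port rather than the binary Beats protocol:

```python
# The "Invalid Frame Type" value is the raw byte Logstash read where it
# expected a Lumberjack frame-type byte. Decoding the values from the logs:
for value in (69, 84):
    print(value, "->", repr(chr(value)))
# 69 is 'E' and 84 is 'T': printable ASCII, i.e. something is sending plain
# text (an HTTP request line contains both, e.g. "GET"), not Beats frames.
```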
The Filebeat/Metricbeat logs don't show anything besides their normal logging output.
I'd also like to note that I'm running everything in Kubernetes. I've double-checked the config files: I've exec'd into the containers and verified that the filebeat.yml file matches the config I want.
To be extra clear, no events are getting through at all anymore.