Logstash failing on InvalidFrameProtocolException: Invalid Frame Type, received: 69/84

Hello,

Using Logstash 6.5.4 and Filebeat 6.4.2.

Sometimes Logstash goes down because of this, but I couldn't find any misconfiguration that would cause it. Can someone please help me out?

Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69

io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84

Filebeat Configuration:

filebeat.config:
  inputs:
    path: ${path.config}/inputs.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false


# How long Filebeat waits on shutdown for the publisher to finish sending events before Filebeat shuts down.
# By default, this option is disabled, and Filebeat does not wait for the publisher to finish sending events before shutting down. 
# This means that any events sent to the output, but not acknowledged before Filebeat shuts down, are sent again when you restart Filebeat.

filebeat.shutdown_timeout: 10s

############################# stats ##########################################

http.enabled: true
http.host: localhost
http.port: 5066

############################# Output ##########################################

# Configure which outputs to use when sending the data collected by Filebeat. You can enable one or multiple outputs by setting the enabled option to true.

output.logstash:
  hosts: ['logstash-0','logstash-1','logstash-2','logstash-3','logstash-4']
  loadbalance: true
  worker: ${Worker}
  ssl.enabled: false



# output.elasticsearch:
#   hosts: ["https://localhost:9200"]
#   index: "filebeat-%{[kubernetes.container.name]}-%{+yyyy.MM.dd}"
#   protocol: http
#   setup.template.name: "elasticsearchtemplate"
#   setup.template.fields: "path/to/fields.yml"
#   setup.template.overwrite: true

############################# Logging #########################################
logging.level: ${LOG_LEVEL}
logging.selectors: ["*"]
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7
  permissions: 0644


############################# xpack #########################################
xpack.security.enabled: false
xpack.monitoring.enabled: false
xpack.graph.enabled: false
xpack.watcher.enabled: false
xpack.reporting.enabled: false

Logstash Configuration:

input {
  beats {
    port => 5044
  }
}

output {
  if [kubernetes] and [kubernetes][container][name] {
    elasticsearch {
      hosts => ['es-1','es-2','es-3']
      index => "%{[kubernetes][container][name]}-logstash-%{+YYYY.MM.dd}"
      # document_type => "%{[@metadata][type]}"
      ssl_certificate_verification => false
    }
  }
}

Something is connecting to port 5044 that does not speak the Beats/Lumberjack protocol. (69 and 84 are the ASCII codes for 'E' and 'T', so whatever is connecting appears to be sending plain text rather than Beats frames.) If you have multiple beats, verify that ssl is disabled on all of them.

It's not easy to find what is connecting. You could use tcpdump/Wireshark to monitor all connections to port 5044 and see what address and port number are at the remote end. If that turns out to be a machine you control, you can then log in, find which program is making the connection, and reconfigure it.

I'm running Logstash in one of our Kubernetes clusters in AWS: 5 Logstash instances, each exposed through a load balancer, and those load balancer endpoints are what I configured in Filebeat. The load balancers are exposed over TCP, and the health check on each load balancer is:

Ping Target: TCP:portnumber
Timeout: 5 seconds
Interval: 10 seconds
Unhealthy threshold: 6
Healthy threshold: 2

I disabled SSL in both Filebeat and Logstash on purpose.

Are you saying there is a load balancer in front of Logstash?

Yes, I'm using a load balancer in front of Logstash.

Here is the only annotation I'm using:

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

That suggests the load balancer is doing some sort of ECV (extended content verification). That is, the health check is more than just establishing a TCP connection and dropping it; the LB is actually sending traffic. You need to reconfigure it to stop sending data.

But I don't see any extended-verification health check being done by the load balancer. It is exposed as a TCP load balancer, and I deliberately did not enable the proxy protocol either. Logstash keeps going down intermittently and then recovering.

@Badger, thank you :grinning: It looks like that was the culprit. I changed the health check to a different port and haven't seen the issue since.
I will update this thread if I see the issue again.
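
For anyone who hits the same thing: one way to keep health-check traffic off the Beats port in Kubernetes is to set externalTrafficPolicy: Local on the Service, which makes Kubernetes allocate a dedicated healthCheckNodePort that the cloud provider targets for the load balancer health check instead of the service port. This is only a minimal sketch of that option, assuming the in-tree AWS cloud provider; the service name and selector are placeholders, not my actual values:

apiVersion: v1
kind: Service
metadata:
  name: logstash-0                  # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  # With "Local", Kubernetes allocates spec.healthCheckNodePort and the cloud
  # provider points the ELB health check at it (an HTTP /healthz endpoint
  # served by kube-proxy) instead of at the Beats listener, so no health-check
  # bytes ever reach port 5044.
  externalTrafficPolicy: Local
  selector:
    app: logstash                   # placeholder selector
  ports:
    - name: beats
      protocol: TCP
      port: 5044
      targetPort: 5044

Note that externalTrafficPolicy: Local also changes routing: the load balancer only sends traffic to nodes that actually run a Logstash pod, and the client source IP is preserved.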
