Invalid Frame Type, received: 82 and 79

I've deployed Logstash (6.3.1) behind the Kubernetes nginx ingress controller, exposed as a TCP service and fronted by an ELB (the Kubernetes service type is LoadBalancer).

kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap-ingress-tcp-conf
  namespace: default
data:
  5044: "default/logstash:5044:PROXY:PROXY"

I'm able to ship logs when Logstash is in the same cluster as Filebeat.
However, when I configure Filebeat to ship logs across clusters (Filebeat and Logstash in different clusters), an exception occurs.

[2018-07-16T08:48:49,697][INFO ][org.logstash.beats.BeatsHandler] [local: 100.111.253.3:5044, remote: 100.111.253.4:43048] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 82
[2018-07-16T08:48:49,701][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 82
  at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:353) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 82
  at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.0.16.jar:?]
  at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
  ... 8 more
[2018-07-16T08:48:49,702][INFO ][org.logstash.beats.BeatsHandler] [local: 100.111.253.3:5044, remote: 100.111.253.4:43048] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 79
[2018-07-16T08:48:49,702][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 79

P.S. I'm using the same pipeline in both cases:

input {
  beats {
    port => 5044
  }
}


filter {
  json {
    skip_on_invalid_json => true
    source => "message"
  }
}

output {
  elasticsearch {
    hosts       => ["https://xxxxxxx.ap-southeast-1.aws.found.io:9243"]
    user        => xxxxxx
    password    => xxxxxx
    index       => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    manage_template => false
  }
}

That suggests that whatever is writing to port 5044 is not a Beat. A tcp output perhaps, or something else that does not speak the beats protocol on the wire.
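
For what it's worth, 82 and 79 are the ASCII codes for 'R' and 'O', which fits the PROXY:PROXY setting above: a proxy protocol v1 header is the plain text "PROXY TCP4 ...", and if that header reaches the beats port the parser reads those letters as frame-type bytes. A quick sanity check (the header line below is hypothetical, built from the addresses in the log):

reported = [82, 79]                # frame-type bytes from the Logstash errors
print([chr(b) for b in reported])  # ['R', 'O']

# Hypothetical proxy protocol v1 header using the addresses from the log above.
header = b"PROXY TCP4 100.111.253.4 100.111.253.3 43048 5044\r\n"
print(header[:5])                  # b'PROXY' -- 'R' and 'O' are its 2nd and 3rd bytes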

Thank you, I'll try to figure out what's wrong with the exposed port.

I'm in a nearly identical situation.

Here's the relevant NGINX configuration:

stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    access_log /var/log/nginx/access.log log_stream;
    error_log  /var/log/nginx/error.log;

    # TCP services
    upstream tcp-5045-kube-system-logstash-classic-5045 {
    	server                  100.96.7.163:5045;
    	server                  100.96.8.246:5045;
    }
    server {
    	listen                  5045 proxy_protocol;
    	listen                  [::]:5045 proxy_protocol;
    	
    	proxy_timeout           600s;
    	proxy_pass              tcp-5045-kube-system-logstash-classic-5045;
    	proxy_protocol          on;	
    }	
}
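
I suspect the proxy_protocol on; inside the server block is what makes nginx prepend a proxy protocol header to the connection it opens to Logstash, and the beats input cannot parse that header, hence the invalid frame types. This block looks auto-generated by ingress-nginx from a tcp-services ConfigMap (judging by the upstream name), so editing it by hand won't stick, but for illustration a version that does not wrap the upstream connection would look roughly like:

server {
    listen                  5045;          # drop proxy_protocol here too unless the
    listen                  [::]:5045;     # load balancer in front really sends it

    proxy_timeout           600s;
    proxy_pass              tcp-5045-kube-system-logstash-classic-5045;
    # no "proxy_protocol on;" -- the beats input does not understand the
    # PROXY header nginx would otherwise send upstream
}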

Logstash input config:

input {
    beats {
        port => 5045
        host => "0.0.0.0"
    }
}

Filebeat output config:

output.logstash:
  hosts: ["${LOGSTASH_SERVICE_HOST}:${LOGSTASH_SERVICE_PORT}"]

Here's what some of the failed connections look like in the NGINX log:

2018/07/23 20:03:27 [error] 1471#1471: *3552 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 100.96.8.0, server: 0.0.0.0:5045, upstream: "100.96.8.246:5045", bytes from/to client:0/0, bytes from/to upstream:0/46
[23/Jul/2018:20:03:27 +0000]TCP200000.006
2018/07/23 20:03:28 [error] 1471#1471: *3554 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 100.96.8.0, server: 0.0.0.0:5045, upstream: "100.96.7.163:5045", bytes from/to client:0/0, bytes from/to upstream:0/46
[23/Jul/2018:20:03:28 +0000]TCP200000.006
2018/07/23 20:03:29 [error] 1471#1471: *3556 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 100.96.7.1, server: 0.0.0.0:5045, upstream: "100.96.8.246:5045", bytes from/to client:0/0, bytes from/to upstream:0/47
[23/Jul/2018:20:03:29 +0000]TCP200000.008
2018/07/23 20:03:30 [error] 1471#1471: *3558 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 100.96.6.0, server: 0.0.0.0:5045, upstream: "100.96.7.163:5045", bytes from/to client:0/0, bytes from/to upstream:0/46
[23/Jul/2018:20:03:30 +0000]TCP200000.006
[23/Jul/2018:20:03:32 +0000]TCP200000.007

Here's what Logstash says:

[INFO ] 2018-07-23 20:04:05.012 [defaultEventExecutorGroup-5-2] BeatsHandler - [local: 100.96.7.163:5045, remote: 100.96.7.160:34374] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 79
[WARN ] 2018-07-23 20:04:05.012 [nioEventLoopGroup-3-3] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 79
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:392) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:359) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:236) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 79
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.0.16.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]
	... 10 more

Here's what Filebeat says:

ERROR	logstash/async.go:235	Failed to publish events caused by: write tcp 192.168.0.8:61705->[public IP]:5045: write: broken pipe
2018-07-23T16:04:02.517-0400	ERROR	pipeline/output.go:92	Failed to publish events: write tcp 192.168.0.8:61705->[public IP]:5045: write: broken pipe
2018-07-23T16:04:02.702-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: EOF
2018-07-23T16:04:02.730-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: client is not connected
2018-07-23T16:04:03.733-0400	ERROR	pipeline/output.go:92	Failed to publish events: client is not connected
2018-07-23T16:04:03.944-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: EOF
2018-07-23T16:04:03.973-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: client is not connected
2018-07-23T16:04:04.974-0400	ERROR	pipeline/output.go:92	Failed to publish events: client is not connected
2018-07-23T16:04:05.170-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: EOF
2018-07-23T16:04:05.200-0400	ERROR	logstash/async.go:235	Failed to publish events caused by: client is not connected

I was able to fix it; maybe it'll work for you too.

This is my normal Service in front of the ingress controller:

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: internal-ingress-nginx
  labels:
    name: internal-ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

I've created a new Service which points to the same ingress controller:

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: internal-ingress-nginx-tcp
  labels:
    name: internal-ingress-nginx-tcp
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress-controller
  ports:
  - name: logstash
    port: 5044
    targetPort: logstash

The difference is that there's no proxy protocol on the second load balancer.
Also, don't forget to expose port 5044 on the ingress controller itself:

spec:
  terminationGracePeriodSeconds: 60
  serviceAccountName: nginx-ingress-serviceaccount
  containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
    args:
      - /nginx-ingress-controller
      - --default-backend-service=default/nginx-errors
      - --configmap=default/configmap-ingress-conf
      - --tcp-services-configmap=default/configmap-ingress-tcp-conf
      - --annotations-prefix=nginx.ingress.kubernetes.io
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    - name: logstash
      containerPort: 5044
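
Note that the tcp-services ConfigMap entry for 5044 should then drop its PROXY flags too, since the new load balancer no longer speaks proxy protocol on either side. A sketch, reusing the ConfigMap from the top of the thread:

kind: ConfigMap
apiVersion: v1
metadata:
  name: configmap-ingress-tcp-conf
  namespace: default
data:
  # no ":PROXY:PROXY" suffix -> nginx neither expects a proxy protocol header
  # from the load balancer nor sends one to Logstash
  5044: "default/logstash:5044"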

Fantastic. I had tried disabling the proxy feature in the ConfigMap (suspecting that the Lumberjack protocol could not handle the proxy wrapper) without success, but what made the difference was to also run the separate LoadBalancer service without the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation.

Thanks for the tip!
