Logstash unable to collect logs from filebeat due to protocol mismatch

I've installed Filebeat in our K8s cluster following the official Elastic document (kubernetes/filebeat-kubernetes.yaml)
to collect logs from our microservices and push them to Logstash, which is installed as a container on a different VM; the ELK components run there as separate containers.

Filebeat version - 7.17
ELK stack - 7.10
K8s version - v1.23.8

Filebeat starts up perfectly and I see harvester-related information in its logs as well, but on the Logstash side I see the error below.

Where am I going wrong? I'd appreciate any guidance, thanks.

ConfigMap of Filebeat


apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      node: ${NODE_NAME}
    #      hints.enabled: true
    #      hints.default_config:
    #        type: container
    #        paths:
    #          - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.logstash:
      hosts: ['ELK.HOST.NAME:9192']

Logstash pipeline's conf

input {
  beats {
    port => 9192
    ssl => false
  }
}
filter {
  if [kubernetes][container_name] in ["mongo", "kafka"] {
    json {
      source => "log"
    }
    if [time] {
      date {
        match => [ "time", "UNIX", "ISO8601" ]
        target => "time"
      }
    }
  }

  if "jta" in [kubernetes][container_name] {
    json {
      source => "log"
      target => "api"
      remove_field => "log"
    }
  }
}
output {
  if "JIRA" in [kubernetes][container_name] {
    elasticsearch {
      id => "jira-issue-tracker"
      hosts => ["<%= @ipaddress%>:9200"]
      index => "jira-issue-tracker-%{[kubernetes][container_name]}-%{+YYYY.MM.dd}"
    }
  }
}

Logstash logs

[2023-06-05T18:09:57,860][INFO ][org.logstash.beats.BeatsHandler] [local: 172.1.0.2:9192, remote: 10.20.12.162:38784] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 22

[2023-06-05T18:09:57,863][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 22
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [logstash-input-tcp-6.0.6.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 22
	at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.11.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.11.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[logstash-input-tcp-6.0.6.jar:?]
	... 9 more
[2023-06-05T18:09:57,939][INFO ][org.logstash.beats.BeatsHandler] [local: 172.1.0.2:9192, remote: 10.20.12.162:38784] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
[2023-06-05T18:09:57,940][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [logstash-input-tcp-6.0.6.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
	at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.11.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.11.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[logstash-input-tcp-6.0.6.jar:?]
	... 11 more
[2023-06-05T18:10:32,612][INFO ][org.logstash.beats.BeatsHandler] [local: 172.1.0.2:9192, remote: 10.20.12.162:34442] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
[2023-06-05T18:10:32,612][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [logstash-input-tcp-6.0.6.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
	at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.11.jar:?]
	at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.11.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[logstash-input-tcp-6.0.6.jar:?]
	... 9 more
[2023-06-05T18:10:32,614][INFO ][org.logstash.beats.BeatsHandler] [local: 172.1.0.2:9192, remote: 10.27.12.162:34442] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
[2023-06-05T18:10:32,614][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69

Hi @Chel_Db,

I did find a StackOverflow thread that has the same issue. It looks like a mismatch in TLS configuration.

Have you tried using a TCP input instead of Beats?

@carly.richmond : Thanks for getting back.
You mean collect the logs via Filebeat and use a TCP input in the Logstash config, like below?
Does that also handle the stdout of the pod/container logs?

input { tcp { port => 9192 } }

Also, how can I handle the certificate in the Filebeat DaemonSet?
(I have a .crt file for Elasticsearch.)
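
For reference, I'm guessing this would mean mounting the .crt into the DaemonSet pods and pointing Filebeat's output at it, something like the sketch below (the Secret name and paths are hypothetical):

# DaemonSet pod spec (sketch): mount the CA cert from a hypothetical Secret
#   volumes:
#     - name: elk-ca
#       secret:
#         secretName: elk-ca
#   volumeMounts:
#     - name: elk-ca
#       mountPath: /etc/filebeat/certs
#       readOnly: true
# filebeat.yml (sketch): trust that CA for the Logstash output
output.logstash:
  hosts: ['ELK.HOST.NAME:9192']
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]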

Ah ok, if there's a reason you're using Filebeat I don't want to mess with that.

I see you have ssl set to false. Is there a reason for that? Could this be an SSL/TLS settings mismatch? I also found a thread using Elastic Agent rather than Beats where there was a misconfiguration of the TLS settings.

I went through the questions on this forum and saw the suggestion to use ssl => false; unfortunately, it didn't work.

The ELK stack is TLS-enabled, by the way.

Exactly! So shouldn't you have ssl => true along with any other required configuration?

Yes, I removed ssl => false and I'm still seeing the protocol 22 and 3 errors. I don't know where exactly the problem is.

You need the additional certificate and key information. There is an example here in the documentation which should hopefully help.
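
Roughly, a TLS-enabled beats input would look something like the sketch below (the certificate and key paths are placeholders; the CA that signed the certificate would then go in Filebeat's ssl.certificate_authorities):

input {
  beats {
    port => 9192
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"  # placeholder path
    ssl_key => "/etc/logstash/certs/logstash.key"          # placeholder path
  }
}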

You can't use a tcp input if you are using Beats to send the data; it will not work.

Also, the fact that your Elasticsearch uses TLS doesn't matter for the communication between Filebeat and Logstash.

Your Logstash beats input does not have TLS/SSL configured, so you should not enable it on the Beats side either.

Your error means that something is trying to send logs to Logstash on a beats input without using the beats protocol.

Since you are running on kubernetes/containers, do you have anything between Filebeat and Logstash, or is Filebeat talking directly to Logstash?

@leandrojmp : Filebeat is installed as a DaemonSet, and the ELK setup runs as containers on a separate VM, entirely outside the K8s environment.

Do you suspect any issues with that? I'm not familiar with what could be the problem here.

As mentioned, the error you are getting normally happens when something sends data to the Logstash beats input without using the beats protocol.

How are you running Logstash? Docker Compose or something like that?

Do you have any tool between the Filebeat and Logstash? Like a load balancer for example?

What do you have in Filebeat logs? Please share.

@leandrojmp ,

  1. How are you running Logstash? Docker Compose or something like that?
    On a single VM: Elasticsearch, Logstash, and Kibana all run as individual containers.
    I'm not sure how they were brought up, as I don't have access to that VM.

  2. Do you have any tool between the Filebeat and Logstash? Like a load balancer for example?
    No tools in between.

  3. What do you have in Filebeat logs? Please share.

2023-06-06T13:07:46.740Z        INFO    instance/beat.go:645    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2023-06-06T13:07:46.740Z        INFO    instance/beat.go:653    Beat ID: b114ce5f-1638-47b4-b4fb-53ea0aa3328d
2023-06-06T13:07:46.753Z        INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2023-06-06T13:07:46.753Z        INFO    [beat]  instance/beat.go:981    Beat info       {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "b114ce5f-1638-47b4-b4fb-53ea0aa3328d"}}}
2023-06-06T13:07:46.753Z        INFO    [beat]  instance/beat.go:990    Build info      {"system_info": {"build": {"commit": "1428d58cf2ed945441fb2ed03961cafa9e4ad3eb", "libbeat": "7.10.0", "time": "2020-11-09T19:57:04.000Z", "version": "7.10.0"}}}
2023-06-06T13:07:46.753Z        INFO    [beat]  instance/beat.go:993    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.14.7"}}}
2023-06-06T13:07:46.762Z        INFO    [beat]  instance/beat.go:997    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2023-01-29T10:23:18Z","containerized":true,"name":"node-jt-1-worker-pool-1-fgv2n-7c67f68b88-6kmgx","ip":["127.0.0.1/8","::1/128","172.16.2.20/28","fe80::650:56ff:fe00:6005/64","fe80::5c87:86ff:fe7b:adf5/64","192.168.15.1/24","fe80::c4b4:3eff:fe76:7512/64","fe80::14e3:feff:fec2:187a/64","fe80::3f:bfff:fe86:f743/64","fe80::2009:7eff:feda:5cee/64","fe80::80fd:91ff:fed8:ecd9/64","fe80::7ca9:a3ff:feb7:9c53/64","fe80::4463:5bff:fed4:749e/64","fe80::28be:a5ff:fe92:8262/64","fe80::54a9:1aff:fea2:c42c/64","fe80::bcdf:4dff:fe1e:c597/64","fe80::2007:aeff:fec5:6715/64","fe80::a0b2:45ff:fe44:e606/64","fe80::68aa:bfff:fee3:648a/64","fe80::3c78:13ff:fe66:8977/64","fe80::5075:f3ff:fefe:4e62/64","fe80::a091:61ff:fe48:3d00/64","fe80::a8ec:24ff:fe21:5c4b/64","fe80::3c33:6eff:fe7d:d8d9/64","fe80::d8aa:a0ff:febf:6948/64","fe80::6411:dbff:fedc:a4bf/64","fe80::5c96:e4ff:fe0b:c1c6/64","fe80::2c22:45ff:fe67:3440/64","fe80::3c15:e8ff:fe7a:62e/64","fe80::6833:9dff:fedf:56a0/64","fe80::70a8:1aff:fe8b:7430/64","fe80::b472:50ff:fe41:39ca/64","fe80::8c4b:80ff:fe88:59a5/64","fe80::6ced:dff:febc:2220/64","fe80::c4a9:aaff:fe54:6d2b/64","fe80::d8b9:e5ff:feed:a54f/64","fe80::702f:c5ff:fed6:66dc/64","fe80::c7f:42ff:fe33:41d2/64","fe80::dcd3:73ff:fed1:166f/64","fe80::c828:7cff:febf:487a/64","fe80::ec9c:d1ff:fec6:88c7/64","fe80::980c:f2ff:fe09:c3f6/64","fe80::8437:30ff:febf:f691/64","fe80::828:1bff:fedb:aded/64","fe80::24af:b9ff:fecd:3895/64","fe80::f836:79ff:fe1d:265f/64","fe80::e8f0:10ff:fee2:73d0/64","fe80::28d5:fff:fe8c:8a7e/64","fe80::3486:79ff:fec9:7a13/64","fe80::74fc:18ff:fe9a:f879/64","fe80::a842:5bff:fefe:d6e9/64","fe80::c8c7:2aff:fe99:aa88/64","fe80::4073:9bff:fe3e:28a3/64","fe80::c460:82ff:fef7:cfe3/64","fe80::64d8:fcff:feeb:9240/64","fe80::c07a:6eff:fe96:8718/64","fe80::a43c:6dff:fe30:904b/64","fe80::f4a5:adff:fe3e:af9/64","fe80::f0af:81ff:fe93:f46a/64","fe80::3068:aeff:fe91:2637/64","fe80::9cd2:b0ff:fe4e:51da/64","fe80::485e:59ff:fe45:60dd/64","fe80::4894:c9ff:fe15:50d9/64","fe80::9ce0:72ff:fea1:dd8b/64","fe80::6c83:d5ff:feb6:e90a/64","fe80::c086:b5ff:fe44:7c77/64","fe80::c5a:45ff:fe8f:b121/64","fe80::f429:9ff:fedb:6332/64","fe80::854:50ff:fee2:427f/64","fe80::ac44:c4ff:fe0f:357/64","fe80::5c86:45ff:fe42:f3a0/64","fe80::48e3:7fff:fecd:7c0e/64","fe80::2c49:a6ff:feac:8b2e/64","fe80::bc6d:87ff:fe42:9336/64","fe80::84cd:b0ff:fe61:93e/64","fe80::70ab:4aff:fed7:e9d3/64"],"kernel_version":"4.19.264-6.ph3","mac":["04:50:56:00:60:05","4e:4d:7f:5a:3b:36","5e:87:86:7b:ad:f5","c6:b4:3e:76:75:12","de:c4:4f:ed:ff:e2","16:e3:fe:c2:18:7a","02:3f:bf:86:f7:43","22:09:7e:da:5c:ee","82:fd:91:d8:ec:d9","7e:a9:a3:b7:9c:53","46:63:5b:d4:74:9e","2a:be:a5:92:82:62","56:a9:1a:a2:c4:2c","be:df:4d:1e:c5:97","22:07:ae:c5:67:15","a2:b2:45:44:e6:06","6a:aa:bf:e3:64:8a","3e:78:13:66:89:77","52:75:f3:fe:4e:62","a2:91:61:48:3d:00","aa:ec:24:21:5c:4b","3e:33:6e:7d:d8:d9","da:aa:a0:bf:69:48","66:11:db:dc:a4:bf","5e:96:e4:0b:c1:c6","2e:22:45:67:34:40","3e:15:e8:7a:06:2e","6a:33:9d:df:56:a0","72:a8:1a:8b:74:30","b6:72:50:41:39:ca","8e:4b:80:88:59:a5","6e:ed:0d:bc:22:20","c6:a9:aa:54:6d:2b","da:b9:e5:ed:a5:4f","72:2f:c5:d6:66:dc","0e:7f:42:33:41:d2","de:d3:73:d1:16:6f","ca:28:7c:bf:48:7a","ee:9c:d1:c6:88:c7","9a:0c:f2:09:c3:f6","86:37:30:bf:f6:91","0a:28:1b:db:ad:ed","26:af:b9:cd:38:95","fa:36:79:1d:26:5f","ea:f0:10:e2:73:d0","2a:d5:0f:8c:8a:7e","36:86:79:c9:7a:13","76:fc:18:9a:f8:79","aa:42:5b:fe:d6:e9",
"ca:c7:2a:99:aa:88","42:73:9b:3e:28:a3","c6:60:82:f7:cf:e3","66:d8:fc:eb:92:40","c2:7a:6e:96:87:18","a6:3c:6d:30:90:4b","f6:a5:ad:3e:0a:f9","f2:af:81:93:f4:6a","32:68:ae:91:26:37","9e:d2:b0:4e:51:da","4a:5e:59:45:60:dd","4a:94:c9:15:50:d9","9e:e0:72:a1:dd:8b","6e:83:d5:b6:e9:0a","c2:86:b5:44:7c:77","0e:5a:45:8f:b1:21","f6:29:09:db:63:32","0a:54:50:e2:42:7f","ae:44:c4:0f:03:57","5e:86:45:42:f3:a0","4a:e3:7f:cd:7c:0e","2e:49:a6:ac:8b:2e","be:6d:87:42:93:36","86:cd:b0:61:09:3e","72:ab:4a:d7:e9:d3"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":8,"patch":2003,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2023-06-06T13:07:46.762Z        INFO    [beat]  instance/beat.go:1026   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2023-06-06T13:07:46.080Z"}}}
2023-06-06T13:07:46.762Z        INFO    instance/beat.go:299    Setup Beat: filebeat; Version: 7.10.0
2023-06-06T13:07:46.763Z        INFO    [publisher]     pipeline/module.go:113  Beat name: node-jt-1-worker-pool-1-fgv2n-7c67f68b88-6kmgx
2023-06-06T13:07:46.763Z        WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2023-06-06T13:07:46.764Z        INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2023-06-06T13:07:46.764Z        INFO    instance/beat.go:455    filebeat start running.
2023-06-06T13:07:46.765Z        INFO    memlog/store.go:119     Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=2296715
2023-06-06T13:07:47.062Z        INFO    memlog/store.go:124     Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=2320908
2023-06-06T13:07:47.062Z        WARN    beater/filebeat.go:381  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2023-06-06T13:07:47.063Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 91
2023-06-06T13:07:47.063Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2023-06-06T13:07:47.069Z        INFO    add_kubernetes_metadata/kubernetes.go:71        add_kubernetes_metadata: kubernetes env detected, with version: v1.23.8+vmware.3
2023-06-06T13:07:47.069Z        INFO    [kubernetes]    kubernetes/util.go:99   kubernetes: Using node node-jt-1-worker-pool-1-fgv2n-7c67f68b88-6kmgx provided in the config {"libbeat.processor": "add_kubernetes_metadata"}
2023-06-06T13:07:47.188Z        INFO    log/input.go:157        Configured paths: [/var/log/containers/*.log]
2023-06-06T13:07:47.188Z        INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 8013606194523954687)
2023-06-06T13:07:47.188Z        INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1
2023-06-06T13:07:47.189Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/sabreapilinter-869c9bb-sxzg2_lion-ac-vrxrack_sabreapilinter-a9d06b09dd4ae9185a04034e93a2a99f15b2784cc9ccabe451057bc9d4085724.log
2023-06-06T13:07:47.190Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/compliance-checker-7b5d5f7567-tgq7q_lion-ac-cloudiq_compliance-checker-a63059733b8a714569b2f21960558dbd19f14c1ebca25f7f5971942623256381.log
2023-06-06T13:07:49.741Z        INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:89     add_cloud_metadata: hosting provider type not detected.
2023-06-06T13:07:49.741Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/filebeat-4xwwl_logging_filebeat-d106f5fe7cc5381929c40d746e3483f6124d8d03d64b15242371f5e3ba0fd2cb.log
2023-06-06T13:07:49.741Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/sabreapilinter-547c775f6-f27cx_lion-ac-omem_sabreapilinter-5f0ec1c315fefa176afdbde3a0e6ccd86895d5e70038d991b791a09c1c784ae1.log
2023-06-06T13:07:49.742Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/fluentd-srv29_fluentd-logging_fluentd-a87dcb0fede3dd1a78d6f1ce7ed04ac0b25afa263b936578fef683b39e9a6d83.log
2023-06-06T13:07:49.742Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/sabreapilinter-5db4c6b698-d4jkf_lion-ac-pdoc_sabreapilinter-014f1107cd6a13635bbfc4d984dc631c01b88c4631e22237643c4234075a5218.log
2023-06-06T13:07:49.743Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/compliance-checker-5d5fc6665-chlj4_lion-ac-edge_compliance-checker-eb544c187bca92b8e80345c4b00d87963577d0a1c5892e6f83c143823459696a.log
2023-06-06T13:07:49.744Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/compliance-checker-8569846576-g7vpt_lion-ac-fulcrum_compliance-checker-b65dcb02605f3d05ff955610a498d72c4b7c22f103d5c336ecac7dc6c1ea55c2.log
2023-06-06T13:07:49.744Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/compliance-checker-86d4c4fbcb-dwps4_lion-ac-valtool_compliance-checker-c44ad13df96fdcdb8f9c61711d4d43cabf02dca1988efb134309046fa54a51aa.log
2023-06-06T13:07:49.745Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/event-dispatcher-5cc9cc8dd-j5hp7_lion-ac-fulcrum_event-dispatcher-860bc22f630e996a7b93c97ca58bd394a69994ad3c4043893f3e86c781b24dab.log
2023-06-06T13:07:49.746Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/vsphere-csi-node-2zvlf_vmware-system-csi_vsphere-csi-node-f4a3706c58bdc9e9b9b135321f2c7c5f2df70d15e304b853f02841ba84496fb9.log
2023-06-06T13:07:49.746Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/antrea-agent-25bgn_kube-system_antrea-agent-e751231a13ac0b356c233e59003e8bfc8900947fe9bcc2456a0ccdaf69d5f32e.log
2023-06-06T13:07:49.746Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/event-dispatcher-75dbc9df8-m2pvf_lion-ac-vxfm_event-dispatcher-afb176ea0302bbd174a9e085952a0b4befe8233f06d7de4ef6c474ae38b9b2b4.log
2023-06-06T13:07:49.746Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/sabreapilinter-568ff4b86b-8t2w5_lion-ac-valtool_sabreapilinter-7ffb147328901c91c24bb8a1def7c87a708c568c6bef3c717f54f3546809af4b.log
2023-06-06T13:07:50.741Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2023-06-06T13:07:50.741Z        INFO    [publisher]     pipeline/retry.go:223     done
2023-06-06T13:07:50.741Z        INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://elk.my.org.com:9192))
2023-06-06T13:07:50.745Z        INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(async(tcp://elk.my.org.com:9192)) established
2023-06-06T13:07:59.748Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/jira-updater-8479696c4d-j4rwp_lion-ac-cc_jira-updater-087de6212b009389aa2d84fe0d2c98e2a2b7873f4d99f7cc8b2a6c17fe50e6e6.log
2023-06-06T13:07:59.748Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/event-dispatcher-666f8dbcf-9kxrd_lion-ac-krv_event-dispatcher-449d81a7634de294c062c261a34d447ef4c0a7d130c73ca85eb04007b6df3868.log
2023-06-06T13:08:09.750Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/event-dispatcher-6877c8b95d-l9dkk_lion-ac-ecs_event-dispatcher-c8d779396973c357df65f50c803cad9b8f1723f20fdb0f7debc406f0fc27bba5.log
2023-06-06T13:08:16.765Z        INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":{"ms":142}},"total":{"ticks":810,"time":{"ms":819},"value":810},"user":{"ticks":670,"time":{"ms":677}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":28},"info":{"ephemeral_id":"22325326-6fe2-4bde-ae7d-8a9c519d42d9","uptime":{"ms":30067}},"memstats":{"gc_next":29889056,"memory_alloc":20126880,"memory_total":201264440,"rss":91955200},"runtime":{"goroutines":121}},"filebeat":{"events":{"active":1,"added":371,"done":370},"harvester":{"open_files":17,"running":17,"started":17},"input":{"log":{"files":{"truncated":1}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":262,"batches":16,"total":262},"read":{"bytes":96},"type":"logstash","write":{"bytes":108920}},"pipeline":{"clients":1,"events":{"active":1,"filtered":108,"published":263,"retry":168,"total":371},"queue":{"acked":262}}},"registrar":{"states":{"current":91,"update":370},"writes":{"success":112,"total":112}},"system":{"cpu":{"cores":8},"load":{"1":0.24,"15":0.48,"5":0.45,"norm":{"1":0.03,"15":0.06,"5":0.0563}}}}}}
2023-06-06T13:08:46.765Z        INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":180,"time":{"ms":47}},"total":{"ticks":880,"time":{"ms":73},"value":880},"user":{"ticks":700,"time":{"ms":26}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":28},"info":{"ephemeral_id":"22325326-6fe2-4bde-ae7d-8a9c519d42d9","uptime":{"ms":60065}},"memstats":{"gc_next":30010288,"memory_alloc":28283152,"memory_total":224017704,"rss":5222400},"runtime":{"goroutines":121}},"filebeat":{"events":{"active":-1,"added":32,"done":33},"harvester":{"open_files":17,"running":17}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":33,"batches":9,"total":33},"read":{"bytes":54},"write":{"bytes":28694}},"pipeline":{"clients":1,"events":{"active":0,"published":32,"total":32},"queue":{"acked":33}}},"registrar":{"states":{"current":91,"update":33},"writes":{"success":9,"total":9}},"system":{"load":{"1":0.22,"15":0.46,"5":0.43,"norm":{"1":0.0275,"15":0.0575,"5":0.0538}}}}}}
2023-06-06T13:09:16.765Z        INFO    [monitoring]    log/log.go:145  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":210,"time":{"ms":27}},"total":{"ticks":990,"time":{"ms":105},"value":990},"user":{"ticks":780,"time":{"ms":78}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":28},"info":{"ephemeral_id":"22325326-6fe2-4bde-ae7d-8a9c519d42d9","uptime":{"ms":90066}},"memstats":{"gc_next":30032240,"memory_alloc":17753784,"memory_total":257494208},"runtime":{"goroutines":121}},"filebeat":{"events":{"added":56,"done":56},"harvester":{"open_files":17,"running":17}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":56,"batches":14,"total":56},"read":{"bytes":84},"write":{"bytes":45543}},"pipeline":{"clients":1,"events":{"active":0,"published":56,"total":56},"queue":{"acked":56}}},"registrar":{"states":{"current":91,"update":56},"writes":{"success":14,"total":14}},"system":{"load":{"1":0.13,"15":0.45,"5":0.38,"norm":{"1":0.0163,"15":0.0563,"5":0.0475}}}}}}
2023-06-06T13:09:19.764Z        INFO    log/harvester.go:302    Harvester started for file: /var/log/containers/cisco-vultr-product-registry-api-764686b76b-kjcbg_lion-ac-global_cisco-vultr-product-registry-api-295f6fda82338cd22d0f8ef454428d91ce2365fa60cca4df815649f386faadf6.log

From your Filebeat log, it is connecting to Logstash without any issues.

2023-06-06T13:07:50.741Z INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(async(tcp://elk.my.org.com:9192))
2023-06-06T13:07:50.745Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(async(tcp://elk.my.org.com:9192)) established

I just noticed something in your pipeline: you have a couple of conditionals on the field [kubernetes][container_name]. Where did you get this field from?

You are not parsing your message, so I'm not sure this field exists in your event.

Also, I'm not sure how your k8s setup works, as I do not have experience with Kubernetes, but your Logstash error has the following information:

[2023-06-05T18:09:57,860][INFO ][org.logstash.beats.BeatsHandler] [local: 172.1.0.2:9192, remote: 10.20.12.162:38784] Handling exception: org.logstash.beats.InvalidFrameProtocolException:

This means that something with this IP address is trying to send logs to your Logstash on the beats port without using the beats protocol. What is this IP? Is it the egress IP of your Kubernetes cluster?

If you check this line in your Filebeat log, you will not find that IP address.

2023-06-06T13:07:46.762Z INFO [beat] instance/beat.go:997 Host info

My suggestion is to remove all the conditionals you have in your Logstash pipeline to see whether it is really Filebeat that has an issue, or whether something else is trying to send data to the same port.

Try to run this pipeline in your Logstash:

input {
  beats {
    port => 9192
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["<%= @ipaddress%>:9200"]
    index => "troubleshoot-filebeat-%{+YYYY.MM.dd}"
  }
}
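
While testing, you can also add a stdout output to see exactly what the events look like and whether [kubernetes][container_name] is actually present:

output {
  # Prints every event to Logstash's own log for inspection; remove after debugging
  stdout { codec => rubydebug }
}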

@leandrojmp : Thanks a lot for your input; let me try the suggestion and get back to you.

@leandrojmp : Really appreciate your help here!
The basic Logstash config worked and I can see the index being created.

What could be the issue with the existing config? Were the logs not getting matched, or is the filter section interfering somewhere? I'm not able to work out what's wrong with the existing settings.

filter {
  if [kubernetes][container_name] in ["arrig", "jsoc"]  {
    json {
          source => "log"
    }
    if [time] {
      date {
        match => [ "time" , "UNIX", "ISO8601"]
        target => "time"
      }
    }
  }

  if "alpha" in [kubernetes][container_name]  {
    json {
          source => "log"
          target => "api"
          remove_field => "log" 
    }
  }   
  if "-delta" in [kubernetes][container_name] {
    json {
          source => "log"
          target => "delta"
          remove_field => "delta" 
    }
  }      
  if "jira-logger-" in [kubernetes][container_name] {
    json {
          source => "log"
          target => "jira"
          remove_field => "log" 
    }
  }
  if "source-checker" in [kubernetes][container_name] {
    json {
      source => "log"
      target => "pr_builder"
      remove_field => "log"
    }
  }
      
}
output {

  if "source-checker" in [kubernetes][container_name] {
    elasticsearch {
      id => "source-checker"
      hosts => ["<%= @ipaddress%>:9200"]
      index => "ource-checker-%{[kubernetes][container_name]}-%{+YYYY.MM.dd}"
    }
  }
  # ... and so on for the other containers
}

The issue is that you are filtering on some fields that may not exist in your document.

For example in your output you have

if "value" in [kubernetes][container_name]

If your document does not have the field [kubernetes][container_name], the conditional will never match, and you will never see the event in Elasticsearch.
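
A safer pattern is to check that the field exists before testing its value, for example (a sketch using one of your container names):

filter {
  # Only run container-name logic when the field actually exists
  if [kubernetes][container_name] {
    if "source-checker" in [kubernetes][container_name] {
      json {
        source => "log"
        target => "pr_builder"
        remove_field => "log"
      }
    }
  }
}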

You need to provide a sample of the messages you are receiving in Elasticsearch from the pipeline without the filters.


@leandrojmp : But the conditions that do match will still show up, right?
Say I have 10 patterns inside the filter, out of which 6 match the logs or log patterns; those 6 will be seen in Kibana, correct?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.