Hello there,
I'm setting up Filebeat to send logs to Logstash, but Filebeat cannot establish a TCP connection to Logstash on port 5044; the connection is reset by peer.
Here is the filebeat log:
2018-08-24T05:52:44.501Z INFO instance/beat.go:225 Setup Beat: filebeat; Version: 6.3.2
2018-08-24T05:52:44.503Z INFO pipeline/module.go:81 Beat name: mycompany.us-east-2.compute.internal
2018-08-24T05:52:44.503Z INFO instance/beat.go:315 filebeat start running.
2018-08-24T05:52:44.503Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-08-24T05:52:44.503Z INFO registrar/registrar.go:117 Loading registrar data from /var/lib/filebeat/registry
2018-08-24T05:52:44.504Z INFO registrar/registrar.go:124 States Loaded from registrar: 4
2018-08-24T05:52:44.504Z WARN beater/filebeat.go:354 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-24T05:52:44.504Z INFO crawler/crawler.go:48 Loading Inputs: 1
2018-08-24T05:52:44.504Z INFO log/input.go:118 Configured paths: [/var/log/messages /var/log/secure]
2018-08-24T05:52:44.504Z INFO input/input.go:88 Starting input of type: log; ID: 1435945846717654616
2018-08-24T05:52:44.504Z INFO crawler/crawler.go:82 Loading and starting Inputs completed. Enabled inputs: 1
2018-08-24T05:52:44.505Z INFO cfgfile/reload.go:122 Config reloader started
2018-08-24T05:52:44.505Z INFO log/harvester.go:228 Harvester started for file: /var/log/secure
2018-08-24T05:52:44.505Z INFO log/harvester.go:228 Harvester started for file: /var/log/messages
2018-08-24T05:52:45.538Z ERROR pipeline/output.go:74 Failed to connect: read tcp x.x.x.x:51254->y.y.y.y:5044: read: connection reset by peer
2018-08-24T05:52:47.543Z ERROR pipeline/output.go:74 Failed to connect: read tcp x.x.x.x:51256->y.y.y.y:5044: read: connection reset by peer
2018-08-24T05:52:51.548Z ERROR pipeline/output.go:74 Failed to connect: read tcp x.x.x.x:51258->y.y.y.y:5044: read: connection reset by peer
^C2018-08-24T05:52:54.656Z INFO beater/filebeat.go:420 Stopping filebeat
2018-08-24T05:52:54.656Z INFO crawler/crawler.go:109 Stopping Crawler
2018-08-24T05:52:54.656Z INFO crawler/crawler.go:119 Stopping 1 inputs
2018-08-24T05:52:54.656Z INFO cfgfile/reload.go:142 Dynamic config reloader stopped
2018-08-24T05:52:54.656Z INFO input/input.go:122 input ticker stopped
2018-08-24T05:52:54.656Z INFO input/input.go:139 Stopping Input: 1435945846717654616
2018-08-24T05:52:54.656Z INFO log/harvester.go:249 Reader was closed: /var/log/secure. Closing.
2018-08-24T05:52:54.656Z INFO crawler/crawler.go:135 Crawler stopped
2018-08-24T05:52:54.656Z INFO registrar/registrar.go:339 Stopping Registrar
2018-08-24T05:52:54.657Z INFO registrar/registrar.go:265 Ending Registrar
2018-08-24T05:52:54.661Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":33}},"total":{"ticks":110,"time":{"ms":117},"value":110},"user":{"ticks":80,"time":{"ms":84}}},"info":{"ephemeral_id":"e6520a40-dbf5-4d51-afc3-4d60f406fca2","uptime":{"ms":10168}},"memstats":{"gc_next":8009984,"memory_alloc":4989728,"memory_total":14504568,"rss":19509248}},"filebeat":{"events":{"active":4116,"added":4121,"done":5},"harvester":{"closed":2,"open_files":0,"running":0,"started":2}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"read":{"errors":4},"type":"logstash","write":{"bytes":552}},"pipeline":{"clients":0,"events":{"active":4116,"failed":1,"filtered":4,"published":4116,"retry":4096,"total":4121}}},"registrar":{"states":{"current":4,"update":4},"writes":{"success":5,"total":5}},"system":{"cpu":{"cores":2},"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.025,"5":0.005}}}}}}
2018-08-24T05:52:54.662Z INFO [monitoring] log/log.go:133 Uptime: 10.171420701s
2018-08-24T05:52:54.662Z INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-08-24T05:52:54.662Z INFO instance/beat.go:321 filebeat stopped.
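For reference, my filebeat.yml points at Logstash roughly like the sketch below; the host and CA path are placeholders rather than my literal values:

# filebeat.yml (sketch, placeholder values)
output.logstash:
  hosts: ["y.y.y.y:5044"]
  # If the Logstash beats input has ssl => true, TLS must be enabled here too,
  # e.g. (placeholder path):
  #ssl.certificate_authorities: ["/etc/filebeat/root-ca.pem"]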
Here is the logstash.conf file:
output {
  elasticsearch {
    user => "filebeat"
    password => "fb-password"
    ssl => true
    ssl_certificate_verification => false
    cacert => '/etc/logstash/root-ca.pem'
    action => "index"
    hosts => ["x.x.x.x"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
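The block above is only the output section; the beats input listening on 5044 is configured separately. A minimal sketch of that input, with placeholder certificate paths rather than my exact settings, looks like:

input {
  beats {
    port => 5044
    # If ssl is enabled here, Filebeat has to connect with TLS as well.
    # Paths below are placeholders.
    ssl => true
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key => "/etc/logstash/logstash.key"
  }
}

I suspect a mismatch between the TLS settings on the two sides could explain the reset, but I'm not sure.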
In the Logstash log, I see the following errors:
=================
[2018-08-24T05:41:40,234][INFO ][org.logstash.beats.BeatsHandler] [local: x.x.x.x:5044, remote: y.y.y.y:51252] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 1
[2018-08-24T05:41:40,234][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 1
...
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.18.Final.jar:4.1.18.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 1
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-5.0.16.jar:?]
Filebeat, Logstash, and Elasticsearch are all v6.3.2, running on AWS.
Please take a look and let me know how to work around this.
Thank you very much
Li