I had the ELK stack installed on our server, and it abruptly stopped working. I am getting the following errors in Filebeat. I am using version 6.8.3 for all components.
2020-02-12T08:49:17.950Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:9200))
2020-02-12T08:49:17.950Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:9200)) established
2020-02-12T08:49:18.058Z ERROR logstash/async.go:256 Failed to publish events caused by: lumberjack protocol error
2020-02-12T08:49:18.166Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-12T08:49:18.746Z INFO log/input.go:138 Configured paths: [/var/log/auth.log* /var/log/secure*]
2020-02-12T08:49:19.166Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-12T08:49:19.166Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:9200))
2020-02-12T08:49:19.166Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:9200)) established
2020-02-12T08:49:19.272Z ERROR logstash/async.go:256 Failed to publish events caused by: lumberjack protocol error
2020-02-12T08:49:19.381Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-12T08:49:20.381Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-12T08:49:20.381Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:9200))
2020-02-12T08:49:20.381Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:9200)) established
2020-02-12T08:49:20.485Z ERROR logstash/async.go:256 Failed to publish events caused by: lumberjack protocol error
2020-02-12T08:49:20.592Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-12T08:49:21.593Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-12T08:49:21.593Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:9200))
2020-02-12T08:49:21.593Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:9200)) established
2020-02-12T08:49:21.700Z ERROR logstash/async.go:256 Failed to publish events caused by: lumberjack protocol error
2020-02-12T08:49:21.802Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-12T08:49:21.878Z ERROR fileset/factory.go:105 Error creating input: Can only start an input when all related states are finished: {Id:542392-2049 Finished:false Fileinfo:0xc42032a410 Source:/home/ubuntu/our/project/log.2020-01-28 Offset:56093 Timestamp:2020-02-12 07:03:18.809805631 +0000 UTC m=+2.954352975 TTL:-1ns Type:log Meta:map[] FileStateOS:542392-2049}
2020-02-12T08:49:21.878Z ERROR [reload] cfgfile/list.go:96 Error creating runner from config: Can only start an input when all related states are finished: {Id:542392-2049 Finished:false Fileinfo:0xc42032a410 Source:/home/ubuntu/our/project/.log.2020-01-28 Offset:56093 Timestamp:2020-02-12 07:03:18.809805631 +0000 UTC m=+2.954352975 TTL:-1ns Type:log Meta:map[] FileStateOS:542392-2049}
This is my filebeat.yml file:
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- /var/log/apache2/*.log
    - /home/ubuntu/our/project/*
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that
  # match any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that
  # match any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # match any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  #multiline.match: after
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:9200"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
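Comparing the files, one mismatch stands out to me: the Filebeat log shows it dialing localhost:9200, which is Elasticsearch's HTTP port, while the beats input in my logstash.conf (shown below) listens on 5044. If I understand the "lumberjack protocol error" correctly, it could come from Filebeat speaking the Beats protocol to a non-Beats endpoint, in which case the output section would presumably need to be (untested sketch, port taken from my own logstash.conf):

output.logstash:
  # The beats input port from my logstash.conf,
  # not Elasticsearch's HTTP port 9200
  hosts: ["localhost:5044"]

I have not applied this yet, since the setup worked until recently and I would like to confirm the root cause first.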
This is my Logstash.conf file:
input {
  # Beats input that Filebeat should be shipping to
  beats {
    port => 5044
  }
}

filter {
  # Hash each message so that duplicate lines get the same document_id
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
    document_id => "%{[@metadata][fingerprint]}"
    document_type => "log"
  }
}
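To check whether any events reach the pipeline at all, I was planning to temporarily add a stdout output alongside the elasticsearch one (a debugging sketch only, not part of my running config):

output {
  # Temporary: print every event that reaches the output stage
  stdout { codec => rubydebug }
}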
I am getting this in my Logstash logs (note that the invalid frame reported below comes from an external address, not from localhost):
[2020-02-10T21:47:13,106][INFO ][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 0, from: /195.154.92.15:63300
[2020-02-10T21:47:13,106][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 0
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:375) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:342) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:325) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:255) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:246) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) [netty-all-4.1.3.Final.jar:4.1.3.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 0
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-3.1.32.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) ~[netty-all-4.1.3.Final.jar:4.1.3.Final]
I am getting this in my Elasticsearch logs:
[2020-02-11T10:37:56,274][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T10:45:25,668][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T10:50:27,140][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T10:57:57,431][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T12:15:00,993][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T12:20:18,469][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T12:24:48,007][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T12:35:36,133][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T14:05:02,026][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
[2020-02-11T14:08:48,908][WARN ][o.e.m.j.JvmGcMonitorService] [y4bRUbc] [gc][young][370163][784] duration [1.6s], collections [1]/[4.7s], total [1.6s]/[25.2s], memory [783.4mb]->[513.1mb]/[989.8mb], all_pools {[young] [270.9mb]->[1mb]/[273mb]}{[survivor] [1.1mb]->[1.8mb]/[34.1mb]}{[old] [511.3mb]->[511.3mb]/[682.6mb]}
[2020-02-11T14:08:48,912][INFO ][o.e.m.j.JvmGcMonitorService] [y4bRUbc] [gc][370163] overhead, spent [1.6s] collecting in the last [4.7s]
[2020-02-11T14:13:18,214][INFO ][o.e.c.m.MetaDataIndexTemplateService] [y4bRUbc] adding template [.management-beats] for index patterns [.management-beats]
Can anyone help me understand the root cause? Please let me know if any more files or logs are needed.