Logstash goes into a failed state after restart

Hi Team,

I have configured the ELK stack to visualize Squid logs in Kibana.
I use Filebeat on the Squid server to ship the logs to Logstash.
In Logstash I parse the Squid logs with a grok filter, but after a restart Logstash goes into a failed state.

Please help me with this. Thanks in advance.

Here is my Logstash configuration:

vim /etc/logstash/conf.d/squid.conf

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => {
      "message" => "%{NUMBER:timestamp}%{SPACE}%{NUMBER:duration}\s%{IP:client_address}\s%{WORD:cache_result}/%{POSINT:status_code}\s%{NUMBER:bytes}\s%{WORD:request_method}\s%{NOTSPACE:url}\s%{NOTSPACE:user}\s%{WORD:hierarchy_code}/%{NOTSPACE:server}\s%{NOTSPACE:content_type}"
    }
    remove_field => ["message"]
  }
}

output {
  elasticsearch {
    host => ["localhost:9200"]
  }
}
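As an aside, the grok pattern above can be sanity-checked offline. Grok macros like NUMBER, IP, WORD, and NOTSPACE expand to ordinary regexes; below is a rough Python equivalent of that pattern (the regexes are simplified approximations, and the sample log line is made up in Squid's native access.log format):

```python
import re

# Rough regex equivalent of the grok pattern in the config above.
# Each named group mirrors a grok field; the character classes are
# simplified stand-ins for grok's NUMBER/IP/WORD/NOTSPACE macros.
pattern = re.compile(
    r"(?P<timestamp>\d+\.\d+)\s+"          # %{NUMBER:timestamp}%{SPACE}
    r"(?P<duration>\d+)\s"                 # %{NUMBER:duration}
    r"(?P<client_address>\d{1,3}(?:\.\d{1,3}){3})\s"  # %{IP:client_address}
    r"(?P<cache_result>\w+)/(?P<status_code>\d+)\s"   # %{WORD}/%{POSINT}
    r"(?P<bytes>\d+)\s"                    # %{NUMBER:bytes}
    r"(?P<request_method>\w+)\s"           # %{WORD:request_method}
    r"(?P<url>\S+)\s"                      # %{NOTSPACE:url}
    r"(?P<user>\S+)\s"                     # %{NOTSPACE:user}
    r"(?P<hierarchy_code>\w+)/(?P<server>\S+)\s"      # %{WORD}/%{NOTSPACE}
    r"(?P<content_type>\S+)"               # %{NOTSPACE:content_type}
)

# A made-up sample line in Squid's native access.log format.
line = ("1562597413.245    245 192.168.60.129 TCP_MISS/200 4528 GET "
        "http://example.com/index.html - HIER_DIRECT/93.184.216.34 text/html")

m = pattern.match(line)
print(m.group("cache_result"), m.group("status_code"), m.group("url"))
```

If a real line from your access.log fails to match here, the grok filter will most likely tag the event with _grokparsefailure instead of extracting the fields.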

Here is the Logstash log:
#tail -f /var/log/logstash/logstash-plain.log

[2019-07-08T20:10:13,822][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-07-08T20:10:18,635][INFO ][logstash.runner ] Logstash shut down.
[2019-07-08T20:10:56,590][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-08T20:11:08,061][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch
[2019-07-08T20:11:08,177][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/config/mixin.rb:87:in `config_init'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:60:in `initialize'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:232:in `initialize'", "org/logstash/config/ir/compiler/OutputDelegatorExt.java:48:in `initialize'", "org/logstash/config/ir/compiler/OutputDelegatorExt.java:30:in `initialize'", "org/logstash/plugins/PluginFactoryExt.java:242:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:140:in `buildOutput'", "org/logstash/execution/JavaBasePipelineExt.java:50:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:24:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-07-08T20:11:08,925][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-07-08T20:11:13,589][INFO ][logstash.runner ] Logstash shut down.
[2019-07-08T20:11:58,030][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-08T20:12:09,005][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch

Please help me..

Elasticsearch output plugin parameter is "hosts", not "host".
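For reference, the corrected output block with the plural setting would look like this (host and port taken from the original post):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```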

Hi Rugenl,

Thanks for the assist. It's working now.

But the Squid logs are still not showing up in Kibana.

Logstash logs

cat logstash-plain.log

[2019-07-08T20:55:30,503][INFO ][org.logstash.beats.BeatsHandler] [local: 192.168.60.128:5044, remote: 192.168.60.129:57439] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: -1
[2019-07-08T20:55:30,503][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: -1
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:472) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:405) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:372) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:38) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:236) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.30.Final.jar:4.1.30.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: -1
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:92) ~[logstash-input-beats-6.0.0.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-all-4.1.30.Final.jar:4.1.30.Final]
... 10 more

Run tcpdump -i any port 5044, are you getting data?

I think this is an error I see on idle pipelines.

You can also use the logstash node stats api https://www.elastic.co/guide/en/logstash/current/node-stats-api.html
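For example (assuming Logstash is running with its API on the default port 9600), the pipeline event counters show whether events are flowing:

```
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'
```

In the response, compare events.in and events.out for the main pipeline; if "in" grows while "out" stays at 0, events are arriving but never reaching the output.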

Yes, I am getting data on port 5044.

tcpdump -i any port 5044
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
18:10:27.763991 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [S], seq 3259728290, win 29200, options [mss 1460,sackOK,TS val 3061742 ecr 0,nop,wscale 6], length 0
18:10:27.764063 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [S.], seq 1481446807, ack 3259728291, win 28960, options [mss 1460,sackOK,TS val 3598837 ecr 3061742,nop,wscale 7], length 0
18:10:27.768403 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [.], ack 1, win 457, options [nop,nop,TS val 3061747 ecr 3598837], length 0
18:10:27.769812 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 1:3, ack 1, win 457, options [nop,nop,TS val 3061748 ecr 3598837], length 2
18:10:27.769830 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 3, win 227, options [nop,nop,TS val 3598843 ecr 3061748], length 0
18:10:27.769937 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 3:7, ack 1, win 457, options [nop,nop,TS val 3061748 ecr 3598837], length 4
18:10:27.769943 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 7, win 227, options [nop,nop,TS val 3598843 ecr 3061748], length 0
18:10:27.770068 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 7:9, ack 1, win 457, options [nop,nop,TS val 3061748 ecr 3598843], length 2
18:10:27.770073 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 9, win 227, options [nop,nop,TS val 3598843 ecr 3061748], length 0
18:10:27.770228 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 9:13, ack 1, win 457, options [nop,nop,TS val 3061749 ecr 3598843], length 4
18:10:27.770234 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 13, win 227, options [nop,nop,TS val 3598843 ecr 3061749], length 0
18:10:27.770327 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 13:299, ack 1, win 457, options [nop,nop,TS val 3061749 ecr 3598843], length 286
18:10:27.770332 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 299, win 235, options [nop,nop,TS val 3598843 ecr 3061749], length 0
18:10:27.883866 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [P.], seq 1:7, ack 299, win 235, options [nop,nop,TS val 3598957 ecr 3061749], length 6
18:10:27.884659 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [.], ack 7, win 457, options [nop,nop,TS val 3061863 ecr 3598957], length 0
18:10:54.258317 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 299:301, ack 7, win 457, options [nop,nop,TS val 3088237 ecr 3598957], length 2
18:10:54.258367 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 301, win 235, options [nop,nop,TS val 3625331 ecr 3088237], length 0
18:10:54.258407 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 301:305, ack 7, win 457, options [nop,nop,TS val 3088237 ecr 3598957], length 4
18:10:54.258412 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 305, win 235, options [nop,nop,TS val 3625331 ecr 3088237], length 0
18:10:54.258440 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 305:307, ack 7, win 457, options [nop,nop,TS val 3088237 ecr 3598957], length 2
18:10:54.258447 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 307, win 235, options [nop,nop,TS val 3625331 ecr 3088237], length 0
18:10:54.258472 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 307:311, ack 7, win 457, options [nop,nop,TS val 3088237 ecr 3598957], length 4
18:10:54.258506 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 311, win 235, options [nop,nop,TS val 3625332 ecr 3088237], length 0
18:10:54.258532 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [P.], seq 311:599, ack 7, win 457, options [nop,nop,TS val 3088237 ecr 3598957], length 288
18:10:54.258536 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [.], ack 599, win 243, options [nop,nop,TS val 3625332 ecr 3088237], length 0
18:10:54.262602 IP 192.168.60.128.lxi-evntsvc > 192.168.60.129.46762: Flags [P.], seq 7:13, ack 599, win 243, options [nop,nop,TS val 3625336 ecr 3088237], length 6
18:10:54.263042 IP 192.168.60.129.46762 > 192.168.60.128.lxi-evntsvc: Flags [.], ack 13, win 457, options [nop,nop,TS val 3088241 ecr 3625336], length 0

Here are my Logstash logs:

tail -f /var/log/logstash/logstash-plain.log

[2019-07-09T18:00:54,297][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-07-09T18:00:54,529][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-07-09T18:00:54,759][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-07-09T18:00:55,104][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-07-09T18:00:55,112][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x23bc33ba run>"}
[2019-07-09T18:00:56,451][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-07-09T18:00:56,483][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-07-09T18:00:56,760][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-07-09T18:00:56,770][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-07-09T18:00:57,914][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Are you getting any indices? You didn't specify an index name; I think the default will be logstash-YYYY.MM.dd. Do you have any indices with today's date?
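You can also list the indices from the command line (assuming Elasticsearch is on localhost:9200):

```
curl -s 'http://localhost:9200/_cat/indices?v'
```

Look for an index named with today's date and a non-zero docs.count.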

Sorry rugenl, I am new to the ELK stack. How do I define the indices?

I am getting this warning:

[2019-07-09T22:34:02,774][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-07-09T22:34:04,184][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.

Check the Kibana Monitoring page, under Elasticsearch, Indices tab.

Yes, I have defined an index in Logstash:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{index}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
  }
}
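One thing to check here: index => "%{index}-%{+YYYY.MM.dd}" is a sprintf field reference, so it only resolves if every event actually carries an "index" field (added by Filebeat or by a filter). If the field is missing, Logstash writes the literal text %{index} into the index name. A safer sketch uses a literal prefix (the name "squid" below is just an example, not from the original config):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "squid-%{+YYYY.MM.dd}"
  }
}
```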

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.