Failing to ship data from Metricbeat to Logstash

Hello,

I'm pretty new to the ELK stack and I need your help to identify where my problem comes from, please.

I installed Elasticsearch, Kibana and Logstash and they are working well. I played with the sample data to get more familiar with ELK. Then I installed Metricbeat and Filebeat to ship data to Logstash. I've spent the last three days looking on Google for people who hit the same problem, but I'm kinda stuck.

My metricbeat.yml looks like this:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "localhost:5601"
output.logstash:
  hosts: ["localhost:5044"]
  loadbalance: true
  ssl.enabled: true

For Logstash, my pipelines.yml isn't configured;
in logstash.yml I only set the name of my node:

node.name: logstash_test

and I start Logstash with this conf:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "indexforlogstash"
  }
}

When I run
metricbeat.exe -c metricbeat.yml -e
after Elasticsearch, Kibana and Logstash have been launched, I get this error on Metricbeat:

Failed to connect to backoff(async(tcp://localhost:5044)): tls: first record does not look like a TLS handshake

And on the Logstash instance I get these errors:

[org.logstash.beats.BeatsHandler][main] [local: 127.0.0.1:5044, remote: 127.0.0.1:62671] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
[2019-11-21T12:23:20,132][WARN ][io.netty.channel.DefaultChannelPipeline][main] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

I hope I gave you all the information needed to understand the problem. Have a great day.

Best regards!

Your problem seems to come from the fact that you enabled SSL in Metricbeat's config, but NOT in Logstash's config.

Either enable it on both sides (Metricbeat config and Logstash "input" section) or disable it on both sides.
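
Roughly, a consistent "SSL enabled" setup would look like this on both ends (the certificate and key paths below are only placeholders, adjust them to your environment):

In metricbeat.yml:

output.logstash:
  hosts: ["localhost:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["C:/path/to/ca.crt"]

In the Logstash pipeline:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "C:/path/to/logstash.crt"
    ssl_key => "C:/path/to/logstash.key"
  }
}

To disable it on both sides instead, set ssl.enabled to false (or remove it) in metricbeat.yml and keep the beats input with just the port.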

I disabled it in the Metricbeat config, and it still doesn't work.

I'm going to try enabling it on both sides, but I'm switching from Metricbeat to Filebeat (I need logs first; I'll look into metrics later).

Do we agree that if I enable SSL, plain TCP is still allowed too?

Last thought: if I enable SSL, do I need to create certificates, etc.?

Thanks for your time,

Best regards.

I'm still working on it, and now my Filebeat instance tells me:

2019-11-21T16:29:30.010+0100 INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 24 reconnect attempt(s)
2019-11-21T16:29:45.215+0100 INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1125,"time":{"ms":32}},"total":{"ticks":2015,"time":{"ms":32},"value":2015},"user":{"ticks":890}},"handles":{"open":257},"info":{"ephemeral_id":"f5fc5de6-5bf9-4e49-9847-adcd7ea371d5","uptime":{"ms":930993}},"memstats":{"gc_next":33812368,"memory_alloc":17986928,"memory_total":54735736,"rss":20480},"runtime":{"goroutines":81}},"filebeat":{"harvester":{"open_files":2,"running":3}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"pipeline":{"clients":9,"events":{"active":4117,"retry":2048}}},"registrar":{"states":{"current":5}}}}}
2019-11-21T16:30:04.675+0100 ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp 127.0.0.1:5044: connectex: Aucune connexion n’a pu être établie car l’ordinateur cible l’a expressément refusée. (= "No connection could be made because the target machine actively refused it.")
2019-11-21T16:30:04.675+0100 INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 25 reconnect attempt(s)

And my Logstash instance tells me:

Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to C:/Users/T6SH/Desktop/Logstash/logs which is now configured via log4j2.properties
[2019-11-21T16:22:49,405][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-11-21T16:22:49,432][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.2"}
[2019-11-21T16:22:52,189][INFO ][org.reflections.Reflections] Reflections took 109 ms to scan 1 urls, producing 20 keys and 40 values
[2019-11-21T16:22:53,966][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-11-21T16:22:54,362][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-11-21T16:22:54,431][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2019-11-21T16:22:54,438][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-11-21T16:22:54,477][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-11-21T16:22:54,588][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2019-11-21T16:22:54,658][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-11-21T16:22:54,680][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x44600497 run>"}
[2019-11-21T16:22:54,694][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-11-21T16:22:57,501][ERROR][logstash.javapipeline ][main] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<LogStash::ConfigurationError: Certificate or Certificate Key not configured>, :backtrace=>["C:/Users/T6SH/Desktop/Logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:144:in `register'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:195:in `block in register_plugins'", "org/jruby/RubyArray.java:1800:in `each'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:194:in `register_plugins'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:296:in `start_inputs'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:252:in `start_workers'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:149:in `run'", "C:/Users/T6SH/Desktop/Logstash/logstash-core/lib/logstash/java_pipeline.rb:108:in `block in start'"], :thread=>"#<Thread:0x44600497 run>"}
[2019-11-21T16:22:57,526][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2019-11-21T16:22:58,380][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-11-21T16:23:03,110][INFO ][logstash.runner ] Logstash shut down.

The shutdown happens automatically...

In filebeat.yml, I did enable SSL: ssl.enabled: true

In logstash.conf, I wrote: input { beats { port => 5044 ssl => true } }
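
From what I understand of the "Certificate or Certificate Key not configured" error above, ssl => true in the beats input also needs a certificate and key to be configured, something like this (the paths are only placeholders):

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "C:/path/to/logstash.crt"
    ssl_key => "C:/path/to/logstash.key"
  }
}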

I put the same username and password in EL, KI, LO and FI (Elasticsearch, Kibana, Logstash, Filebeat).

With SSL disabled on both sides, Logstash tells me: invalid version of protocol 22 and 3

and Filebeat:

ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp 127.0.0.1:5044: connectex: Aucune connexion n’a pu être établie car l’ordinateur cible l’a expressément refusée.

ERROR fileset/factory.go:105 Error creating input: No paths were defined for input accessing config

ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): tls: first record does not look like a TLS handshake
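
The "No paths were defined" error seems to mean one of my inputs/modules has no paths set; if I read the docs right, a plain log input needs something like this in filebeat.yml (the path is just an example):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\logs\*.log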

I'm working on this subject two days per week during my studies, and nobody at my work knows these technologies... So I'm giving you as much information as I can ^^

Thanks if you read all of what I wrote, best regards!

Damn it... I had two ssl.enabled lines: one saying true, and the other one hiding at the bottom of my yml, set to false...

It's now working!
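
For anyone who hits the same thing, the broken part of my filebeat.yml looked roughly like this (simplified), with the forgotten line at the bottom apparently being the one that actually took effect:

output.logstash:
  hosts: ["localhost:5044"]
  ssl.enabled: true
  # ... many other settings in between ...
  ssl.enabled: false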
