Can't receive Filebeat logs via Logstash


I have a problem receiving logs from Filebeat via Logstash. I can send them directly to Elasticsearch, but not via Logstash.

Logstash conf.d input file:

input {
    beats {
        type => "filebeat"
        port => "5044"
    }
}

Filebeat config:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: [""]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [""]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

I can't find the problem in the logs. How can I test the connection?
Later I want to add this filter for Odoo logs, but I'm far away from that. The netflow module, which is also configured in Logstash, is working fine.
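Two quick ways to test it from the Filebeat side: Filebeat's built-in `test` subcommands, and a plain TCP probe. A sketch (`LOGSTASH_HOST` is a placeholder for your Logstash node's address):

```shell
# Validate the Filebeat configuration file
filebeat test config

# Try to connect to the configured output (Logstash, in this case)
filebeat test output

# Or probe the beats port directly from the Filebeat host
nc -vz LOGSTASH_HOST 5044
```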

Did you check both the Filebeat and Logstash logs? Also, you need an output from Logstash to Elasticsearch.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => [""]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Make sure that your plugins for Logstash are enabled as well.
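It can also be worth confirming that the beats input plugin is actually installed. A sketch, assuming the default package-install path:

```shell
# List installed Logstash plugins and look for the beats input
/usr/share/logstash/bin/logstash-plugin list | grep beats
# logstash-input-beats should appear in the list
```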

Hi Ryan,

Thanks for your reply. I have added the missing code to the output, but it does not help.
I found the following in the log:

Apr 23 06:23:52 SVGWMA-XXXX-04 filebeat: 2019-04-23T06:23:52.302+0200#011ERROR#011pipeline/output.go:100#011Failed to connect to backoff(async(tcp:// dial tcp connect: connection refused
Apr 23 06:23:52 SVGWMA-XXXXX-04 filebeat: 2019-04-23T06:23:52.302+0200#011INFO#011pipeline/output.go:93#011Attempting to reconnect to backoff(async(tcp:// with 736 reconnect attempt(s)
Apr 23 06:23:52 SVGWMA-XXXXX-04 filebeat: 2019-04-23T06:23:52.302+0200#011DEBUG#011[logstash]#011logstash/async.go:111#011connect
Apr 23 06:23:54 SVGWMA-XXXXX-04 filebeat: 2019-04-23T06:23:54.925+0200#011INFO#011[monitoring]#011log/log.go:144#011Non-zero metrics in the last 30s#011{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":6640,"time":{"ms":9}},"total":{"ticks":27590,"time":{"ms":11},"value":27590},"user":{"ticks":20950,"time":{"ms":2}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"62baf388-b30e-48ac-a007-b6bbf26713aa","uptime":{"ms":33303045}},"memstats":{"gc_next":44016864,"memory_alloc":22763448,"memory_total":492209632}},"filebeat":{"harvester":{"open_files":3,"running":2}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":7,"events":{"active":4118,"retry":2048}}},"registrar":{"states":{"current":8}},"system":{"load":{"1":0.02,"15":0.05,"5":0.03,"norm":{"1":0.01,"15":0.025,"5":0.015}}}}}}
Apr 23 06:23:57 SVGWMA-XXXX-04 filebeat: 2019-04-23T06:23:57.054+0200#011DEBUG#011[input]#011input/input.go:152#011Run input
Apr 23 06:23:57 SVGWMA-XXXX-04 filebeat: 2019-04-23T06:23:57.054+0200#011DEBUG#011[input]#011log/input.go:174#011Start next scan
Apr 23 06:23:57 SVGWMA-XXXXX-04 filebeat: 2019-04-23T06:23:57.054+0200#011DEBUG#011[input]#011log/input.go:404#011Check file for harvesting: /var/log/audit/audit.log

So Filebeat is running, collecting logs, and sending to the correct IP, but the connection is refused... why?
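"Connection refused" generally means nothing is listening on that port at all. Two things worth checking on the Logstash node (a sketch, assuming a systemd-managed install as on CentOS 7):

```shell
# Is any process listening on the beats port?
sudo ss -tlnp | grep 5044

# Follow the Logstash log while it starts, to see whether the beats input comes up
sudo journalctl -u logstash -f
```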


@hispeed What about the firewall and SELinux configuration on the Logstash node? Make sure that the firewall is not restricting the connection.

Hi @Debashis

Both are disabled on both machines.

Do you have X-Pack enabled on your ES cluster or Logstash? With the connection refused error, you may need to pass your username and password credentials for ES.

No, X-Pack is not enabled, and not in Logstash either.

# ------------ X-Pack Settings (not applicable for OSS build)--------------
# X-Pack Monitoring
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
# X-Pack Management
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

If you've created a username and password for logstash_internal, try passing those credentials through in your output config. The "failed to connect" message is what's most important to troubleshoot. Also, take a look at which indices your logstash_writer role has access to. You're probably going to have to add filebeat* to its permissions.

output {
  elasticsearch {
    hosts => [""]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "team_password"
    #ssl => true
    #ssl_certificate_verification => false
    #cacert => "/etc/logstash/globalcert/ca/ca.crt"
  }
}

I have not created anything with X-Pack or a role in Kibana, so this is probably not the problem. I think that Logstash is not starting the pipeline.

[2019-04-23T17:28:21,020][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2019-04-23T17:28:21,928][INFO ][logstash.javapipeline    ] Pipeline terminated {""=>"module-netflow"}
[2019-04-23T17:28:52,557][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-23T17:28:52,576][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.0"}
[2019-04-23T17:28:54,015][INFO ][logstash.config.modulescommon] Starting the netflow module
[2019-04-23T17:29:16,769][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-04-23T17:29:17,100][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-04-23T17:29:17,182][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-04-23T17:29:17,188][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-04-23T17:29:17,294][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-04-23T17:29:18,175][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,234][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:18,242][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:18,840][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,917][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:18,921][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:19,455][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-City.mmdb"}
[2019-04-23T17:29:19,679][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.0-java/vendor/GeoLite2-ASN.mmdb"}
[2019-04-23T17:29:19,752][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"module-netflow", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x6d07a31 run>"}
[2019-04-23T17:29:19,994][INFO ][logstash.javapipeline    ] Pipeline started {""=>"module-netflow"}
[2019-04-23T17:29:20,246][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>""}
[2019-04-23T17:29:20,296][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"module-netflow"], :non_running_pipelines=>[]}
[2019-04-23T17:29:20,599][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"", :receive_buffer_bytes=>"212992", :queue_size=>"2000"}
[2019-04-23T17:29:21,407][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-23T17:30:02,782][WARN ][logstash.codecs.netflow  ] Can't (yet) decode flowset id 1024 from source id 0, because no template to decode it with has been received. This message will usually go away after 1 minute.

How can I check whether the input on 5044 is working and the pipeline is up and running?
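The Logstash monitoring API on port 9600 can answer this directly. A sketch, assuming the API is reachable locally (the log above shows it started on 9600):

```shell
# Show which pipelines are loaded and which inputs/filters/outputs they contain
curl -s 'localhost:9600/_node/pipelines?pretty'

# Per-pipeline event statistics; non-zero "in" counts mean events are arriving
curl -s 'localhost:9600/_node/stats/pipelines?pretty'
```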

According to this your pipeline is running.

This is going back a way for me, but if I remember correctly I had to copy my config into the Pipelines tool in Kibana to get things working. I'm not sure why that worked, but it did. After that I had to make sure that the logstash_writer role, assuming you've created one, had permissions to write, delete, and create indices for the specific beat.

I can't connect to Logstash via telnet, so the problem is probably Logstash.
I hope someone from Logstash is reading this?

I receive "connection refused" when I try via telnet.

Try explicitly setting host => in the beats input configuration.
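For example (a sketch; `0.0.0.0` binds to all interfaces and is only illustrative, pick the address that fits your network):

```
input {
  beats {
    host => "0.0.0.0"
    port => 5044
  }
}
```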

Hi Badger,

Doesn't seem to work.

What was happening here?

I think this is the real problem I have: it's not picking up the config files. The netflow module is working fine, and I have the configuration it should pick up in conf.d. At the moment pipelines.yml is commented out.


This all happened because of the netflow module which I had activated. Does anyone actually know how the netflow module should be started and configured? (Step by step on CentOS 7 would be nice.)
@theuntergeek and @guyboertje
I'm sorry for pulling you into this, but guyboertje, this is an information problem; look at:

When I deactivate the netflow module, I receive the Filebeat notifications. So the question is: how do I start and configure the netflow module?
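One possible way around this, given that the earlier warning says pipelines.yml is ignored whenever modules are specified: stop starting netflow via the module setting and run both flows as ordinary pipelines in pipelines.yml. A sketch under that assumption (the pipeline ids and the netflow.conf path are hypothetical):

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: netflow
  path.config: "/etc/logstash/netflow.conf"
```

The hypothetical netflow.conf could then use a plain udp input with `codec => netflow`, which, as far as I know, is the same codec the module drives internally.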


Refer to this blog.

Hi jawad846,

Thanks for your link, but it doesn't help me a lot.
Also, looking at GitHub, it seems to me that at least the netflow module won't be supported in the future.

Any statements?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.