Filebeat logs not making it to Logstash

Hi, for some reason the remote server that has Filebeat set up on it is unable to successfully send logs to my ELK server that has Logstash running on it. I ran filebeat -e -d "publish,logstash" and got the following warnings/errors:

2020-10-22T06:27:21.311Z        INFO    [publisher]     pipeline/module.go:113  Beat name: choco-server
2020-10-22T06:27:21.311Z        WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-10-22T06:27:21.311Z        INFO    instance/beat.go:450    filebeat start running.
2020-10-22T06:27:21.312Z        INFO    memlog/store.go:119     Loading data file of '/var/lib/filebeat/registry/filebeat' succeeded. Active transaction id=0
2020-10-22T06:27:21.317Z        INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2020-10-22T06:27:21.332Z        INFO    memlog/store.go:124     Finished loading transaction log file for '/var/lib/filebeat/registry/filebeat'. Active transaction id=1527
2020-10-22T06:27:21.332Z        WARN    beater/filebeat.go:381  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-10-22T06:27:21.332Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 10
2020-10-22T06:27:21.332Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2020-10-22T06:27:21.334Z        INFO    log/input.go:157        Configured paths: [/var/log/*.log]
2020-10-22T06:27:21.334Z        INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 11204088409762598069)
2020-10-22T06:27:21.336Z        INFO    log/harvester.go:299    Harvester started for file: /var/log/cloud-init-output.log

[...]

2020-10-22T06:27:51.479Z        ERROR   [logstash]      logstash/async.go:280   Failed to publish events caused by: read tcp 10.0.35.4:38002->10.0.35.5:5044: i/o timeout
2020-10-22T06:27:51.480Z        DEBUG   [logstash]      logstash/async.go:172   2048 events out of 2048 events sent to logstash host 10.0.35.5:5044. Continue sending
2020-10-22T06:27:51.480Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2020-10-22T06:27:51.480Z        INFO    [publisher]     pipeline/retry.go:223     done
2020-10-22T06:27:51.480Z        DEBUG   [logstash]      logstash/async.go:128   close connection
2020-10-22T06:27:51.480Z        ERROR   [logstash]      logstash/async.go:280   Failed to publish events caused by: write tcp 10.0.35.4:38002->10.0.35.5:5044: use of closed network connection
2020-10-22T06:27:51.480Z        DEBUG   [logstash]      logstash/async.go:128   close connection
2020-10-22T06:27:51.480Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2020-10-22T06:27:51.480Z        INFO    [publisher]     pipeline/retry.go:223     done
2020-10-22T06:27:53.481Z        ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: write tcp 10.0.35.4:38002->10.0.35.5:5044: use of closed network connection
2020-10-22T06:27:53.481Z        INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(async(tcp://10.0.35.5:5044))
2020-10-22T06:27:53.481Z        DEBUG   [logstash]      logstash/async.go:120   connect
2020-10-22T06:27:53.481Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2020-10-22T06:27:53.481Z        INFO    [publisher]     pipeline/retry.go:223     done
2020-10-22T06:27:53.483Z        INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(async(tcp://10.0.35.5:5044)) established
2020-10-22T06:27:53.525Z        DEBUG   [logstash]      logstash/async.go:172   2048 events out of 2048 events sent to logstash host 10.0.35.5:5044. Continue sending
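
In case it's useful to anyone else debugging the same "i/o timeout", the first thing worth ruling out is plain TCP reachability from the Filebeat host to the Logstash port (using the same addresses as in my logs above; adjust if yours differ):

# From the Filebeat host (10.0.35.4): can we open a TCP connection to Logstash?
nc -vz 10.0.35.5 5044

# If netcat isn't installed, curl can do a raw TCP connect test:
timeout 5 curl -v telnet://10.0.35.5:5044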

Any guidance here? I'm banging my head against the wall right now...

[Edit: added the text of the command-line output, as that may be easier to read.]

And just in case, here's some Logstash output after running sudo ./bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/beats.conf --config.reload.automatic:

[2020-10-22T06:47:26,180][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://0.0.0.0:9200/]}}
[2020-10-22T06:47:26,470][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://0.0.0.0:9200/"}
[2020-10-22T06:47:26,537][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-10-22T06:47:26,543][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-10-22T06:47:26,631][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//0.0.0.0:9200"]}
[2020-10-22T06:47:26,724][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2020-10-22T06:47:26,817][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/beats.conf"], :thread=>"#<Thread:0x60feee16 run>"}
[2020-10-22T06:47:26,863][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-10-22T06:47:28,142][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.31}
[2020-10-22T06:47:28,258][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-10-22T06:47:28,311][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-10-22T06:47:28,504][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-10-22T06:47:28,667][INFO ][org.logstash.beats.Server][main][fafbc78e9db27f6a29354f4105e4e92ad72510e14769fe64be7c46b26a83cf0d] Starting server on port: 5044
[2020-10-22T06:47:28,976][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
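
For completeness, these are the checks I'd run on the Logstash box to confirm the Beats listener is actually bound on 5044 and the API is up (standard Linux tooling; the ports match the log above):

# Is anything listening on the Beats port and the Logstash API port?
ss -tlnp | grep -E ':(5044|9600)'

# The monitoring API should answer with node info as JSON:
curl -s 'http://localhost:9600/?pretty'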

Could you please share your Filebeat and Logstash configurations formatted using </>?

Absolutely!

Filebeat:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.35.5:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
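If it's of any use, Filebeat also ships with test subcommands that validate the config file and attempt a connection to whatever output it defines (Logstash here); as far as I know they are run like this on a package install:

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml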

Logstash on my central elk server (/etc/logstash/conf.d/beats.conf):

input {
    beats {
        port => "5044"
    }
}

output {
    elasticsearch {
        hosts => ["0.0.0.0:9200"]
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
    }
}
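
Side note: document_type is deprecated in the Elasticsearch output (Elasticsearch 7 dropped document types, which is what the "the `type` event field won't be used" warning above is about), and 0.0.0.0 is really a listen address rather than a destination. A minimal sketch of the output block with those two changes, assuming Elasticsearch runs locally on the ELK server:

output {
    elasticsearch {
        # Elasticsearch is assumed to run on the same host as Logstash
        hosts => ["localhost:9200"]
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        # document_type removed: _type no longer exists in Elasticsearch 7
    }
}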

Not sure if this helps, but here's the output after running sudo systemctl status filebeat:

   Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-10-22 07:05:23 UTC; 8h ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 4073 (filebeat)
    Tasks: 8 (limit: 4915)
   CGroup: /system.slice/filebeat.service
           └─4073 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebea

Oct 22 15:17:06 choco-server filebeat[4073]: 2020-10-22T15:17:06.163Z        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Oct 22 15:17:06 choco-server filebeat[4073]: 2020-10-22T15:17:06.163Z        INFO        [publisher]        pipeline/retry.go:223          done
Oct 22 15:17:07 choco-server filebeat[4073]: 2020-10-22T15:17:07.790Z        ERROR        [publisher_pipeline_output]        pipeline/output.go:180        failed to publish events: write tcp 
Oct 22 15:17:07 choco-server filebeat[4073]: 2020-10-22T15:17:07.790Z        INFO        [publisher_pipeline_output]        pipeline/output.go:143        Connecting to backoff(async(tcp://10.
Oct 22 15:17:07 choco-server filebeat[4073]: 2020-10-22T15:17:07.790Z        INFO        [publisher]        pipeline/retry.go:219        retryer: send unwait signal to consumer
Oct 22 15:17:07 choco-server filebeat[4073]: 2020-10-22T15:17:07.790Z        INFO        [publisher]        pipeline/retry.go:223          done
Oct 22 15:17:07 choco-server filebeat[4073]: 2020-10-22T15:17:07.791Z        INFO        [publisher_pipeline_output]        pipeline/output.go:151        Connection to backoff(async(tcp://10.
Oct 22 15:17:23 choco-server filebeat[4073]: 2020-10-22T15:17:23.778Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"met
Oct 22 15:17:53 choco-server filebeat[4073]: 2020-10-22T15:17:53.778Z        INFO        [monitoring]        log/log.go:145        Non-zero metrics in the last 30s        {"monitoring": {"met
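
Side note: the journal lines above are truncated mid-message by the systemctl status view; to read the full "failed to publish events: write tcp ..." errors, the complete entries can be pulled from the journal:

# Last 200 filebeat journal entries, untruncated
sudo journalctl -u filebeat.service --no-pager -n 200

# Or follow live while reproducing the failure
sudo journalctl -u filebeat.service -f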

Hi, did you resolve your problem? I've just hit the same issue and would appreciate hearing your solution. Thanks.

