Filebeat ERR - failed to publish events EOF

I'm concerned about data loss, because I get the following error quite often:

2017-02-15T23:58:50Z ERR Failed to publish events caused by: EOF

Typically the files that I'm pushing to Logstash only get written to for 5 minutes, then are moved to S3 after 30 minutes to an hour. I'm not sure if it's a configuration issue, or just noise.

Any suggestions for my config would be much appreciated.

I'm running these as SysV services on AWS Linux:
Filebeat - 5.2.1
Logstash - 2.4.1 (Need the Kinesis Output, which is why I'm still on this version)

Filebeat Config:

filebeat:
  prospectors:
    -
      input_type: log
      paths:
        - /var/log/hadoop-yarn/containers/application_*/container_*/stdout
      encoding: plain
      document_type: yarn-container-stdout-logs
      close_removed: true
      close_inactive: 10m
#      clean_*: true
    -
      input_type: log
      paths:
        - /var/log/hadoop-yarn/containers/application_*/container_*/stderr
      encoding: plain
      document_type: yarn-container-stderr-logs
      close_removed: true
      close_inactive: 10m
#      clean_*: true
      multiline:
        pattern: '^[[:space:]]+at|^org\.|^Caused by:'
        negate: false
        match: after

output:
  logstash:
    hosts: ["${EMR_MSTR_IP_ADDR}:5044"]

Logstash Input Config:

beats {
    port => 5044
    include_codec_tag => false
}

Also, for the life of me I can't get either of them to pick up environment variables without just using sed.
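For context, here's roughly what I've been trying (a sketch; the /etc/sysconfig paths and LS_OPTS are assumptions on my part, so check what your init scripts actually source). My understanding is that Filebeat 5.x expands ${VAR} from its own process environment, and that Logstash 2.4 only substitutes ${VAR} in the pipeline config when started with the experimental --allow-env flag, so the variables have to be exported wherever the SysV scripts pick up their environment:

# /etc/sysconfig/filebeat -- assumed to be sourced by the Filebeat SysV init script; verify on your install
EMR_MSTR_IP_ADDR=10.0.0.10        # hypothetical value
export EMR_MSTR_IP_ADDR

# /etc/sysconfig/logstash -- same assumption; LS_OPTS is whatever your init script uses for extra flags
KINESIS_STREAM=my-kinesis-stream  # hypothetical value
export KINESIS_STREAM
LS_OPTS="--allow-env"             # Logstash 2.4 needs this for ${VAR} substitution in the config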

Thanks...rg

Which logstash-input-beats plugin version have you installed? EOF (end of file) happens if the connection is closed by the remote host (the Logstash host). Have you checked the Logstash logs? Upon EOF, Filebeat reconnects and continues sending, so as long as your logs still arrive in time, it's not too critical.
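If the EOFs line up with the idle stretches after a file stops being written, one thing that may be worth checking (assuming your logstash-input-beats version exposes it) is the input's client_inactivity_timeout, which closes idle client connections after 60 seconds by default. A minimal sketch:

beats {
    port => 5044
    include_codec_tag => false
    # assumption: available in logstash-input-beats 3.1.x; keeps quiet
    # connections open longer so they are not closed and reopened as often
    client_inactivity_timeout => 900
}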

After I posted, I actually updated logstash-input-beats from 3.1.8 to 3.1.12.

I don't see any errors in either Logstash or Filebeat now, but I'm losing a few records.

The only ways you can lose records are: writing logs much faster than Filebeat/Logstash can consume them and then deleting files that haven't been fully processed, or Logstash being killed while events already ACKed to Filebeat are still in its pipeline. Better to open another topic.

They're getting dropped in the Kinesis stream; at least it looks that way.

As a small sample, I have 150 records making it through Logstash to the local FS, but only 90 making their way to Kibana.

I'm considering dropping the rate limit, if that makes sense.

    kinesis {
        stream_name => "${KINESIS_STREAM:DEFAULT_KINESIS_STREAM}"
        region => "xx"
        randomized_partition_key => true
        aggregation_enabled => false
        max_pending_records => 10000
        rate_limit => 80
    }

    if ("application_master" in [tags]) and ([type] == "yarn-container-stdout-logs") {
        file {
            path => "/tmp/app-mstr-logs.log"
            flush_interval => 0
        }
    }

In case events are dropped, you should find out whether they are dropped between Filebeat and Logstash or between Logstash and Kinesis. If it is on the way to Kinesis, it's probably best to open a question in the Logstash forum.
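If it helps narrow that down, a rough sketch (using the standard metrics filter, not tuned to your pipeline) is to meter all events on the Logstash side and print the counters, then compare that number with what lands in Kinesis/Kibana:

filter {
  # count every event passing through the pipeline; the generated metric
  # events are tagged so they can be routed to stdout only
  metrics {
    meter => "events"
    add_tag => "metric"
  }
}
output {
  if "metric" in [tags] {
    stdout {
      # periodically prints the running total and 1-minute rate
      codec => line { format => "count: %{[events][count]} rate_1m: %{[events][rate_1m]}" }
    }
  }
}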
