Ingest pipeline test: Error publishing events (retrying): temporary bulk send failure

I'm working on testing an ingest node pipeline for a log file. The first test ran great, but I had to make a change to the pipeline, and since then I cannot get any data in:

2018-06-18T16:49:47-04:00 INFO Metrics logging every 30s
2018-06-18T16:49:47-04:00 INFO Setup Beat: filebeat; Version: 5.5.0
2018-06-18T16:49:47-04:00 INFO Elasticsearch url: http://localhost:9200
2018-06-18T16:49:47-04:00 INFO Activated elasticsearch as output plugin.
2018-06-18T16:49:47-04:00 INFO Publisher name: NCCDTL03NB880U
2018-06-18T16:49:47-04:00 INFO Flush Interval set to: 1s
2018-06-18T16:49:47-04:00 INFO Max Bulk Size set to: 50
2018-06-18T16:49:47-04:00 INFO filebeat start running.
2018-06-18T16:49:47-04:00 INFO No registry file found under: C:\ELASTIC\filebeat-5.5.0\data\registry. Creating a new registry file.
2018-06-18T16:49:47-04:00 INFO Loading registrar data from C:\ELASTIC\filebeat-5.5.0\data\registry
2018-06-18T16:49:47-04:00 INFO States Loaded from registrar: 0
2018-06-18T16:49:47-04:00 INFO Loading Prospectors: 1
2018-06-18T16:49:47-04:00 INFO Prospector with previous states loaded: 0
2018-06-18T16:49:47-04:00 WARN DEPRECATED: document_type is deprecated. Use fields instead.
2018-06-18T16:49:47-04:00 INFO Starting prospector of type: log; id: 1856960716938610932
2018-06-18T16:49:47-04:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018-06-18T16:49:47-04:00 INFO Starting Registrar
2018-06-18T16:49:47-04:00 INFO Start sending events to output
2018-06-18T16:49:47-04:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018-06-18T16:49:47-04:00 INFO Harvester started for file: C:\Users\nb880v\Documents\inputs\FMn805a20180618
2018-06-18T16:49:52-04:00 INFO Connected to Elasticsearch version 5.5.0
2018-06-18T16:49:53-04:00 INFO Error publishing events (retrying): temporary bulk send failure
2018-06-18T16:49:54-04:00 INFO Connected to Elasticsearch version 5.5.0
2018-06-18T16:49:54-04:00 INFO Error publishing events (retrying): temporary bulk send failure
2018-06-18T16:49:56-04:00 INFO Connected to Elasticsearch version 5.5.0
2018-06-18T16:49:56-04:00 INFO Error publishing events (retrying): temporary bulk send failure
2018-06-18T16:50:00-04:00 INFO Connected to Elasticsearch version 5.5.0
2018-06-18T16:50:01-04:00 INFO Error publishing events (retrying): temporary bulk send failure
2018-06-18T16:50:09-04:00 INFO Connected to Elasticsearch version 5.5.0
2018-06-18T16:50:09-04:00 INFO Error publishing events (retrying): temporary bulk send failure
2018-06-18T16:50:17-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.es.call_count.PublishEvents=5 libbeat.es.publish.read_bytes=8954 libbeat.es.publish.write_bytes=222935 libbeat.es.published_but_not_acked_events=250 libbeat.publisher.published_events=1299 registrar.writes=1

The filebeat.yml:

filebeat.prospectors:

- input_type: log
  paths:
    - C:/Users/nb880v/Documents/inputs/FMn805a20180618
  exclude_files: ['.gz$', '.trace$', '.error$', '.stats$']
  exclude_lines: ['TESTNODE']
  tags: ["ovo"]
  document_type: ovo_805a_logs_pr

  multiline.pattern: '^-----'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500

#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "ovopl02"
  pipeline: "ovo-pipeline02"
  template.enabled: false

As the problems started when you changed your pipeline, could you share the pipeline you're using so we can check whether it is what's rejecting the events?
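For reference, you can pull the pipeline definition straight from Elasticsearch and dry-run it with the _simulate endpoint, without involving Filebeat at all. A minimal sketch, assuming a shell with curl available, the localhost:9200 host and the ovo-pipeline02 name from the config above (the sample message is just a placeholder, not one of your real log lines):

# Show the current pipeline definition
curl -XGET 'http://localhost:9200/_ingest/pipeline/ovo-pipeline02?pretty'

# Dry-run the pipeline against a placeholder document
curl -XPOST 'http://localhost:9200/_ingest/pipeline/ovo-pipeline02/_simulate?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
        "docs": [
          { "_source": { "message": "----- placeholder log line" } }
        ]
      }'

If one of the processors fails on a representative sample, the simulate response includes the error detail for that processor.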

Also, run Filebeat in debug mode (-d "*") so we can get more information about the publish error.
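For example, from the Filebeat directory shown in your log (C:\ELASTIC\filebeat-5.5.0), something along these lines; -e sends the log output to stderr, -d "*" enables all debug selectors, and -c points at the config file:

.\filebeat.exe -e -d "*" -c filebeat.yml

The extra output should show why Elasticsearch is rejecting the bulk requests.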
