ERR Failed to publish events: temporary bulk send failure

Filebeat version: 6.0.1
Elasticsearch version: 5.6.2
There are 40 Filebeat instances and 4 Elasticsearch nodes.

filebeat.yml:
filebeat.registry_file: /home/work/data/registry

filebeat.config.prospectors:
  enabled: true
  path: configs/*.yml
  reload.enabled: true
  reload.period: 30s

output.elasticsearch:
  hosts: ["IP1:9200","IP2:9200","IP3:9200","IP4:9200"]

Filebeat log:

    2017-12-12T15:45:03+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:03+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:03+08:00 INFO Connected to Elasticsearch version 5.6.2
    2017-12-12T15:45:04+08:00 INFO Connected to Elasticsearch version 5.6.2
    2017-12-12T15:45:04+08:00 INFO Template already exists and will not be overwritten.
    2017-12-12T15:45:04+08:00 INFO Template already exists and will not be overwritten.
    2017-12-12T15:45:08+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:08+08:00 INFO Connected to Elasticsearch version 5.6.2
    2017-12-12T15:45:08+08:00 INFO Template already exists and will not be overwritten.
    2017-12-12T15:45:09+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:09+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:09+08:00 INFO Connected to Elasticsearch version 5.6.2
    2017-12-12T15:45:09+08:00 INFO Template already exists and will not be overwritten.
    2017-12-12T15:45:09+08:00 ERR  Failed to publish events: temporary bulk send failure
    2017-12-12T15:45:09+08:00 INFO Connected to Elasticsearch version 5.6.2
    2017-12-12T15:45:09+08:00 INFO Template already exists and will not be overwritten.
    2017-12-12T15:45:12+08:00 INFO Connected to Elasticsearch version 5.6.2

There are no ERR lines when I reduce the number of Filebeat instances to 20.

I have modified filebeat.yml:

filebeat.registry_file: /home/work/data/registry
queue.mem:
  events: 1000000
  flush.timeout: 30s
filebeat.config.prospectors:
  enabled: true
  path: configs/*.yml
  reload.enabled: true
  reload.period: 30s

output.elasticsearch:
  hosts: ["IP1:9200","IP2:9200","IP3:9200","IP4:9200"]
  bulk_max_size: 100000

But it doesn't work.

Has anybody encountered this problem?

Why would you set it this large? A larger batch size does not necessarily mean better performance, but it will use more memory. Quoting the docs:

Specifying a larger batch size can improve performance by lowering the overhead of sending events. However big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

I wanted to reduce the number of connections by increasing the maximum number of events in a single Elasticsearch bulk API index request. It was just an attempt.

    ERR Failed to publish events: temporary bulk send failure
    Connected to Elasticsearch version 5.6.2

What causes this situation?

Thank you.

I do not know. Is there anything in the logs? Have you tried with a smaller bulk size, e.g. 1000 documents?
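
For example, a sketch based on your existing output section (1000 is just an illustrative starting value for the test, not a recommendation):

output.elasticsearch:
  hosts: ["IP1:9200","IP2:9200","IP3:9200","IP4:9200"]
  bulk_max_size: 1000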

Yes, I have tried with a smaller bulk size, and the other logs show no abnormalities.
Perhaps it is because the bulk queue is full and Elasticsearch dropped the connection.
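
If that is the case, the rejections should show up in the bulk thread pool stats on the Elasticsearch side. A quick way to check (a sketch, using one of the hosts from the config above):

    curl 'http://IP1:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'

A rejected count that keeps growing while Filebeat logs these errors would point to the bulk queue overflowing.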

How much data are you receiving each day? Might it be worthwhile reducing the number of primary shards to reduce the risk of bulk rejections?
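
The current primary shard counts can be checked with something like this (assuming the default filebeat-* index naming):

    curl 'http://IP1:9200/_cat/indices/filebeat-*?v&h=index,pri,rep,docs.count,store.size'

Each bulk request is split up per shard, so fewer primaries generally means fewer items queued on each node.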
