Failed to publish events: temporary bulk send failure

I get the following error:

ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure

when trying to add the following to packetbeat.yml:

processors:
  - include_fields.fields: ["http.request.body"]

The connection to Elasticsearch is OK:
2018-06-03T23:10:37.835+0530 DEBUG [elasticsearch] elasticsearch/client.go:689 Ping status code: 200

Could you share the full error you get, or is that it? Please include the lines before and after. Could you also share your full config file and the version of Packetbeat you use?

Root cause, as I found it: the error occurs when the following is added to the elasticsearch output in packetbeat.yml:

index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"

Environment:
Elasticsearch version: 6.2.4
Packetbeat version: 6.2.4

Full Log:

2018-06-04T00:37:40.893+0530    ERROR   pipeline/output.go:92   Failed to publish events: temporary bulk send failure
2018-06-04T00:37:40.893+0530    DEBUG   [elasticsearch] elasticsearch/client.go:666 ES Ping(url=http://localhost:9200)
2018-06-04T00:37:40.894+0530    DEBUG   [elasticsearch] elasticsearch/client.go:689 Ping status code: 200
2018-06-04T00:37:40.894+0530    INFO    elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
2018-06-04T00:37:40.894+0530    DEBUG   [elasticsearch] elasticsearch/client.go:708 HEAD http://localhost:9200/_template/packetbeat-6.2.4  <nil>
2018-06-04T00:37:40.895+0530    INFO    template/load.go:73 Template already exists and will not be overwritten.
2018-06-04T00:37:40.896+0530    DEBUG   [elasticsearch] elasticsearch/client.go:303 PublishEvents: 1 events have been  published to elasticsearch in 1.245631ms.
2018-06-04T00:37:40.896+0530    DEBUG   [elasticsearch] elasticsearch/client.go:507 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}

Config:

#============================== Network device ================================
# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: any

#========================== Transaction protocols =============================
packetbeat.protocols:
- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]
  send_all_headers: true
  send_response: true
  send_request: true
  include_body_for: ["application/json"]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports: [443]

#==================== Elasticsearch template setting ==========================
# Set to false to disable template loading.
setup.template.enabled: true
# Overwrite existing template
setup.template.overwrite: true

setup.template.name: "packetbeat-%{[beat.version]}"
setup.template.pattern: "packetbeat-%{[beat.version]}-*"

setup.template.fields: "fields.yml"

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ Processors ===================================

processors:
  #- include_fields.fields: ["http.request.body"]
  - include_fields.fields: ["http.request.body", "http.response.body", "direction", "http.request.headers.api_id", "http.request.headers.api_name",
    "http.request.headers.api_publisher", "http.request.headers.application_id", "http.request.headers.application_name", "http.request.headers.request_id",
    "http.request.headers.resource", "http.request.headers.user_id", "http.request.headers.version", "http.response.code", "http.response.phrase", "ip", "path",
    "responsetime", "status", "bytes_out", "bytes_in", "client_ip"]

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"


#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"

I think the problem is that your include_fields processor drops beat.version from the event, so the %{[beat.version]} part of the index name cannot be resolved and the resulting index name is invalid. Add beat.version to your include list and try again.
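A minimal sketch of the corrected processors block (abbreviated here; keep your full field list and just append beat.version — include_fields always keeps @timestamp and type, but everything else is dropped unless it is listed):

processors:
  - include_fields.fields: ["http.request.body", "http.response.body", "beat.version"]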

Hello,
I have a similar problem, but with Filebeat.

2018-06-12T11:04:48.747Z ERROR   pipeline/output.go:92   Failed to publish events: temporary bulk send failure
2018-06-12T11:04:48.749Z INFO    elasticsearch/client.go:690     Connected to Elasticsearch version 6.2.4
2018-06-12T11:04:48.751Z INFO    template/load.go:73     Template already exists and will not be overwritten.

The config:

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            or:
              - contains:
                  docker.container.image: tomcat
              - equals:
                  docker.container.labels.com.docker.compose.service: tomcat
              - contains:
                  docker.container.name: tomcat
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2}|(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) \d\d)'
              multiline.negate: true
              multiline.match: after
              fields:
                autodiscover: 'image-tomcat'
              pipeline: tomcat_level
        - condition:
            and:
              - not:
                  contains:
                    docker.container.image: tomcat
              - not:
                  equals:
                    docker.container.labels.com.docker.compose.service: tomcat
              - not:
                  contains:
                    docker.container.name: tomcat
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              fields:
                autodiscover: 'default'

fields:
  env: 'dev'

setup.kibana:
  host: "https://kibana.local:443"


output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 8
  index: dev-%{[beat.version]}-%{+yyyy.MM.dd}

setup.template.name: "dev"
setup.template.pattern: "dev-*"
setup.dashboards.index: "dev-*"

Any hints on what's wrong here?

Hi @setiseta
I am pretty sure this is due to using a custom index name. Try removing it:

index: dev-%{[beat.version]}-%{+yyyy.MM.dd}
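That is, a sketch of the output section with the custom index removed, so Filebeat falls back to its default index name (filebeat-%{[beat.version]}-%{+yyyy.MM.dd}):

output.elasticsearch:
  hosts: ["localhost:9200"]
  worker: 8
  # index: dev-%{[beat.version]}-%{+yyyy.MM.dd}   # commented out to use the default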

No change with the index name set back to the default.
But I think I've found it. I've now changed from:

DOCKER_OPTS="--log-opt max-size=100m --log-opt max-file=5"

to:

DOCKER_OPTS="--log-opt max-size=500m"

For now the error is gone.
It's a setting to protect the server from disk space problems due to Docker logging.
(The default is an unlimited log size; with max-file set, Docker rotates the files once they reach max-size.)
Maybe there is a bug or unexpected behavior in Filebeat when the logging options are set to rotate files.
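For reference, the setting that made the error go away (max-size without max-file) can also be applied globally via /etc/docker/daemon.json instead of DOCKER_OPTS, using the standard json-file log driver options:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m"
  }
}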

Maybe it only happens with autodiscover, which is experimental.

The error occurs again after some time.
Any more hints?
Is it the log file size, maybe?
