ERROR pipeline/output.go:121 Failed to publish events: temporary bulk send failure

I am getting this error whenever I try to ingest data into Elasticsearch using Filebeat.

2019-02-21T07:57:02.190Z        INFO    elasticsearch/client.go:721     Connected to Elasticsearch version 6.5.4
2019-02-21T07:57:02.193Z        INFO    template/load.go:130    Template already exists and will not be overwritten.
2019-02-21T07:57:02.193Z        INFO    instance/beat.go:894    Template successfully loaded.
2019-02-21T07:57:02.193Z        INFO    pipeline/output.go:105  Connection to backoff(elasticsearch(http://localhost:9200)) established
2019-02-21T07:57:04.513Z        ERROR   pipeline/output.go:121  Failed to publish events: temporary bulk send failure
2019-02-21T07:57:04.513Z        INFO    pipeline/output.go:95   Connecting to backoff(elasticsearch(http://localhost:9200))
2019-02-21T07:57:04.515Z        INFO    elasticsearch/client.go:721     Connected to Elasticsearch version 6.5.4
2019-02-21T07:57:04.519Z        INFO    template/load.go:130    Template already exists and will not be overwritten.

This is the pipeline.json used to define the pipeline:

{
  "description": "Pipeline for ingest node",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{IP:source_ip} %{GREEDYDATA} \\[%{HTTPDATE:request_date}\\] \\\"%{WORD:http_method} %{URIPROTO:http_proto}://%{URIHOST:uri_host}%{URIPATH:uri_path}%{GREEDYDATA:uri_query} http/%{NUMBER:http_version}\\\" %{NUMBER:response_code} %{NUMBER:bytes_sent:int} %{NUMBER:origin_response_code} %{NUMBER:origin_bytes_sent} %{NUMBER:client_req_content_length} %{NUMBER:proxy_req_length} %{NUMBER:client_req_header_length} %{NUMBER:proxy_resp_header_length} %{NUMBER:proxy_req_header_length} %{NUMBER:origin_header_resp_length} %{NUMBER:time_to_serve:} %{NUMBER:origin_time_to_serve:} %{WORD:proxy_hierarchy_route} %{WORD:finish_status_client} %{WORD:finish_status_origin} %{WORD:cache_result_code} \\\"%{GREEDYDATA:user_agent}\\\" %{GREEDYDATA:x_play_back_session_id}",
          "%{IP:source_ip} %{GREEDYDATA} \\[%{HTTPDATE:request_date}\\] \\\"%{WORD:http_method} %{URIPROTO:http_proto}://%{URIHOST:uri_host}%{URIPATH:uri_path}%{GREEDYDATA:uri_query} http/%{NUMBER:http_version}\\\" %{NUMBER:response_code} %{NUMBER:bytes_sent:int} %{NUMBER:origin_response_code} %{NUMBER:origin_bytes_sent:int} %{NUMBER:client_req_content_length} %{NUMBER:proxy_req_length} %{NUMBER:client_req_header_length} %{NUMBER:proxy_resp_header_length} %{NUMBER:proxy_req_header_length} %{NUMBER:origin_header_resp_length} %{NUMBER:time_to_serve:} %{NUMBER:origin_time_to_serve:} %{WORD:proxy_hierarchy_route} %{WORD:finish_status_client} %{WORD:finish_status_origin} %{WORD:cache_result_code} %{GREEDYDATA:user_agent}"
        ],
        "on_failure": [
                    {
                        "grok": {
                            "field": "message",
                            "patterns": ["%{IP:source_ip} %{GREEDYDATA} \\[%{HTTPDATE:request_date}\\] \\\"%{WORD:http_method} %{URIPROTO:http_proto}://%{URIHOST:uri_host}%{URIPATH:uri_path}%{GREEDYDATA:uri_query} http/%{NUMBER:http_version}\\\" %{NUMBER:response_code} %{NUMBER:bytes_sent} %{NUMBER:origin_response_code} %{NUMBER:origin_bytes_sent} %{NUMBER:client_req_content_length} %{NUMBER:proxy_req_length} %{NUMBER:client_req_header_length} %{NUMBER:proxy_resp_header_length} %{NUMBER:proxy_req_header_length} %{NUMBER:origin_header_resp_length} %{NUMBER:time_to_serve:} %{NUMBER:origin_time_to_serve:} %{WORD:proxy_hierarchy_route} %{WORD:finish_status_client} %{WORD:finish_status_origin} %{WORD:cache_result_code} %{GREEDYDATA:user_agent}"]
                        }
                    }
                ]
      }
    },
    {
      "convert": {
        "field": "bytes_sent",
        "type": "integer"
      }
    },
    {
      "dissect": {
        "field": "uri_path",
        "if": "(ctx.uri_path.contains(\"hls5\") && ctx.uri_path.contains(\"live\") && (ctx.uri_path.contains(\"m3u8\") || ctx.uri_path.contains(\"ts\"))) || (ctx.uri_path.contains(\"dash\") && ctx.uri_path.contains(\"live\") && ctx.uri_path.contains(\"m4s\"))",
        "pattern": "/%{a}/%{protocol}/%{stream_type}/%{backend_channel_id}/%{e}/%{variant}/%{g}.%{h}"
      }
    },
    {
      "remove": {
        "field": [
          "a",
          "e",
          "g",
          "h"
        ]
      }
    }
  ]
}

This is the curl command that loads pipeline.json via the PUT pipeline API:

curl -H 'Content-Type: application/json' -X PUT 'localhost:9200/_ingest/pipeline/test-pipeline' -d@pipeline.json
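
A quick way to confirm the pipeline was stored under the exact name Filebeat will reference is to fetch it back (host and pipeline name are the ones used above):

curl 'localhost:9200/_ingest/pipeline/test-pipeline?pretty'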

This is the filebeat.yml file:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/dump/log/*
  exclude_lines:
    - thumbnail
    - pictures
    - health
    - stats
    - alerts
    - url_template
    - resource
    - config
  include_lines:
    - live
    - vod
    - data
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "test-pipeline-%{+yyyy.MM.dd}"
  pipeline: "test-pipeline"
setup.template.name: "test-pipeline"
setup.template.pattern: "test-pipeline*"
setup:
  kibana:
    host: "localhost:5601"
  dashboards:
    index: "test-pipeline*"
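
Independently of the bulk error, Filebeat's built-in test subcommands can confirm that this configuration parses and that the Elasticsearch output is reachable; a minimal sketch (the config path shown is the default install location and may differ on your system):

# check the YAML and the output connection
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml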

You have named your pipeline ats-pipeline but configured test-pipeline in the Filebeat config.

My mistake, that was a typo. I have edited the topic to the correct value. The bulk send failure error is still being generated.

Any suggestions?

I do not see anything else obviously wrong. Can you increase the Filebeat logging level? Is there anything in the Elasticsearch logs? Have you tested your pipeline for different types of records using the simulate API?
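
For reference, a single raw line can be run through the stored pipeline with the simulate API; a minimal sketch (the message value is a placeholder to be replaced with an actual log line):

curl -H 'Content-Type: application/json' -X POST 'localhost:9200/_ingest/pipeline/test-pipeline/_simulate?pretty' -d '{"docs":[{"_source":{"message":"<one raw log line here>"}}]}'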

Using the simulate API, I have tested both correct and erroneous log lines that are supposed to be ingested, and they give the correct results.

Elasticsearch only shows this:

[2019-02-22T09:21:49,518][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] [inkba01p1] Deprecated field [template] used, replaced by [index_patterns]
[2019-02-22T09:21:50,533][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] [inkba01p1] Deprecated field [template] used, replaced by [index_patterns]
[2019-02-22T09:21:51,545][WARN ][o.e.d.a.a.i.t.p.PutIndexTemplateRequest] [inkba01p1] Deprecated field [template] used, replaced by [index_patterns]

I haven't set logging.level yet. I will check.
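
In filebeat.yml the relevant settings would look roughly like this (the selector names are the commonly used ones for the publishing and output code paths and are optional):

logging.level: debug
# optionally narrow the debug output to the publishing/output side:
# logging.selectors: ["publish", "elasticsearch"]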

It seems that the template you are trying to apply is not in the correct format. Was this by any chance created for an older version of Elasticsearch?

This is how I have set the templates:

setup.template.name: "test-pipeline"
setup.template.pattern: "test-pipeline*"

I am talking about the format of the template itself, not how you specify it in the Filebeat config.
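
Concretely, the deprecation warning points at the top-level key inside the template body: pre-6.0 templates used "template", which 6.x replaces with "index_patterns". A minimal illustration (the pattern and settings values are placeholders):

Old format, which triggers the warning:

{
  "template": "test-pipeline*",
  "settings": { "number_of_shards": 1 }
}

6.x format:

{
  "index_patterns": ["test-pipeline*"],
  "settings": { "number_of_shards": 1 }
}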

Yes, I saw it. The template was created for an older version of Elasticsearch. I changed it to the latest format, but still nothing.

Elasticsearch logs show:

[2019-02-22T15:58:31,384][INFO ][o.e.c.m.MetaDataIndexTemplateService] [SHARADA-LAPTOP] adding template [kibana_index_template:.kibana] for index patterns [.kibana]

Do you have any non-default settings in your Elasticsearch configuration?

Nope. All are the default ones.

Did you get any additional information from increasing the log level to DEBUG?

Nope. This is all I am receiving:

2019-02-22T17:51:18.121+0530    DEBUG   [input] log/input.go:174        Start next scan
2019-02-22T17:51:18.124+0530    DEBUG   [input] log/input.go:404        Check file for harvesting: /var/dump/log/ats1.log
2019-02-22T17:51:18.125+0530    DEBUG   [input] log/input.go:494        Update existing file for harvesting: /var/dump/log/ats1.log, offset: 439830
2019-02-22T17:51:18.125+0530    DEBUG   [input] log/input.go:546        Harvester for file is still running: /var/dump/log/ats1.log
2019-02-22T17:51:18.125+0530    DEBUG   [input] log/input.go:195        input states cleaned up. Before: 1, After: 1, Pending: 0
2019-02-22T17:51:23.087+0530    DEBUG   [harvester]     log/log.go:102  End of file reached: /var/dump/log/ats1.log; Backoff now.
2019-02-22T17:51:28.127+0530    DEBUG   [input] input/input.go:152      Run input
2019-02-22T17:51:28.127+0530    DEBUG   [input] log/input.go:174        Start next scan
2019-02-22T17:51:28.131+0530    DEBUG   [input] log/input.go:404        Check file for harvesting: /var/dump/log/ats1.log
2019-02-22T17:51:28.131+0530    DEBUG   [input] log/input.go:494        Update existing file for harvesting: /var/dump/log/ats1.log, offset: 439830
2019-02-22T17:51:28.132+0530    DEBUG   [input] log/input.go:546        Harvester for file is still running: /var/dump/log/ats1.log
2019-02-22T17:51:28.133+0530    DEBUG   [input] log/input.go:195        input states cleaned up. Before: 1, After: 1, Pending: 0
