Filebeat 7.14.1 is stuck in a log loop with "failed to publish events: temporary bulk send failure".
With debug logging enabled, it shows:
filebeat | 2021-11-07T15:57:49.938Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
filebeat | 2021-11-07T15:57:49.941Z DEBUG [elasticsearch] elasticsearch/client.go:227 PublishEvents: 1 events have been published to elasticsearch in 3.1342ms.
filebeat | 2021-11-07T15:57:49.941Z DEBUG [elasticsearch] elasticsearch/client.go:411 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
filebeat | 2021-11-07T15:57:49.941Z INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
filebeat | 2021-11-07T15:57:49.941Z INFO [publisher] pipeline/retry.go:223 done
filebeat | 2021-11-07T15:57:51.776Z ERROR [publisher_pipeline_output] pipeline/output.go:180 failed to publish events: temporary bulk send failure
This error is so cryptic, with so little information, that I don't know how to dig deeper. What string caused the error, and what was being done with it?
I've searched the Elasticsearch logs for string_index_out_of_bounds_exception but didn't find anything.
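Is there a way to get the full stack trace? "String index out of range: 0" looks like Java indexing into an empty string, so a stack trace should name the component that blows up. I'm guessing something like this would raise the ingest logger verbosity (untested, and I'm not sure org.elasticsearch.ingest is the right package):

PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.ingest": "TRACE"
  }
}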
My filebeat.yml:
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "log-%{[docker][container][labels][app]}"
  pipeline: pipeline_logs
  ilm.enabled: false

logging.level: debug

queue.mem:
  events: 1024
  flush.min_events: 256
  flush.timeout: 1s

setup.ilm.enabled: false
setup.template.enabled: false
setup.kibana.host: http://kibana:5601

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      templates:
        - condition:
            contains:
              docker.container.image: swag
          config:
            - module: nginx
              access:
                enabled: true
                var.paths: [ "/config/log/nginx/access.log*" ]
              error:
                enabled: true
                var.paths: [ "/config/log/nginx/error.log*" ]
My pipeline_logs pipeline has an on_failure fallback processor, but the problem seems to be somewhere else:
{
  "description": "Log dispatcher ingest pipeline",
  "processors": [
    {
      "set": {
        "field": "app_name",
        "value": "{{docker.container.labels.app}}",
        "ignore_empty_value": true
      }
    },
    {
      "pipeline": {
        "if": "ctx.app_name == 'autoheal'",
        "name": "pipeline_autoheal"
      }
    },
    {
      "pipeline": {
        "if": "ctx.app_name == 'web'",
        "name": "pipeline_web"
      }
    },
    {
      "pipeline": {
        "if": "ctx.app_name == null",
        "name": "pipeline_fallback"
      }
    },
    {
      "uppercase": {
        "field": "log.level",
        "if": "ctx.log?.level != null"
      }
    },
    {
      "set": {
        "field": "level",
        "value": "{{log.level}}",
        "if": "ctx.log?.level != null"
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "description": "Record error information",
        "field": "error",
        "value": "Processor '{{ _ingest.on_failure_processor_type }}' with tag '{{ _ingest.on_failure_processor_tag }}' in pipeline '{{ _ingest.on_failure_pipeline }}' failed with message: {{ _ingest.on_failure_message }}"
      }
    }
  ]
}
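While writing this up I noticed that none of the processors have a tag, so the '{{ _ingest.on_failure_processor_tag }}' part of the on_failure message will always render empty. I suppose I could tag each processor to at least identify the failing one, e.g. (the tag value here is just one I made up):

{
  "uppercase": {
    "tag": "uppercase_log_level",
    "field": "log.level",
    "if": "ctx.log?.level != null"
  }
}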
I've done some pipeline tests in Kibana and the pipeline seems to work fine.
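Maybe my test documents just don't hit the bad input. Something like this verbose simulate, with an empty log.level and an empty app label (my guess at a problematic event, not one captured from Filebeat), might reproduce it:

POST _ingest/pipeline/pipeline_logs/_simulate?verbose=true
{
  "docs": [
    {
      "_source": {
        "message": "test",
        "log": { "level": "" },
        "docker": { "container": { "labels": { "app": "" } } }
      }
    }
  ]
}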