The custom Filebeat index that has worked for over a year seems to have stopped working since we moved to Elasticsearch v8.
```yaml
filebeat.inputs:
  - type: log
    paths:
      - LogPathGoeshere:Expired.csv
    exclude_lines: ['^"?samaccountname"?,"?PasswordLastSet"?,"?DaysUntilExpired"?']

output.elasticsearch:
  hosts: ["https://servername:9200"]
  username: "changeme"
  password: "changeme"
  index: "expiringpasswords-%{[beat.version]}-%{+yyyy.MM.dd}"
  pipeline: "expiringpasswords-pipeline"

setup.template.enabled: false
setup.ilm.enabled: false
setup.template.name: "expiringpasswords-%{[beat.version]}-%{+yyyy.MM.dd}"
setup.template.pattern: "expiringpasswords-%{[beat.version]}-%{+yyyy.MM.dd}"
```
The Filebeat debug log shows bulk inserts failing:

```json
{"log.level":"debug","@timestamp":"2022-04-19T10:04:34.455-0700","log.logger":"Elasticsearch","log.origin":{"file.name":"Elasticsearch/client.go","file.line":434},"message":"Bulk item insert failed (i=10, status=500): {\"type\":\"string_index_out_of_bounds_exception\",\"reason\":\"String index out of range: 0\"}","service.name":"filebeat","ecs.version":"1.6.0"}
```
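One way I've tried to narrow down whether the 500 comes from the ingest pipeline (rather than the index name) is to replay a sample line through the simulate API. This is only a sketch: the sample CSV line is made up to match my header row, and the host/credentials are the placeholders from the config above.

```shell
# Replay one hypothetical CSV line through the ingest pipeline to see
# whether the pipeline itself is what returns the 500.
curl -u changeme:changeme -k -X POST \
  "https://servername:9200/_ingest/pipeline/expiringpasswords-pipeline/_simulate?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "docs": [
    { "_source": { "message": "jdoe,2022-04-18 08:00:00,14" } }
  ]
}'
```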
The problem is that the index is no longer being created with the Beat version or date. It worked fine for the past year. Now the only way Filebeat will run is if I remove `-%{[beat.version]}-%{+yyyy.MM.dd}` from the index name, but then ILM doesn't work as expected and I end up with duplicate data for a few hours. Not really ideal, as it makes the table messy and hard to read.
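For what it's worth, here is a minimal Python sketch of how I understand the `%{[field]}` token in the index name to behave: if the event no longer carries the referenced field, that segment resolves to empty and the name collapses. The event shapes and the exact substitution behavior here are my assumptions, not anything confirmed.

```python
def resolve(event, path):
    """Walk a dotted field path like 'beat.version' through a nested event."""
    cur = event
    for key in path.split("."):
        if not isinstance(cur, dict) or key not in cur:
            return ""  # assumption: a missing field yields an empty segment
        cur = cur[key]
    return str(cur)

# Hypothetical event shapes for illustration only.
old_event = {"beat": {"version": "7.17.0"}}
new_event = {"agent": {"version": "8.1.2"}}  # no beat.* fields

print("expiringpasswords-%s-2022.04.19" % resolve(old_event, "beat.version"))
# expiringpasswords-7.17.0-2022.04.19
print("expiringpasswords-%s-2022.04.19" % resolve(new_event, "beat.version"))
# expiringpasswords--2022.04.19  <- empty segment
```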
The initial index and pipeline were set up using the ML import, if that helps at all.
Any ideas?