Looks like there is an issue here.
There is a workaround, but it is not pretty.
New validation was added that checks that the value of
aws.firehose.parameters.es_datastream_name
follows the data stream naming convention. If it does not (which yours most likely does not), the data will fall back to logs-awsfirehose-default.
Did you check to see if your data is in a data stream named logs-awsfirehose-default?
Data streams must follow the data stream naming convention, see here.
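A quick way to confirm is to look at the fallback data stream in Dev Tools (this assumes the fallback name logs-awsfirehose-default from above):

GET _data_stream/logs-awsfirehose-default

GET logs-awsfirehose-default/_search?size=1

If your documents show up there, the new validation is what rerouted them.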
You can add the following pipeline as a temporary workaround.
You have to hardcode the value of destination to the value you want.
For some reason destination requires a static value, so it cannot use mustache syntax, which would make this much cleaner.
So if your parameter is
"aws.firehose.parameters.es_datastream_name": "my-non-compliant-ds-name"
set destination to the hardcoded value
"destination": "my-non-compliant-ds-name",
Do not set
"destination": "{{aws.firehose.parameters.es_datastream_name}}",
Example (I tested this and it works):
PUT _ingest/pipeline/logs-awsfirehose@custom
{
"processors": [
{
"set": {
"field": "pipeline_custom_name",
"value": "logs-awsfirehose@custom",
"override": false
}
},
{
"reroute": {
"destination": "my-non-compliant-ds-name", <<< Where this is the hardcoded value in aws.firehose.parameters.es_datastream_name
"ignore_failure": true
}
}
]
}
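To sanity check the workaround (the names below assume the example values above), you can confirm the custom pipeline is installed and, once Firehose has delivered a batch, that documents are arriving in the hardcoded destination:

GET _ingest/pipeline/logs-awsfirehose@custom

GET my-non-compliant-ds-name/_search?size=1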
I have tested this and it works, and it should allow customers to continue to receive logs in the destination they need while they work on a migration plan.