How can I replicate the Logstash translate filter in an ingest pipeline?

I have a field containing error codes, mapped as keyword, that I use in some plots. The raw codes are not friendly for people to look at, so I want to translate each error code into a description (also mapped as keyword), and it would be ideal to do this translation at index time with an ingest pipeline.
I can't use Logstash, so I am looking for an alternative way to translate these field values.
Right now I am doing this with a scripted field directly in Kibana.

This is the same concept as your scripted field, except that if you run that Painless script in an ingest pipeline, the translation happens only once, when the data comes in. Scripted fields are recalculated every time the record is accessed.

  1. Create an ingest pipeline. I recommend researching the script processor first.
  2. Test the ingest pipeline using the Simulate Pipeline API.
  3. Configure your ingestion path to run the pipeline. Logstash and Beats both have a setting where you identify which pipeline to run.
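The steps above could look something like this. A minimal sketch, assuming a source field named `error_code`, a target field named `error_description`, and made-up code values; adjust the map to your actual codes:

```
PUT _ingest/pipeline/translate_error_codes
{
  "description": "Translate error codes into human-readable descriptions",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": """
          // Hypothetical code-to-description lookup table
          Map codes = ['E100': 'Connection timeout', 'E200': 'Invalid credentials'];
          if (ctx.containsKey('error_code') && codes.containsKey(ctx.error_code)) {
            ctx.error_description = codes[ctx.error_code];
          }
        """
      }
    }
  ]
}
```

You can then test it with the Simulate Pipeline API before wiring it into your ingest path:

```
POST _ingest/pipeline/translate_error_codes/_simulate
{
  "docs": [
    { "_source": { "error_code": "E100" } }
  ]
}
```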

I tried the Painless script and hit a fatal exception a few days ago that scared me a lot; my cluster took a few hours to recover and reassign the shards. The exception seems to be a recent problem, and I replied to this issue on GitHub: https://github.com/elastic/elasticsearch/issues/66175

Now that I know I should not use a return statement, I will try again soon. Thank you for the answer!
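For reference, a Painless script in an ingest pipeline modifies the document by assigning to `ctx` rather than returning a value. A minimal sketch without any return statement (field names and the code value are hypothetical):

```
"script": {
  "lang": "painless",
  "source": "ctx.error_description = ctx.error_code == 'E100' ? 'Connection timeout' : 'Unknown error'"
}
```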


Why not just add the description using the enrich processor? It is pretty much made exactly for this use case.
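A sketch of the enrich approach, assuming a hypothetical lookup index `error_codes` with `code` and `description` fields, and an incoming `error_code` field on your documents: store the translations in the index, define and execute an enrich policy over it, then reference the policy from an enrich processor.

```
PUT error_codes/_doc/1
{ "code": "E100", "description": "Connection timeout" }

PUT _enrich/policy/error-code-policy
{
  "match": {
    "indices": "error_codes",
    "match_field": "code",
    "enrich_fields": ["description"]
  }
}

POST _enrich/policy/error-code-policy/_execute

PUT _ingest/pipeline/enrich_error_codes
{
  "processors": [
    {
      "enrich": {
        "policy_name": "error-code-policy",
        "field": "error_code",
        "target_field": "error_info"
      }
    }
  ]
}
```

One thing to keep in mind with this design: the enrich index is a snapshot, so after changing the lookup documents you need to re-run `_execute` on the policy for the pipeline to pick up the new values.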


This is a very nice way of dealing with my problem. I will try it tomorrow and report back. Thank you all so much for helping me with two distinct solutions.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.