Is it possible to let elasticsearch choose the index to ingest

We want to distribute/split our data across different indices because, depending on the type of data, we will have different retention times.

In my last project I used Logstash to parse and enrich log events, and computed in Logstash which index each document should be stored in:

    index => "%{[@metadata][indexName]}"
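(For context, a sketch of how that metadata field could be populated using standard Logstash conditionals and the `mutate` filter; the field names and values here are illustrative:)

    filter {
      if [logName] in ["httpd", "session", "xyz"] {
        mutate { add_field => { "[@metadata][indexName]" => "index_a" } }
      } else if [logName] in ["abc", "efg"] {
        mutate { add_field => { "[@metadata][indexName]" => "index_b" } }
      } else {
        mutate { add_field => { "[@metadata][indexName]" => "index_c" } }
      }
    }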

I am not familiar with the ingest pipelines offered by Elasticsearch.
Is it possible to implement logic like this in an ingest pipeline?

Assume logName is a field which is already present in the input documents.

    if (logName in (httpd, session, xyz))
      store in index_a
    else if (logName in (abc, efg))
      store in index_b
    else
      store in index_c

Indices a, b, and c will have different ILM rules.

Is it possible to do this in elasticsearch? How?
I'd prefer a centralized approach, with the configuration in one central place. But apart from my preference, what is best practice if you have multiple Logstash instances?

PS: We are only using the free version, no paid subscription yet.

Thanks, Andreas

Sure is. It's done with the Pipeline processor | Elasticsearch Guide [8.3] | Elastic and having one processor per index.
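For example, a top-level routing pipeline could look something like this (a sketch; the pipeline names and field values are illustrative, while the `pipeline` processor and its Painless `if` condition are standard ingest-pipeline features):

    PUT _ingest/pipeline/logs-router
    {
      "processors": [
        {
          "pipeline": {
            "if": "ctx.logName != null && ['httpd', 'session', 'xyz'].contains(ctx.logName)",
            "name": "route-to-index-a"
          }
        },
        {
          "pipeline": {
            "if": "ctx.logName != null && ['abc', 'efg'].contains(ctx.logName)",
            "name": "route-to-index-b"
          }
        }
      ]
    }

Each referenced sub-pipeline then takes care of writing to its own index.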

How do I set the index / data stream to write to in a pipeline rule? Could you please provide an example?

Ingest pipelines | Elasticsearch Guide [8.4] | Elastic is the best reference; it's similar to what you have above.
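To make the routing concrete: an ingest pipeline can overwrite the `_index` metadata field with a `set` processor, which changes the index the document is written to. A minimal sketch (the index and pipeline names here are illustrative):

    PUT _ingest/pipeline/route-to-index-a
    {
      "processors": [
        {
          "set": {
            "field": "_index",
            "value": "index_a"
          }
        }
      ]
    }

You can attach the routing pipeline centrally via the `index.default_pipeline` index setting, or pass it per request with the `pipeline` query parameter, so the logic lives in one place in Elasticsearch rather than in each Logstash instance.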
