Read one file, apply two filters, and send to two different indexes

I have a big log CSV file that receives logs from different pieces of equipment.
My Logstash reads the lines and sends them to a daily index.

I need to keep the last report from each equipment in a different index.
Too many of my visualizations are slow because they use a top-hits aggregation to get this value, and I think I should not need to do that.

My plan is:

import logs normally, as I do today, into logstash-*
create a second index, 'last_report', where the document id is equipment_id.

So every new line will actually be an update in the last_report index.

Is this possible with Logstash, or do I need to approach it in a different way?

That is quite a common way to make sure the latest state can be retrieved efficiently. It should be fine to do that with Logstash, at least as long as the documents are not updated very frequently.


Hi, thank you for your answer!

So, can I just create another pipeline to look at the same file at the same time?

You can have two elasticsearch outputs in the same pipeline.
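
For example, a minimal sketch (assuming a local Elasticsearch at localhost:9200 and the default daily logstash-* index pattern):

    output {
      # existing daily index; Elasticsearch auto-generates the document ids (default)
      elasticsearch {
        hosts => ["localhost:9200"]          # assumed host
        index => "logstash-%{+YYYY.MM.dd}"   # assumed daily index pattern
      }

      # a second copy of every event, written to the last_report index
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "last_report"
      }
    }

Every event that reaches the output stage is sent to both outputs.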

But my problem is that one of them uses a generated id (the default), while the other one must use equipment_id as the id, so that subsequent lines either update the old document or insert a new one.

How frequently do you expect a single document in the new index to be updated?

At least one update per hour. Sometimes we force a request to the equipment and an update can happen sooner, but that is not usual.

That is not very frequent, so it should not be a problem. I would recommend creating two elasticsearch output plugins: one that writes the documents with an auto-generated id into the existing index, and one that writes them using equipment_id as the document id into the new index. This will result in an insert the first time and an update every time after that.
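
Building on the sketch above, only the second output changes; document_id is an option of the elasticsearch output plugin, and equipment_id is assumed to be a field on your events:

    output {
      elasticsearch {
        hosts => ["localhost:9200"]   # assumed host, as before
        index => "last_report"
        # Use the equipment_id field as the document id. With the default
        # "index" action, the first event for an equipment inserts the
        # document and every later event for the same equipment overwrites it.
        document_id => "%{equipment_id}"
      }
    }

If you ever need partial updates instead of full overwrites, the same plugin also supports action => "update" together with doc_as_upsert => true.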


It works. Thank you very much!
