I am new to the ELK stack.
We are designing a system in which we will check the performance of our product from its logs on a weekly basis and build visualizations on a Kibana dashboard.
For this reason, we expect one index to be created per week so that we can analyze the results of past runs. This part seems fairly straightforward.
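For the weekly part, my current plan is to let the Logstash Elasticsearch output derive the index name from the event timestamp (a sketch; the host and the `product-perf` prefix are placeholders, not our real values):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]        # placeholder host
    # xxxx.ww is the Joda-Time pattern for ISO week-year and week number,
    # so each week's events land in their own index automatically
    index => "product-perf-%{+xxxx.ww}"
  }
}
```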
However, the twist is that I also need to trigger a performance run of our product ON DEMAND. That run will generate its own logs, and I expect those to be loaded into a separate index. But what if Filebeat and Logstash are not yet done processing the previous run's logs?
I believe they will then load the new data into the existing index.
To solve this, the approach I am considering is to spawn multiple instances of Logstash and Filebeat, each with its own configuration (different path.data, different index names, and so on). That way, each run would end up in its own index with its own data.
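Concretely, I imagine the second (on-demand) pair being started like this, with separate data directories so the two instances do not clash over state files (the config filenames and paths below are placeholders; the on-demand Logstash pipeline would also set its own `index` name in its elasticsearch output):

```
# second Filebeat instance: its own config file and its own data directory
filebeat -c filebeat-ondemand.yml --path.data /var/lib/filebeat-ondemand

# second Logstash instance: its own pipeline config and its own data directory
bin/logstash -f logstash-ondemand.conf --path.data /var/lib/logstash-ondemand
```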
Can someone please suggest a better approach to achieve this?