I am new to ELK stack.
We are designing a system in which we will check the performance of our product via its logs on a weekly basis and draw some visualizations on a Kibana dashboard.
For this reason, we expect an index to be created per week so that we can analyze the results of past runs. This seems fairly straightforward.
However, the twist comes when I need to trigger a performance run of our product, ON DEMAND, which will generate its own logs that should then be loaded into a separate index. What if Filebeat and Logstash are not yet done processing the logs from the previous run?
I believe they will load the new data into the existing index.
To solve this, the approach I am considering is to spawn multiple instances of Logstash and Filebeat, each with its own configuration (different path.data, different index names, and so on). That way, each index would hold its own run's data.
Can someone please suggest a better approach to achieve this?
There are a couple of ways that you can go about this.
One way is to set up multiple inputs inside Logstash on different ports. Then run two Filebeats: the regular one, which ships to port 5044 all the time, and another, on-demand one that ships to port 5045. Then you just wire up the proper input for each pipeline in your pipelines.yml inside Logstash.
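A minimal sketch of that layout (the ports, hosts, and index names here are just placeholders, adjust to your setup):

- pipeline.id: weekly
  config.string: |
    input { beats { port => 5044 } }
    output { elasticsearch { hosts => ["localhost:9200"] index => "perf-weekly-%{+YYYY.ww}" } }
- pipeline.id: ondemand
  config.string: |
    input { beats { port => 5045 } }
    output { elasticsearch { hosts => ["localhost:9200"] index => "perf-ondemand-%{+YYYY.MM.dd}" } }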
The other approach would be a pipeline-to-pipeline setup. To do this you would create a special field to identify which log is being processed. For example, your on-demand filebeat.yml would look something like this:
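Something along these lines (the log path and Logstash host are placeholders for your environment):

filebeat.inputs:
  - type: log
    paths:
      - /path/to/ondemand/run/*.log    # placeholder path for the on-demand run's logs
    fields:
      ondemand: true
    fields_under_root: true    # puts the field at the event root so Logstash can test [ondemand]

output.logstash:
  hosts: ["your-logstash-host:5044"]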
This creates a field called ondemand that we can search for in the pipelines.yml file on the Logstash server. So your pipelines.yml would look something like this:
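Roughly like this (pipeline ids and config paths are examples; the virtual addresses just have to match what your conf files listen on):

- pipeline.id: beats-server
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [ondemand] {
        pipeline { send_to => ["ondemand_run"] }
      } else {
        pipeline { send_to => ["regular_run"] }
      }
    }
- pipeline.id: ondemand
  path.config: "/etc/logstash/conf.d/ondemand.conf"
- pipeline.id: regular
  path.config: "/etc/logstash/conf.d/regular.conf"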
Then all you have to do is set up the input in your Logstash conf files to look like this:
input {
  pipeline {
    address => "ondemand_run"
  }
}
So you would create a field on each of the Filebeat inputs that you can search on.
Then you would create conditional logic in your pipelines.yml to determine which config to use by creating a virtual address.
Then you set the virtual address in the input of the Logstash conf files. From there you can change the output to whatever index you want and send it to Elasticsearch.
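For example, the output section of the on-demand conf file might look like this (the index name is just an example):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "perf-ondemand-%{+YYYY.MM.dd}"
  }
}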