I have to build a centralized log monitoring system. I need to read logs from multiple locations and parse each of them differently in Logstash. All of these logs need to be sent to a single index at the Logstash output so they can be viewed in Kibana.
I tried using different YAML files, but when I execute the second one it gives an error saying that Filebeat in C:\Program Files\Filebeat\logs is used by some other process.
Hi @pradeep_gadkari, welcome to Discuss!
Glad to see that you are using Filebeat as the foundation of your monitoring system.
To run multiple instances of the same Beat on the same machine, you need to ensure that they use different data directories. The data directory is configured with the --path.data flag, or with the path.data option in the main configuration file. You can read more about path configuration here: Configure project paths | Filebeat Reference [7.12] | Elastic
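As a sketch, the second instance's configuration file could set its own path.data so the two instances don't lock the same directory (the file name, data path, and log path below are just example values, not anything from your setup):

```yaml
# filebeat-app2.yml -- configuration for the second Filebeat instance.
# path.data must differ from the first instance's data directory.
path.data: C:\ProgramData\filebeat-app2\data

filebeat.inputs:
  - type: log
    paths:
      - C:\logs\app2\*.log   # example location read by this instance
```

Alternatively, keep one configuration file per instance and pass `--path.data` on the command line when starting each one.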
I also have a couple of recommendations about your scenario.
If you are using Logstash only for parsing, consider using Filebeat processors and/or Elasticsearch ingest pipelines instead. This will simplify your infrastructure and your index management.
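For instance, simple line-oriented parsing that you might otherwise do in Logstash can often be done with the dissect processor in Filebeat itself. A minimal sketch (the log format, field names, and paths here are hypothetical examples, not your actual logs):

```yaml
# Parse "<timestamp> <level> <message>" lines directly in Filebeat,
# without sending them through Logstash for parsing.
filebeat.inputs:
  - type: log
    paths:
      - C:\logs\app1\*.log   # example input location

processors:
  - dissect:
      tokenizer: "%{timestamp} %{level} %{msg}"
      field: "message"
      target_prefix: "app1"   # parsed fields land under app1.*
```

For more complex parsing (grok-style patterns, conditionals), an Elasticsearch ingest pipeline is the usual next step up while still keeping Logstash out of the picture.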
Regarding indexes, if Filebeat is configured to use Elasticsearch directly as its output, it will create its own indexes, which are usually fine for most logging use cases. Indexes created by Filebeat include the version and a timestamp in their names; this makes it possible to use different indexes for different versions of Filebeat, and to create new indexes when existing ones grow past certain size limits.
Even if you decide not to use these default indexes, still consider a similar indexing strategy for your custom indexes.
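If you do go with a custom index, you can still keep the version-and-date convention in the name. A sketch of what that could look like (the index name "applogs" and the host are assumptions for illustration; a custom index also requires matching setup.template settings, as shown):

```yaml
# Custom index that keeps Filebeat's version + date naming convention.
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "applogs-%{[agent.version]}-%{+yyyy.MM.dd}"

# When overriding the index name, the template name and pattern
# must be set to match, or Filebeat will refuse to start.
setup.template.name: "applogs"
setup.template.pattern: "applogs-*"
```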
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.