I've got a machine with many Apache servers on it.
The Apache logs go into different directories:
/srv/host1.com/logs
/srv/host2.com/logs
etc
If I enable the Apache module it will use the default paths, which obviously won't find these files.
I can update var.paths for both the access and error logs to include each host's log files.
But I was contemplating adding tags, like host1, so these files are grouped together.
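Roughly what I had in mind (a sketch only; the paths are my real directories, but I haven't verified that tags can be set per fileset through the input override like this, and older versions name the module apache2):

```yaml
# modules.d/apache.yml -- one module block per vhost
- module: apache
  access:
    enabled: true
    var.paths: ["/srv/host1.com/logs/access.log*"]
    # unverified: override the underlying input to tag events from this host
    input:
      tags: ["host1"]
  error:
    enabled: true
    var.paths: ["/srv/host1.com/logs/error.log*"]
    input:
      tags: ["host1"]

- module: apache
  access:
    enabled: true
    var.paths: ["/srv/host2.com/logs/access.log*"]
    input:
      tags: ["host2"]
  error:
    enabled: true
    var.paths: ["/srv/host2.com/logs/error.log*"]
    input:
      tags: ["host2"]
```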
A second option could be to use add_process_metadata, but I have to admit I'm not entirely sure it will solve your issue; I haven't tried to use it that way.
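For reference, that processor goes under `processors` in filebeat.yml; this is the general shape, though as said I haven't tested whether it helps here, since it enriches events with metadata of a process matched by PID, which Apache log lines may not carry:

```yaml
processors:
  - add_process_metadata:
      match_pids: ["process.pid"]
      target: process
      ignore_missing: true
```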
A third option could be to use an Elasticsearch ingest node with a pipeline (https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html) defined to manipulate the incoming log lines and infer which process they come from. You don't need a dedicated ingest node, although it's recommended.
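As a sketch of that approach (the pipeline id and the `vhost` field name are just placeholders, and it assumes Filebeat ships the file path in `log.file.path`):

```
PUT _ingest/pipeline/apache-vhost
{
  "description": "Derive the vhost from the path of the log file",
  "processors": [
    {
      "dissect": {
        "field": "log.file.path",
        "pattern": "/srv/%{vhost}/logs/%{logfile}"
      }
    }
  ]
}
```

Then point Filebeat at it with `output.elasticsearch.pipeline: apache-vhost` in filebeat.yml.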
I've read all the docs, but I don't understand what you mean.
Are you suggesting I run one instance of Filebeat with all the configurations in it, or one Filebeat per Apache instance each with their own configuration files?
If I have multiple Filebeats then I won't be able to use simple commands like sudo service filebeat start; I'll need to create a service for each customization... or is there another way?
Can you show me what an example setup might look like?
If you want to index the data from each log file into different indices, you can create different yml config files and use them when starting Filebeat.
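For example (a sketch; the file name, index name, and template settings are placeholders to adapt, and depending on your version you may also need to adjust the ILM settings):

```yaml
# /etc/filebeat/filebeat-host1.yml
filebeat.inputs:
  - type: log
    paths:
      - /srv/host1.com/logs/access.log
      - /srv/host1.com/logs/error.log
    tags: ["host1"]

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "host1-apache-%{+yyyy.MM.dd}"

# a custom index also needs its own template name/pattern
setup.template.name: "host1-apache"
setup.template.pattern: "host1-apache-*"
```

Start each instance with its own config and its own data path so the registries don't collide, e.g. `filebeat -e -c /etc/filebeat/filebeat-host1.yml --path.data /var/lib/filebeat-host1`. If you go this way, a systemd template unit (something like filebeat@host1) can replace hand-written services; otherwise a single Filebeat with all the per-host blocks tagged in one config keeps plain `sudo service filebeat start` working.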