Many Apache servers on one machine?

I've got a machine with many Apache servers on it.

The Apache logs go into different directories:

  • /srv/host1.com/logs
  • /srv/host2.com/logs
  • etc

If I enable modules, Filebeat will use the default paths, which obviously won't find these files.

I can update the var.paths for both access and error logs to include each host's log files.
But I was contemplating adding tags, like host1, so these files are grouped together.
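For what it's worth, what I was picturing is roughly this in modules.d/apache2.yml (the log file names are my guesses, and I'm not sure where a tag would even go):

```yaml
- module: apache2
  access:
    enabled: true
    var.paths: ["/srv/host1.com/logs/access.log*"]
  error:
    enabled: true
    var.paths: ["/srv/host1.com/logs/error.log*"]
```

repeated somehow for host2.com, with a host1/host2 tag so I can group the events.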

The documentation at https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-modules.html says using modules is optional and that I can configure everything manually. Yet there is no documentation telling me what the modules actually do, so that I can replicate it manually.

I can look into the source of the Apache module at https://github.com/elastic/beats/tree/master/filebeat/module/apache but I can't work out how to re-use ingest/default.json from a manual configuration.

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#_literal_pipeline_literal says I need the pipeline ID, but there are no pipelines defined in my Kibana console. Perhaps that's because I haven't yet enabled Apache logging...

What is the correct way to set up Filebeat to monitor multiple Apache servers?

Hi @baerrach :slight_smile:

Your use case is a bit special; the easiest setup is to run a single Filebeat with each Apache instance (activating the Apache module) and add your own fields in each config file: https://www.elastic.co/guide/en/beats/filebeat/master/add-fields.html
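Roughly, each Filebeat config would carry something like this (the field name and value here are only examples):

```yaml
processors:
  - add_fields:
      target: ''
      fields:
        apache_host: host1.com
```

so every event that instance ships is labelled with the Apache host it came from, and you can filter or group on that field in Kibana.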

A second option could be to use the add_process_metadata processor, but I have to admit I'm not fully sure it will solve your issue; I haven't tried using it that way.

A third option could be to use an Elasticsearch ingest node with a pipeline https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html defined to manipulate the incoming log lines and infer which host they come from. You don't need a dedicated ingest node, although it's recommended.
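A minimal sketch of that idea (the pipeline name and grok pattern are only illustrative, and depending on your version the file path lives in `source` or `log.file.path`):

```
PUT _ingest/pipeline/apache-by-host
{
  "description": "derive the site name from the log file path",
  "processors": [
    {
      "grok": {
        "field": "source",
        "patterns": ["/srv/%{DATA:site}/logs/"]
      }
    }
  ]
}
```

You would then point your Elasticsearch output at that pipeline so every event gets a `site` field.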

@Mario_Castro

Thanks for the reply.

I'm on day 10 of learning the Elastic Stack.

I've read all the docs, but I don't understand what you mean.

Are you suggesting I run one instance of Filebeat with all the configurations in it, or one Filebeat per Apache instance each with their own configuration files?

If I have multiple Filebeats then I won't be able to use simple commands like sudo service filebeat start; I'll need to create a service for each customization... or is there another way?

Can you show me what an example setup might look like?

Cheers

From my understanding, you can achieve your use case with the steps below.

  1. Enable the Apache module in Filebeat:
    filebeat modules enable apache2

  2. Write the Filebeat config to index the data into Elasticsearch, e.g. filebeat_apache_config.yml:

    output.elasticsearch:
      hosts: ["localhost:9200"]
      index: "fb-apache-%{+YYYY.MM.dd}"

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml

    setup.template:
      name: "fb-apache"
      pattern: "fb-apache-*"

  3. Now you can start Filebeat using the command below:

filebeat -e -M "apache2.access.var.paths=[/srv/host1.com/logs]" -c filebeat_apache_config.yml

  4. Now you can start a second Filebeat instance by changing the path of the log file:

filebeat -e -M "apache2.access.var.paths=[/srv/host2.com/logs]" -c filebeat_apache_config.yml

  5. If you want to index the data from each log file into different indices, create a separate YAML config file per index and pass the appropriate one when starting Filebeat.
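For example (the file names, data directories, and log globs below are only illustrations), you could copy filebeat_apache_config.yml into one file per host, change the index and setup.template settings in each, and start the instances like this:

```shell
# One Filebeat per host. Each instance needs its own data directory
# so their registries don't clash -- hence --path.data.
filebeat -e --path.data /var/lib/filebeat-host1 \
  -M "apache2.access.var.paths=[/srv/host1.com/logs/access.log*]" \
  -c filebeat_apache_host1.yml

filebeat -e --path.data /var/lib/filebeat-host2 \
  -M "apache2.access.var.paths=[/srv/host2.com/logs/access.log*]" \
  -c filebeat_apache_host2.yml
```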

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.