Team, I have a requirement where I need to send the audit, auth, and syslog logs from our servers directly to Elasticsearch, and the application's logs to Logstash. Here are the changes and steps I performed. We are using Ubuntu 16 in our setup.
- After installing Filebeat, I replicated the /etc/filebeat folder as /etc/filebeat1.
- Then I made the necessary changes in filebeat.yml, as below.
filebeat.config.prospectors:
  enabled: true
  path: /etc/filebeat/conf.d/*.yml
  reload.enabled: true
  reload.period: 5s

setup.template.name: '{CLOUD_APP}'
setup.template.pattern: '{CLOUD_APP}-*'
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1

output.elasticsearch:
  hosts: ['https://es_ip:19200']
  username: 'abc'
  password: 'yoyo'
  indices:
    - index: 'auth-multi-%{+YYYY.MM.dd}'
      when.contains:
        source: '/var/log/auth.log'
    - index: 'audit-multi-%{+YYYY.MM.dd}'
      when.contains:
        source: '/var/log/audit/audit.log'
    - index: 'syslog-multi-%{+YYYY.MM.dd}'
      when.contains:
        source: '/var/log/syslog'
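Since filebeat.config.prospectors only points at /etc/filebeat/conf.d/*.yml, the actual prospector definitions sit in that directory. For completeness, a minimal sketch of such a file is below (the file name is just an example, and on older 5.x releases the key is input_type instead of type):

/etc/filebeat/conf.d/system.yml:

- type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
    - /var/log/audit/audit.log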
The other configuration, under /etc/filebeat1, is as below.
filebeat.config.prospectors:
  enabled: true
  path: /etc/filebeat1/conf.d/*.yml
  reload.enabled: true
  reload.period: 5s

setup.template.name: 'elk'
setup.template.pattern: 'elk-*'
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1

output.logstash:
  when:
    contains:
      source: '/home/elk/elasticsearch_logs/Hotdata-node/logs/Demo.log'
  hosts: ["xyz:5044"]
- I also configured two services, filebeat.service and filebeat1.service, and was able to successfully push the logs to Elasticsearch. A sketch of the second unit is shown below.
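For reference, the second unit can be a copy of the stock filebeat.service with the config, data, and log paths switched to the replicated directories, so the two instances do not share a registry. The exact binary location and flags depend on the Filebeat version and install layout, so treat the following only as a sketch:

/etc/systemd/system/filebeat1.service:

[Unit]
Description=Second Filebeat instance (application logs to Logstash)
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat \
  -c /etc/filebeat1/filebeat.yml \
  -path.home /usr/share/filebeat \
  -path.config /etc/filebeat1 \
  -path.data /var/lib/filebeat1 \
  -path.logs /var/log/filebeat1
Restart=always

[Install]
WantedBy=multi-user.target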
My concerns:
a. Is this the right method when we want to push logs to multiple outputs?
b. What factors do I need to consider if I place the above configurations in production? Will running multiple instances of Filebeat cause any resource impact on the server in the long run?