Filebeat modules rely on Elasticsearch ingest pipelines to parse and process the data. If you write directly to Elasticsearch this is set up automatically, but as far as I know it is not done automatically when sending through Logstash (as per the note on the page I linked to). It is therefore likely that your Logstash config will need to change in order to direct the data to the correct ingest pipeline.
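For reference, the Logstash `elasticsearch` output has a `pipeline` option for this. A minimal sketch, assuming the module's ingest pipeline has already been loaded into Elasticsearch (the host and pipeline name here are placeholders; the actual pipeline name depends on the module, fileset, and Filebeat version):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # Placeholder pipeline name; use the ingest pipeline the module loaded,
    # e.g. filebeat-<version>-<module>-<fileset>-pipeline.
    pipeline => "filebeat-5.4.0-system-syslog-pipeline"
  }
}
```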
Do you mean I need to change the output to an ES ingest node?
So I will use grok filters in Logstash instead, because I have many log files that need parsing in my filebeat.prospectors config.
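A minimal Logstash filter along these lines could do the parsing (the pattern and field names here are hypothetical placeholders, not the actual log format in question):

```
filter {
  # Hypothetical example: parse a simple "LEVEL message" log line
  # into a "level" field and the remaining text in "msg".
  grok {
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```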
When I run Filebeat in debug mode, I see this error:
`ERR Not loading modules. Module directory not found: /usr/share/filebeat/bin/module`
Exactly. By default, the module directory is /usr/share/filebeat/module (CentOS).
How do I change this directory in the config?
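One thing that might be worth trying is setting the path in filebeat.yml instead of on the command line; a sketch, assuming the default CentOS RPM layout:

```yaml
# filebeat.yml — point Filebeat at its install directory so it can
# find the module/ subdirectory (default RPM location on CentOS).
path.home: /usr/share/filebeat
```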
Thanks!
I have the same problem with the same config.
path.home doesn't help...
```
[root@elastic filebeat]# ./bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat/
2017/05/17 12:30:56.293475 beat.go:285: INFO Home path: [/usr/share/filebeat/] Config path: [/usr/share/filebeat/] Data path: [/usr/share/filebeat//data] Logs path: [/usr/share/filebeat//logs]
2017/05/17 12:30:56.293524 beat.go:186: INFO Setup Beat: filebeat; Version: 5.4.0
2017/05/17 12:30:56.293593 metrics.go:23: INFO Metrics logging every 30s
2017/05/17 12:30:56.293664 logstash.go:90: INFO Max Retries set to: 3
2017/05/17 12:30:56.293748 outputs.go:108: INFO Activated logstash as output plugin.
2017/05/17 12:30:56.293842 publish.go:295: INFO Publisher name: elastic.sbr.local
2017/05/17 12:30:56.294097 async.go:63: INFO Flush Interval set to: 1s
2017/05/17 12:30:56.294112 async.go:64: INFO Max Bulk Size set to: 2048
2017/05/17 12:30:56.315062 beat.go:221: INFO filebeat start running.
2017/05/17 12:30:56.315100 filebeat.go:81: WARN Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2017/05/17 12:30:56.315134 registrar.go:85: INFO Registry file set to: /usr/share/filebeat/data/registry
2017/05/17 12:30:56.315163 registrar.go:106: INFO Loading registrar data from /usr/share/filebeat/data/registry
2017/05/17 12:30:56.316462 registrar.go:123: INFO States Loaded from registrar: 8
2017/05/17 12:30:56.316500 crawler.go:38: INFO Loading Prospectors: 4
2017/05/17 12:30:56.316624 prospector_log.go:65: INFO Prospector with previous states loaded: 3
2017/05/17 12:30:56.316715 prospector.go:124: INFO Starting prospector of type: log; id: 17005676086519951868
2017/05/17 12:30:56.316954 prospector_log.go:65: INFO Prospector with previous states loaded: 5
2017/05/17 12:30:56.317090 prospector.go:124: INFO Starting prospector of type: log; id: 4384193151192871875
2017/05/17 12:30:56.317227 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2017/05/17 12:30:56.317334 prospector.go:124: INFO Starting prospector of type: log; id: 3977614963612598612
2017/05/17 12:30:56.317464 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2017/05/17 12:30:56.317534 prospector.go:124: INFO Starting prospector of type: log; id: 12958680918179246529
2017/05/17 12:30:56.317546 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 4
2017/05/17 12:30:56.317560 registrar.go:236: INFO Starting Registrar
2017/05/17 12:30:56.317595 sync.go:41: INFO Start sending events to output
2017/05/17 12:30:56.317634 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/05/17 12:30:56.318264 log.go:91: INFO Harvester started for file: /var/log/secure
```
Sorry @eds, I can't help. I removed the system module. I don't use Elasticsearch ingest pipelines to parse and process the data. If you want to use modules in Filebeat, you need to set the output to Elasticsearch. In my case I need to send logs to Logstash, and some log types need to be parsed.
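For anyone who does want to use modules, a minimal filebeat.yml sketch would look something like this (the host is a placeholder):

```yaml
# Enable a module so Filebeat loads its ingest pipelines.
filebeat.modules:
  - module: system

# Modules require the Elasticsearch output (not Logstash)
# so the ingest pipelines can be loaded and used.
output.elasticsearch:
  hosts: ["localhost:9200"]
```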
Here I want to monitor audit and authentication logs, so I used Wazuh. Wazuh parses the logs automatically and sends the cleaned-up logs to the Elasticsearch cluster.