Is it possible to use processors and fields from Filebeat modules and write the filtered events through the output.file output? I am hoping Filebeat is flexible enough to do this. I don't want to display anything in Kibana for now.
But this isn't dropping any events where the response code is 200.
Do I have to configure paths for both the prospector and the processor? A sketch of what I'm attempting follows.
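Something like this is roughly what I have in mind; note that `apache2.access.response_code` is my guess at the field name the module produces, not something confirmed by the docs:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/apache2/access.log

processors:
  # Assumption: drop events whose parsed response code is 200.
  # apache2.access.response_code is an assumed field name.
  - drop_event:
      when:
        equals:
          apache2.access.response_code: 200

# Write surviving events to a local file instead of Elasticsearch/Kibana.
output.file:
  path: "/tmp/filebeat"
  filename: filebeat.out
```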
I could not find many worked examples. Do you think there is room for some small but explanatory examples like this one in the documentation?
What I understand from the documentation is that I can use processors, instead of regexes with prospectors, to filter events out.
For that filtering to happen, do I need the ingest node feature from Elasticsearch, or say a grok filter from Logstash?
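For comparison, the prospector-level regex approach I'd like to avoid looks roughly like this; the pattern is just an illustration assuming an Apache combined-log layout:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/apache2/access.log
    # Drop lines containing a 200 status at read time.
    # The pattern assumes the status code follows the quoted request line.
    exclude_lines: ['" 200 ']
```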
Quote from the page: "You define processors in the filebeat.yml file to filter and enhance the data before sending events to the configured output." This suggests that Filebeat should be able to use these modules and filter events out itself, before sending anything to the ingest node or Logstash?
I need to filter DNS query logs for some specific domains. It would be resource-intensive to ship the entire DNS query log to Logstash/Elasticsearch just for filtering. I want to do it in Filebeat, without writing complex regular expressions for prospectors.
I would rather just adapt the apache2/nginx or any other Filebeat module to achieve this; a sketch of the end goal is below.
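As a sketch of what I'm after (the field name `dns.question.name` and the domain are purely illustrative assumptions, not module fields I've confirmed):

```yaml
processors:
  # Keep only queries for the domains I care about; drop everything else.
  # dns.question.name is an assumed field name; example.com is a placeholder.
  - drop_event:
      when:
        not:
          contains:
            dns.question.name: "example.com"
```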
But is this even possible in the first place? Or are the grok expressions that I see in Filebeat modules (e.g. https://github.com/elastic/beats/blob/master/filebeat/module/apache2/access/ingest/default.json) intended for the ingest node feature of Elasticsearch? In other words, even with modules installed, are all the logs shipped over to Elasticsearch, with Elasticsearch/ingest doing the parsing and filtering there?