I don't need all of this data. I have storage constraints, so I need a way to filter these logs and ingest only the important events.
I know I can do this with Logstash, but is there any way to do it with Filebeat?
My ELK version: 8.8.2
Filebeat version: 8.8.2
I use the nginx module with this configuration:
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-nginx.html
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/data/logs/nginx/access.log*", "/data/logs/nginx/postdata-access.log*"]

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Error logs
  error:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
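For reference, Filebeat itself can also discard events before they are shipped, using processors in filebeat.yml. A minimal sketch, where the regexp condition is only an illustration and should be matched to your own noisy requests:

# In filebeat.yml: drop matching events before they leave Filebeat.
# At this point module parsing has not happened yet, so match on the raw message.
processors:
  - drop_event:
      when:
        regexp:
          message: "GET /health"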
Two options:

1. Easiest, but perhaps not best long term: add a remove processor to the end of the existing pipeline.
2. Clone the pipeline, then add the remove processor to your custom pipeline and use that pipeline in the module. Here are detailed instructions for that ... but for you, instead of all the grok stuff, you would just add the remove processor as the last processor. A sketch of pointing Filebeat at the custom pipeline is shown right after this list.
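For option 2, one way to route the module's events to the cloned pipeline is a conditional pipeline entry in the Elasticsearch output of filebeat.yml. This is a minimal sketch, assuming you ship directly to Elasticsearch; the pipeline name nginx-access-custom is just an example:

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  # Route nginx access events to the cloned custom pipeline;
  # all other events keep their default pipeline.
  pipelines:
    - pipeline: "nginx-access-custom"
      when.equals:
        event.dataset: "nginx.access"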
So for Number 1):

1. Go to Kibana - Stack Management - Ingest Pipelines.
2. Find the nginx access pipeline. First clone it as a backup, then edit the original pipeline.
3. At the bottom of the existing processors, add a remove processor and list the fields you want to remove. IMPORTANT: don't forget to save the processor and the pipeline.
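For reference, the remove processor you add at the end would look roughly like this in the pipeline's JSON definition. The field names here are only examples of nginx access fields you might not need:

{
  "remove": {
    "field": ["user_agent.original", "nginx.access.remote_ip_list"],
    "ignore_missing": true
  }
}

Setting "ignore_missing": true keeps the pipeline from failing on events that don't carry those fields.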