and created a directory inputs.d under /etc/filebeat/ to hold the individual yml files for parsing each JSON file.
However, when I ran Filebeat, nothing happened -- there was no output whatsoever in ES.
I tried to bypass this issue and looked for alternative solutions, like the one on this page: https://blog.csdn.net/shgh_2004/article/details/98650114
(You don't need to understand Chinese to read the yml.)
In that case, the author combined all of the yml files into /etc/filebeat/filebeat.yml. However, I was concerned about where to put the processors, since each of the JSON files has different fields I wish to drop.
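If everything is combined into one filebeat.yml, processors do not have to be global: Filebeat also accepts a processors list on each individual input, so drop_fields can stay scoped to one JSON file. A minimal sketch (paths and field names here are hypothetical placeholders, not from the original post):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/first.json      # hypothetical path
    json.keys_under_root: true
    processors:                      # applies only to events from this input
      - drop_fields:
          fields: ["field_a", "field_b"]
  - type: log
    paths:
      - /var/log/app/second.json     # hypothetical path
    json.keys_under_root: true
    processors:
      - drop_fields:
          fields: ["field_c"]
```

Input-level processors run only on events produced by that input, so the drop lists never interfere with each other.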
Dear Mark,
Thank you for your clarification. Is there a way I can use several yml files in one Filebeat run? Or do I have to run multiple Filebeat instances?
Yes, that's what I wished to do. The JSON files have different structures, and each of them has different fields I want to drop. (Though I'm not sure whether simply listing all the fields to drop from all files together in one statement would work...)
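A single combined drop_fields statement runs against every event, so it can fail on events that don't contain some of the listed fields. If the processors must stay global, one option is to condition each drop on the source file; this is a sketch, with hypothetical filenames, using the `when` condition and `ignore_missing` supported by the drop_fields processor:

```yaml
processors:
  - drop_fields:
      when:
        contains:
          log.file.path: "first.json"   # hypothetical filename
      fields: ["field_a", "field_b"]
      ignore_missing: true
  - drop_fields:
      when:
        contains:
          log.file.path: "second.json"  # hypothetical filename
      fields: ["field_c"]
      ignore_missing: true
```

Per-input processors are usually cleaner, but the conditional form works when all inputs are defined in one place.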
Thanks again
I tried it, but still nothing happened. I thought it was because I had written filebeat.inputs: instead of filebeat.config.inputs:, but changing it made no difference. (Is the path correct now?)
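For external input files to be picked up at all, filebeat.yml needs a filebeat.config.inputs section pointing at them. A sketch of what that typically looks like (the inputs.d path matches the directory described earlier; reload settings are optional):

```yaml
filebeat.config.inputs:
  enabled: true
  path: /etc/filebeat/inputs.d/*.yml
  # reload.enabled: true    # optional: pick up changed files without restart
  # reload.period: 10s
```

One common pitfall: each file under inputs.d must contain only a bare YAML list of inputs, not a nested filebeat.inputs: key. For example:

```yaml
- type: log
  paths:
    - /var/log/app/first.json   # hypothetical path
  json.keys_under_root: true
```

If the files repeat the filebeat.inputs: key, Filebeat silently loads no inputs, which would match the "nothing happens" symptom.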
I used the following to get filebeat running
#!/usr/bin/env bash
# Script to run Filebeat in foreground with the same path settings that
# the init script / systemd unit file would do.
# Clear the registry so files that were already read are ingested again
rm -rf /var/lib/filebeat/registry/
exec /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish" \
--path.home /usr/share/filebeat \
--path.data /var/lib/filebeat \
--path.logs /var/log/filebeat \
"$@"