Parsing multiple files with Filebeat

Dear all,
I have several JSON files that I wish to parse with Filebeat; and I read the following:
https://stackoverflow.com/questions/39983918/can-filebeat-use-multiple-config-files
I used the code in the above solution to configure my filebeat.yml; it looks like the following:

filebeat.inputs:
- type: log
  enabled: true
  path: inputs.d/*.yml

output.elasticsearch:
  hosts: ["localhost:9200"]

and created a directory inputs.d at /etc/filebeat/ to put in all the individual yml files for parsing each JSON file.
However, when I run Filebeat, nothing happens: there is no output whatsoever in ES.

I tried to bypass this issue and looked for alternative solutions, like the one on this page: https://blog.csdn.net/shgh_2004/article/details/98650114
(You don't need to understand Chinese to read the yml.)
In this case, the author combined all of the yml files into /etc/filebeat/filebeat.yml. However, I was concerned about how to put in the processors, as there are fields I wish to drop for each of the JSON files.
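For reference, a combined layout along those lines could look like the following sketch, with one input per JSON file and the processors nested under each input rather than at the top level (the paths and field names below are placeholders, and per-input processors are an assumption on my part):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/app-a/*.json   # placeholder path
  processors:
    - decode_json_fields:
        fields: ['message']
        target: ''
        overwrite_keys: true
    - drop_fields:
        fields: ['field_only_in_a']   # placeholder field name

- type: log
  enabled: true
  paths:
    - /var/log/app-b/*.json   # placeholder path
  processors:
    - decode_json_fields:
        fields: ['message']
        target: ''
        overwrite_keys: true
    - drop_fields:
        fields: ['field_only_in_b']   # placeholder field name

output.elasticsearch:
  hosts: ["localhost:9200"]
```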

Each of the yml files looked like this:


filebeat.inputs:
- type: log
  enabled: true
  paths:
    -  ...
 
processors:
 - decode_json_fields:
     fields: ['message']
     target: ''
     overwrite_keys: true

 - drop_fields:
     fields: [....]


filebeat.shutdown_timeout: 5s
setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "..."
 

Thank you in advance for your attention :slight_smile:

I think you're misunderstanding how that works.

path is:

A list of glob-based paths that will be crawled and fetched. All patterns supported by Go Glob are also supported here.

It's literally the path of the files that you want Filebeat to process, not a list of configs to read to then process files elsewhere.
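So for your case, paths should point directly at the data files you want ingested, something like this (the directory here is just an illustration):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.json   # the JSON log files themselves, not config files
```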

Just to be clear, you want to have specific processing rules for specific sets of json files?

Dear Mark,
Thank you for your clarification. Is there a way I can use several yml files in one Filebeat run? Or do I have to run multiple Filebeat instances?

Yes, that's what I wished to do. The JSON files have different structures, and in each of them there are different fields I want to drop. (Though I am not sure: if I list all of the fields I want to drop from all files together in one statement, will that work?)
Thanks again :slight_smile:

Try Load external configuration files | Filebeat Reference [7.12] | Elastic

Dear Mark,
I think you mean this?

filebeat.config.inputs:
  enabled: true
  path: inputs.d/*.yml

output.elasticsearch:
  hosts: ["localhost:9200"]

I tried it, but still nothing happened. I thought it was because I had written filebeat.inputs: instead of filebeat.config.inputs:, but it didn't make a difference after I changed it. (Now the path should be correct?)

I used the following to get filebeat running

#!/usr/bin/env bash

# Script to run Filebeat in foreground with the same path settings that
# the init script / systemd unit file would do.

rm -rf /var/lib/filebeat/registry/
exec /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish" \
  --path.home /usr/share/filebeat \
  --path.data /var/lib/filebeat \
  --path.logs /var/log/filebeat \
  "$@"

What does your external config file look like?

Dear Alex,
I listed the external yml at the bottom of the original post.

Then your external config files are invalid. They can only contain input configs. See the example of what should be there at Load external configuration files | Filebeat Reference [7.12] | Elastic. The output config and other settings must stay in the main config file.
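For example, a valid file under inputs.d/ would start directly with the list of inputs, with no filebeat.inputs: key, no output section, and no other top-level settings (the path and field names below are placeholders for illustration):

```yaml
- type: log
  enabled: true
  paths:
    - /var/log/app-a/*.json   # placeholder path
  processors:
    - decode_json_fields:
        fields: ['message']
        target: ''
        overwrite_keys: true
    - drop_fields:
        fields: ['unwanted_field']   # placeholder field name
```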

Dear Alex,
I see... Thank you very much indeed.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.