We are using Logstash to ship logs to a cloud-based SIEM. We'd like to use Filebeat to assist with the parsing, but it seems that Filebeat only provides ingest pipelines for Elasticsearch. The ingest-convert.sh script is supposed to convert ingest pipeline files to Logstash configs, but it expects JSON, while the Filebeat modules define their pipelines in YAML. Is there an alternative to this converter, or a workaround?
I would say the workaround is to load those ingest pipelines into an Elasticsearch server, get the JSON of each pipeline from the cluster, and run that through the converter.
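For the last step, a minimal sketch, assuming the pipeline JSON has been saved to /tmp/pipelines/syslog.json (a hypothetical path) and the command is run from the Logstash home directory; check the converter docs for the exact options your Logstash version supports:

```
# Convert an exported ingest pipeline (JSON) into a Logstash config.
# Input and output paths are examples; adjust to your environment.
bin/ingest-convert.sh \
  --input file:///tmp/pipelines/syslog.json \
  --output file:///tmp/pipelines/syslog.conf
```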
Where is the JSON file generated? Do I need to receive logs in order to generate it, or is it enough to run a module test?
You need to configure Filebeat to connect to an Elasticsearch cluster first, so that it is able to install the ingest pipelines.
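In filebeat.yml that amounts to something like the following (host and port here are assumptions):

```
# filebeat.yml -- minimal Elasticsearch output
output.elasticsearch:
  hosts: ["localhost:9200"]
```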
Then you can get the JSON of the pipeline using the API: use GET /_ingest/pipeline to list the pipelines, and GET /_ingest/pipeline/<pipeline-name> to get a specific pipeline.
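With curl, assuming Elasticsearch on localhost:9200, that looks like this (the pipeline name in the second call is only an example; use one returned by the first call):

```
# List all ingest pipelines installed on the cluster
curl -X GET "localhost:9200/_ingest/pipeline?pretty"

# Fetch a single pipeline by name (example name; pick one from the list above)
curl -X GET "localhost:9200/_ingest/pipeline/filebeat-7.10.2-system-syslog-pipeline?pretty"
```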
Thanks. Is there documentation where I can find how to do this?
I've set Elasticsearch as the output in /etc/filebeat/filebeat.yml, configured the relevant Filebeat modules, loaded the pipelines with filebeat setup --pipelines --modules, and restarted the filebeat service. But the API is returning { } for /_ingest/pipeline.
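For comparison, a sketch of that sequence, assuming a local cluster and the system module (both assumptions; adjust to your setup). Note that --modules expects an explicit comma-separated list of module names, and pipelines are only installed for modules that are enabled:

```
# Enable the module whose pipeline you want (system is an example)
filebeat modules enable system

# Install the ingest pipelines for an explicit module list
filebeat setup --pipelines --modules system

# Verify that the pipelines now show up
curl -X GET "localhost:9200/_ingest/pipeline?pretty"
```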