Filebeat new module - how to complete

I'm trying to create an additional module for Filebeat (for an application log). I followed the manual, but I'm not sure how to complete the task after creating the fields.
How do I move the module into a Filebeat Docker container?

Can you share a bit more detail on what you have done so far, including which version of Filebeat you use?

Sure.
Followed the steps in 'Creating a new module' for Filebeat (version 6.2.2):

  1. Installed Go.
  2. Cloned Beats from GitHub.
  3. Ran make create-module, make create-fileset, and make create-fields.

At this point I'm stuck on the next step. I'm using the filebeat:6.2.2 Docker container and would like to put the new module there (is that feasible?). So I copied the new module folder into the container, committed it, and enabled the new module in filebeat.yml; however, it isn't working. I then loaded the pipeline.json directly into Elasticsearch (via curl -XPUT 'localhost:9200/_ingest/pipeline/filebeat-6.2.2-leo-access-pipeline?pretty'; leo is my new module's name).
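For reference, loading a pipeline by hand needs the pipeline definition as the request body; a sketch, where the path to pipeline.json is an assumption to be adjusted to your module layout:

```shell
# Upload the fileset's ingest pipeline to Elasticsearch manually.
# The file path below is assumed; point it at your module's pipeline.json.
curl -XPUT 'localhost:9200/_ingest/pipeline/filebeat-6.2.2-leo-access-pipeline?pretty' \
  -H 'Content-Type: application/json' \
  -d @module/leo/access/ingest/pipeline.json
```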

If you have successfully generated a fields.yml file, please first check that the names of the fields are correct. As field names can take nearly any form in Ingest pipelines, it's possible that something went wrong. Also, if you are not opening a PR, feel free to remove the description and example entries from the list.

If you are sure that your fields.yml is correct, you should run make update in the root of the Filebeat directory. This generates the Kibana index pattern for you. The generated files must then be deployed to Elasticsearch and Kibana using filebeat setup, which can be done on the host running the container.
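Concretely, the two steps might look like this (the checkout and install directories are placeholders for your own paths):

```shell
# In the filebeat directory of your Beats checkout, on the host:
make update          # regenerates fields.yml and the Kibana index pattern

# Then, from the Filebeat install directory:
./filebeat setup -e  # deploys the index template, dashboards, and index pattern
```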

Also, make sure that you haven't already sent the messages you want to forward: Filebeat does not reread messages, and pipelines and index patterns only become deployed when Filebeat actually sends messages. If the messages have already been sent, delete the data/registry file so that every message is resent. If you don't want to send the messages again, generate new ones instead.
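If you do need to resend, a minimal sketch (stop Filebeat first; data/registry is the 6.x default relative to Filebeat's home directory, so adjust if path.data points elsewhere):

```shell
# Remove the registry so Filebeat treats every log file as new
# and re-reads it from the beginning on the next start.
rm -f data/registry
```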

Hi,
Not sure what you mean by: 'filebeat setup. It can be done on the host of the container'
Should I copy something into the Filebeat container itself? Are all the make and setup steps done on a different machine?

Where did you clone the Beats repository and run the commands you listed: inside the container, on the host running the container, or on a different host?

Outside the container, on the host running the container.

Then you should run make update there, and deploy the required files using the command ./filebeat setup.
This command deploys dashboards, ML jobs, and the Kibana index pattern. In your case the Kibana index pattern is the key component that needs to be deployed.

So I have to clone Beats and install Go inside the container?
Is there no other way, e.g. doing it in the development environment and copying it into (or creating) the container afterwards?
What about the Ingest Node pipeline? Will it be deployed?

You don't need to install the development environment inside the container.

You can do everything on the host, including make create-module, make create-fileset, developing the Ingest pipeline, running make create-fields, and then running make update. Copy the generated module into the container's module folder. Also, copy the generated Kibana index pattern and the fields.yml from under _meta.
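Put together, the host-side workflow might look like this (the module name leo and fileset access are taken from earlier in the thread; the container name and destination paths are assumptions):

```shell
# All of this runs on the host, inside the Beats checkout.
cd beats/filebeat

make create-module MODULE=leo                  # scaffold the module
make create-fileset MODULE=leo FILESET=access  # scaffold the fileset
# ...develop the ingest pipeline and field definitions under module/leo/access...
make create-fields MODULE=leo FILESET=access   # generate the fileset's fields.yml
make update                                    # regenerate fields.yml and index patterns

# Copy the results into the running container (name and paths assumed):
docker cp module/leo my-filebeat:/usr/share/filebeat/module/leo
docker cp fields.yml my-filebeat:/usr/share/filebeat/fields.yml
```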

Once you are done copying everything to the right directories, you should run ./filebeat setup in the container. This deploys the Kibana index pattern. Then run ./filebeat to send your logs to Elasticsearch. Once messages are sent, the Kibana index pattern can be configured, and the pipeline is deployed. If your pipeline does not seem to be updated, try running ./filebeat --update-pipelines. This forces Filebeat to update every pipeline even if nothing has changed. Please don't use this flag outside of module development.
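Inside the container, the final steps could look like this (the -e flag just logs to stderr; --update-pipelines is only for development, as noted):

```shell
# Inside the container:
./filebeat setup -e   # deploy the Kibana index pattern and index template
./filebeat -e         # start shipping logs; the pipeline deploys on first send

# Only while developing the module, to force pipeline updates on every run:
./filebeat -e --update-pipelines
```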
