I'm new to Elastic. I'm using Filebeat v8.5 and have enabled some modules (e.g. IIS, Checkpoint, and a few others), which are working great. I'd also like to create a new Filebeat module for a specific device that can send syslog-JSON. Such a module doesn't exist (or at least I couldn't find it).

What I don't understand is how the configured modules pick the right ingest pipeline on Elasticsearch. For example, in the IIS module config I found the details of the pipeline that is used when running filebeat.exe with the `setup` command, but I couldn't find where that pipeline is actually defined in the module config itself.
First... What version are you on? (Always include that)
You will not create an actual module, but you will configure some of the same components. You will not typically need to run the `setup` command for this, because you will define everything yourself.
The overall process is:

Configure your input (syslog, TCP, or UDP; not sure which is right for you, probably TCP or UDP).

Example UDP input (note all the common options in the docs):
```yaml
filebeat.inputs:
- type: udp
  max_message_size: 10KiB
  host: "localhost:514"
  pipeline: "my-pipeline" # This will call your ingest pipeline
```
If I were you, I would get the input working first without a pipeline... see what the data looks like, and then work on the ingest pipeline. Pipelines take a few minutes of learning, but once you get the hang of them you will be fine. There are ways to test / simulate them etc.
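To illustrate the test / simulate part: here is a minimal sketch of an ingest pipeline that parses a JSON payload out of the syslog `message` field, plus a `_simulate` call, as you would run them in Kibana Dev Tools. The pipeline name `my-pipeline` and the sample payload are just assumptions for illustration, not from an actual module.

```json
PUT _ingest/pipeline/my-pipeline
{
  "description": "Parse the JSON payload out of the syslog message field",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "device",
        "ignore_failure": true
      }
    }
  ]
}

POST _ingest/pipeline/my-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "{\"event\":\"login\",\"user\":\"alice\"}" } }
  ]
}
```

The `_simulate` response shows you the transformed documents without indexing anything, which makes it easy to iterate on the processors.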
Hope this helps you get started, ... so get started and come back with some more questions.
When you come back show us samples of your data and what you want and perhaps we can help.
Thanks @stephenb, it actually worked as per your example. However, I still don't understand how a Filebeat module decides which ingest pipeline to use.
For example: the CEF module is configured on Filebeat, and on Elasticsearch I can see 3 different pipelines related to CEF. Which one is being used, and how can it be changed?
Basically, the module just does internally what I showed: it sets the pipeline.
I have not looked specifically at the CEF module, but most likely when you see three like that, it calls the base pipeline first, and then with some processing it figures out whether it needs to call one or both of the other pipelines.
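For reference, that kind of chaining is typically done with the `pipeline` processor inside the base pipeline. A hedged sketch of what a base pipeline delegating to sub-pipelines could look like (the pipeline names and the condition field here are made up for illustration, not taken from the real CEF module):

```json
PUT _ingest/pipeline/cef-base-example
{
  "description": "Base pipeline that conditionally delegates to sub-pipelines",
  "processors": [
    {
      "pipeline": {
        "if": "ctx.event?.category == 'firewall'",
        "name": "cef-firewall-example"
      }
    },
    {
      "pipeline": {
        "if": "ctx.event?.category == 'ids'",
        "name": "cef-ids-example"
      }
    }
  ]
}
```

So even though you see three pipelines in Elasticsearch, only the base one needs to be referenced from Filebeat; the others are called from it.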
You can certainly copy pipelines and edit them in Kibana.
If you give samples of your audit logs, perhaps we can help.
There is a syslog input that can help parse the message.
Then you would just define the pipeline like I showed you and start to build it out.
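Putting it together, here is a sketch of the syslog input pointed at a custom pipeline, along the lines of the earlier UDP example (assuming UDP on port 514; the pipeline name `my-device-pipeline` is again just an example):

```yaml
filebeat.inputs:
- type: syslog
  protocol.udp:
    host: "0.0.0.0:514"
  pipeline: "my-device-pipeline" # hypothetical pipeline name
```

The syslog input parses the standard syslog header fields for you, so your ingest pipeline only has to deal with the JSON body of the message.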