Suricata module not working as expected

I'm trying to use Filebeat's Suricata module to send logs to Logstash (and then to Kibana), but I'm not receiving them in the correct format. All fields are prefixed with "json." and the logs are tagged as "beats_input_raw_event". I've tried sending the logs without using the module, setting the codec to "json" instead, and that works. Am I missing something? Thanks in advance

Filebeat modules typically do the parsing through an ingest node pipeline that is created during setup. If you connect directly from Filebeat to Elasticsearch, this is all handled for you. If you connect through Logstash, you must make sure the format is not altered and that the correct pipeline is specified for the Elasticsearch output.
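For example, the Beats input on the Logstash side should leave events untouched; a minimal sketch (port 5044 is just the default), with the matching output shown further down this thread:

input {
  beats {
    port => 5044   # default Beats port; leave the codec alone so the event is not altered
  }
}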


Thank you very much, now I understand. I'll try to do as you said.

I'm having the same issue.

When I send filebeat + suricata module --> Elasticsearch, everything works fine: the proper field names are there. The same goes for filebeat + zeek module --> Elasticsearch; it works great too.

I need to do some additional enrichment to some of this data, so I send the data to logstash first.
The zeek module continues to work correctly: field names are renamed and seem to follow the ECS format, while the suricata module stops renaming the fields and doesn't follow ECS.

As far as consistency goes, this doesn't make sense. I'm guessing these modules are written by different people and act differently.

Is there a way to get filebeat + suricata to apply the pipeline file or formatting rules before sending the data off to Logstash?

Hi, it was actually well explained in the docs, but I didn't read them for some reason.
https://www.elastic.co/guide/en/logstash/current/use-ingest-pipelines.html Basically you need to tell Logstash to use the appropriate ingest pipeline in the output section. I did it like this:
filter {
  ….
}
output {
  elasticsearch {
    hosts => ["myip:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    pipeline => "%{[@metadata][pipeline]}"
  }
}

Greg, thank you for your response. That worked for one of my projects, but I'm still running into some problems with another project.

I've got some remote systems using filebeat to transport bro and suricata logs back to a main ELK stack across town. They are on two different networks.

In a perfect world, this is what I need:

[bro+suricata sensor] --> [filebeat w/ zeek and suricata modules] --> ((internet)) --> logstash --> rabbitmq <-- logstash --> elasticsearch

This doesn't work; I get an error that it can't find the pipeline for processing.
Any suggestions or tips would be greatly appreciated.

Sorry, I'm just a student, so I don't know much about all this. Are you sure the pipeline was loaded by Filebeat? You can do that with filebeat setup --pipelines --modules suricata, but you'll have to set the output to Elasticsearch first and then switch it back to Logstash. If I understood correctly, Elasticsearch needs the pipeline to ingest data coming from Logstash, so I suppose it won't be in the correct format until it is indexed in Elasticsearch. What isn't working exactly? Are you receiving it in the wrong format? Hope this helps,
Bye

I found a solution. I don't know if it's the best solution, but it does work.

My workflow is this:

filebeat sends suricata and bro/zeek logs to logstash
logstash sends it to RabbitMQ (or any other message queue)
later, logstash pulls the data out of RabbitMQ, does some enrichment, then outputs the data to ES

It works fine without the RabbitMQ step, but when that is added, ES doesn't know which pipeline to use.
What I've figured out is that the pipeline information is stored in @metadata.pipeline. When an event goes into RabbitMQ, anything under @metadata is not saved; that's temporary information.

So in the first logstash --> RabbitMQ step, I save the @metadata.pipeline info into a field that IS saved: I create a new field called pipeline, as in the sketch below.
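A minimal sketch of that first pipeline, assuming the mutate filter's copy option and placeholder RabbitMQ connection settings:

filter {
  mutate {
    # copy the ingest pipeline name into a regular field so it survives RabbitMQ
    copy => { "[@metadata][pipeline]" => "[pipeline]" }
  }
}
output {
  rabbitmq {
    host => "my-broker"        # placeholder broker address
    exchange => "logs"         # placeholder exchange name
    exchange_type => "direct"
  }
}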

Then, when logstash pulls the data out of RabbitMQ and does the enrichment, I copy that data from the new pipeline field back to @metadata.pipeline. Everything works fine once this is done.
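A matching sketch of the second pipeline (again, the queue settings are placeholders, and dropping the temporary field is optional):

input {
  rabbitmq {
    host => "my-broker"   # placeholder broker address
    queue => "logs"       # placeholder queue name
  }
}
filter {
  mutate {
    # restore the pipeline name to @metadata for the elasticsearch output
    copy => { "[pipeline]" => "[@metadata][pipeline]" }
    # remove the temporary field after it has been copied back
    remove_field => [ "pipeline" ]
  }
}
output {
  elasticsearch {
    hosts => ["myip:9200"]
    pipeline => "%{[@metadata][pipeline]}"
  }
}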

I hope that makes sense.
