Suricata module not working as expected


#1

I'm trying to use Filebeat's Suricata module to send logs to Logstash (and then on to Kibana), but I'm not receiving them in the correct format. All fields have "json." prefixed to them, and the logs are tagged "beats_input_raw_event". I've tried sending the logs without the module, setting the codec to "json" instead, and that works. Am I missing something? Thanks in advance


(Christian Dahlqvist) #2

Filebeat modules typically do their parsing through an ingest node pipeline that is created during setup. If you connect Filebeat directly to Elasticsearch, this is all handled for you. If you connect through Logstash, you must make sure the event format is not altered and that the correct ingest pipeline is specified in the Elasticsearch output.
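
In practice that means pointing the Elasticsearch output in your Logstash config at the pipeline name Filebeat records in the event metadata. A minimal sketch of just that output section (the host is an assumption; adjust it for your cluster):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        pipeline => "%{[@metadata][pipeline]}"
      }
    }

Filebeat sets [@metadata][pipeline] on each event when a module is enabled, so this resolves to the right ingest pipeline per module.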


#3

Thank you very much, now I understand. I'll try to do as you said


(Darrell Miller) #4

I'm having the same issue.

When I send filebeat + suricata module --> Elasticsearch, everything works fine: the proper field names are there. Same thing with filebeat + zeek module --> Elasticsearch; it works great too.

I need to do some additional enrichment on some of this data, so I send the data to Logstash first.
The Zeek module continues to work correctly: fields are renamed and seem to follow the ECS format. The Suricata module, however, stops renaming the fields and doesn't follow ECS.

As far as consistency goes, this doesn't make sense. I'm guessing these modules are written by different people and act differently.

Is there a way to get filebeat + suricata to apply the pipeline file or formatting rules before sending the data on?


#5

Hi, it was actually well explained in the docs, but I hadn't read them for some reason.
https://www.elastic.co/guide/en/logstash/current/use-ingest-pipelines.html Basically you need to tell Logstash to use the appropriate ingest pipeline in the output section. I did it like this:
    filter {
      ...
    }
    output {
      elasticsearch {
        hosts => ["myip:9200"]
        manage_template => false
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        pipeline => "%{[@metadata][pipeline]}"
      }
    }


(Darrell Miller) #6

Greg, thank you for your response. That worked for one of my projects, but I'm still running into some problems with another project.

I've got some remote systems using Filebeat to transport Bro and Suricata logs back to a main ELK stack across town. They are on two different networks.

Example, in a perfect world this is what I need:

[bro+suricata sensor] --> [filebeat w/ zeek and suricata modules] --> ((internet)) --> logstash --> rabbitmq <-- logstash --> elasticsearch

This doesn't work; I get an error saying it can't find the pipeline for processing.
Any suggestions or tips would be greatly appreciated.


#7

Sorry, I'm just a student so I don't know much about all this. Are you sure the pipeline was loaded by Filebeat? You can do that with filebeat setup --pipelines --modules suricata, but you'll have to temporarily set Filebeat's output to Elasticsearch and then switch it back to Logstash. If I understood correctly, Elasticsearch needs the ingest pipeline to index data coming from Logstash, so the events won't be in the correct format until they are indexed in Elasticsearch. What isn't working exactly? Are you receiving the data in the wrong format? Hope this helps,
Bye
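
By the way, you can load the pipelines without permanently editing filebeat.yml by overriding the outputs on the command line with -E. A sketch (the Elasticsearch host is an assumption; use your own):

    filebeat setup --pipelines --modules suricata \
      -E output.logstash.enabled=false \
      -E 'output.elasticsearch.hosts=["localhost:9200"]'

This temporarily disables the Logstash output just for the setup run, so your config file stays pointed at Logstash.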