This is my first topic here, and it's generally about the ELK Stack.

Hello,

I don't know if I chose the right category, but I need some answers about the ELK Stack:

1. Can I find an online ELK Stack emulator for config testing?
2. How much memory will the ELK Stack use for monitoring Suricata logs on a small network?
3. Do I need to add a JSON plugin in Logstash, like in the Dockerfile example?
4. Can I add another conf file and/or use a different file name in logstash/pipeline?
5. Where can I find rules for configuring Suricata with Kibana?

Also, can you check the code below for me?

I have a remote Suricata sensor on which I plan to install Filebeat and send the logs to Logstash.

This is my config.

On the Filebeat side:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/suricata/eve.json
      document_type: SuricataIDPS
output:
  logstash:
    hosts: ["xxx.xxx.xxx.xxx:5044"]

On the Logstash side:

input {
  beats {
    port => 5044
    codec => json
  }
}
filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type'] = event['fileinfo']['magic'].to_s.split(',')[0]; end;"
    }
  }

  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
      #database => "/etc/logstash/GeoLite2-City.mmdb"
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip] {
        geoip {
          source => "dest_ip"
          target => "geoip"
          #database => "/etc/logstash/GeoLite2-City.mmdb"
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}

You will likely have better results in these forums by sticking to one issue per thread and using the subject field to succinctly summarise your issue. This helps those browsing the threads to know whether they can be helpful to you without having to read every word of your post.

I'll attempt to answer a couple of your questions to get you started:

1. No, but you can download the individual components of the Elastic Stack to your local computer and run them locally for testing. Although components like Elasticsearch support massive scale, they work just fine for smaller datasets as single-node clusters running on the same hardware as the other Elastic Stack components.

2. The answer here, as is often the case, is "it depends" ("small" for you may not mean the same as "small" to someone else). I regularly set up test clusters with all components of the Elastic Stack on my laptop, and although it shares its 16GB of memory with my IDE and other local tools, it has no problem keeping up.

3. Maybe -- there are actually multiple JSON plugins, so it really depends on (a) the shape of your inbound data, (b) the desired shape of your outbound data (especially if you use a Logstash output in addition to Elasticsearch), and (c) whether any of the data you enrich with is in JSON form.
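To illustrate the difference between two of them (a sketch only; the `payload` field name below is a made-up example, not something from your config): a `json` codec parses the entire inbound event body, while a `json` filter parses just one field.

```
# json codec on an input: the whole message body is parsed as JSON
input {
  beats {
    port => 5044
    codec => json
  }
}

# json filter: parse only a single (here hypothetical) "payload" field
filter {
  json {
    source => "payload"
    target => "payload_parsed"
  }
}
```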

4. Logstash supports multiple pipelines, and each can have its own distinct identifier.
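As a sketch, a `pipelines.yml` might look like this (the pipeline ids and config paths below are made-up examples):

```yaml
# pipelines.yml: one entry per pipeline, each with its own id and config path
- pipeline.id: suricata
  path.config: "/etc/logstash/pipeline/suricata.conf"
- pipeline.id: other-logs
  path.config: "/etc/logstash/pipeline/other-*.conf"
```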

5. This may deserve a forum thread of its own; I personally don't know much about Suricata, but if you post a thread that is entirely about Suricata, the people who do know it will be more likely to see it and respond.

I would also advise avoiding the Ruby filter unless it's really needed. In this case, the mutate filter has a string-splitting directive that will be more performant, as well as more maintainable, in the long term.
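For example, a hedged sketch of a mutate-based replacement for the Ruby filter above (field names taken from that filter; note that `split` leaves an array rather than the plain string the Ruby code produced, and the `copy` option requires a reasonably recent mutate plugin):

```
filter {
  if [event_type] == "fileinfo" {
    # Copy the magic string into the target field first...
    mutate {
      copy => { "[fileinfo][magic]" => "[fileinfo][type]" }
    }
    # ...then split it on commas; the first array element is the value
    # the Ruby filter extracted.
    mutate {
      split => { "[fileinfo][type]" => "," }
    }
  }
}
```

The two operations are in separate mutate blocks deliberately: within a single mutate block, operations run in a fixed internal order, so splitting this into two blocks guarantees that the copy happens before the split.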

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.