Migrating from Graylog to Logstash (SOLVED)

Hello,

At the moment our cluster is composed of:

  • 4 Elasticsearch nodes
  • 3 Graylog instances for log processing/extractors
  • Kibana for visualization (graphs, dashboards)

I want to upgrade our Elasticsearch nodes to 5.0, but unfortunately Graylog doesn't support it yet.
I am planning to use Logstash instead, and have a few questions:

  1. Can we start one Logstash instance for all the inputs we want? I saw that we can configure multiple inputs and outputs in one file, but I don't understand how to define filters that apply only to the logs from a specific input. Also, are we forced to run the /bin/logstash command every time, or is there a way to make Logstash start automatically for all the inputs?

  2. I spent a lot of time in Graylog making extractors and creating all the fields. Graylog has a JSON export of all extractors, which looks like this:

    {
      "extractors": [
        {
          "title": "Squid3 response",
          "extractor_type": "regex",
          "converters": [],
          "order": 14,
          "cursor_strategy": "copy",
          "source_field": "message",
          "target_field": "squid3_response",
          "extractor_config": {
            "regex_value": "pamandzi squid3:.+ [0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3} ([^/]+/[0-9]+) [0-9]{1,10}"
          },
          "condition_type": "string",
          "condition_value": "pamandzi squid3"
        },
        {
          "title": "Squid3 IP client",
          "extractor_type": "regex",
          "converters": [],
          "order": 16,
          "cursor_strategy": "copy",
          "source_field": "message",
          "target_field": "squid3_ip_client",
          "extractor_config": {
            "regex_value": "pamandzi squid3:.+ ([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}) [^/]+/[0-9]+ [0-9]{1,10}"
          },
          "condition_type": "string",
          "condition_value": "pamandzi squid3"
        },

My question is: will I be able to reuse those regexes somehow with Logstash?

  3. Will I have to restart Logstash every time I want to add an input or modify something?

Thanks!

Answers:

  1. People do run with multiple inputs. Usually they set the "type" on each input to a string, e.g. "apache", then in the filter section of the config they "guard" a filter with an if statement, e.g.:
input {
  file {
    ....
    type => "apache"
  }
}
filter {
  if [type] == "apache" {
    # apache-specific filters here
  } else if [type] == "other" {
    # filters for another input type here
  } else {
    # fallback filters for everything else
  }
}
  2. You might be able to, but our grok filter comes with many patterns built in.
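    For instance, the "Squid3 response" extractor from the export above could be ported roughly like this. This is only a sketch: the regex and field name are copied straight from the export, the condition is reproduced as a conditional on the message, and grok's Oniguruma-style named capture (?<field>...) takes the place of Graylog's target_field. It has not been tested against real Squid logs.

    filter {
      # Same condition the Graylog extractor used ("condition_value")
      if "pamandzi squid3" in [message] {
        grok {
          # The capture group from the export, renamed into a named
          # capture so the match lands in the squid3_response field
          match => {
            "message" => "pamandzi squid3:.+ [0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3} (?<squid3_response>[^/]+/[0-9]+) [0-9]{1,10}"
          }
        }
      }
    }

    In practice you may prefer the built-in patterns (e.g. %{IP}, %{NUMBER}) over the hand-written character classes, since they also escape the dots in the IP address properly.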
  3. In Logstash 5.0 (released today) there is a config reload feature which detects that the config has been edited and does the internal reloading for you.
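    Concretely, you can start Logstash 5.0 with automatic config reloading enabled (the config path here is just an example):

    bin/logstash -f /etc/logstash/conf.d/ --config.reload.automatic

    Edits to the config files are then picked up without restarting the process; note that a few inputs, such as stdin, cannot be reloaded this way.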

Thanks for these answers, they helped a lot!

Cheers.