Running multiple pipelines fails

Hello!

I am trying to run multiple pipelines at the same time, but when I run Logstash I get this error:

 WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-04-23 14:36:26.864 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[INFO ] 2018-04-23 14:36:26.870 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
ERROR: Failed to read pipelines yaml file. Location: /usr/share/logstash/config/pipelines.yml
usage:
  bin/logstash -f CONFIG_PATH [-t] [-r] [] [-w COUNT] [-l LOG]
  bin/logstash --modules MODULE_NAME [-M "MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE"] [-t] [-w COUNT] [-l LOG]
  bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG]
  bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash -V [--log.level fatal|error|warn|info|debug|trace]
  bin/logstash --help
[ERROR] 2018-04-23 14:36:27.156 [LogStash::Runner] Logstash - java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit

My pipelines.yml file is in /etc/logstash and looks like this:

- pipeline.id: malwarebytes
  path.config: "/home/XXX/pipelinefilebeatmalwarebytes.conf"
- pipeline.id: ironport
  path.config: "/home/XXX/pipelinefilebeatironport.conf"

I can't understand this error. I read that many errors are caused by a forgotten quote, etc., but I think my config file is correct?

PS: To run Logstash, I go to /usr/share/logstash and then run bin/logstash without any options.

If your configuration files are in /etc/logstash, then run Logstash using

bin/logstash --path.settings /etc/logstash

Hey @Badger

Thanks for your answer, it works fine!

But now I'm stuck on another problem:
I've got one pipeline for my .csv malwarebytes files and another one for my .csv ironport files.

Filebeat is watching /home/log/*.csv

But here is my problem:
How do I make sure that my Malwarebytes CSVs are parsed by the malwarebytes pipeline and my IronPort CSVs by the ironport pipeline? I don't know how to make this distinction.

There are many ways to do this. One would be to use two more specific regexps for the file names, then use 'tags' in Filebeat to add a tag, and conditionals in the pipelines so that each one only processes events that have the right tag.
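For illustration, the tags idea might look like this in filebeat.yml — a minimal sketch, assuming the two file types can be distinguished by name (the path patterns and tag names here are hypothetical):

```yaml
filebeat.prospectors:
- type: log
  paths:
    - /home/log/malwarebytes*.csv   # hypothetical name pattern
  tags: ["malwarebytes"]
- type: log
  paths:
    - /home/log/ironport*.csv       # hypothetical name pattern
  tags: ["ironport"]
```

Each pipeline would then wrap its filters and outputs in a conditional such as `if "malwarebytes" in [tags]`.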

Sorry, I'm not sure I understand; I'm a beginner with ELK.

For the moment I've got two pipelines:

pipelinefilebeatmalwarebytes.conf looks like this:

input {
  beats {
    port => 5044
  }
}
filter {
  csv {
     separator => ";"
     columns => ["Name","Status","Category","Type","EndPoint","Group","Policy","Scanned At","Reported At","Affected Application"]
  }
}
output {
   elasticsearch {
     hosts => "http://localhost:9200"
     index => "malwarebytes-report"
  }
  stdout {}
}

And pipelinefilebeatironport.conf looks like this:

input
{
  beats
  {
    port => 5044
  }

}
filter
{
  csv
  {
    separator => ","
    columns => ["Horodateur de debut","Horodateur de fin","Date de debut","Date de fin","Bloque par le filtrage par reputation","Bloque comme destinataires non valides","Spam detecte","Virus detecte","Detecte par la protection avancee contre les programmes malveillants","Messages contenant des URL malveillantes","Bloque par filtre de contenu","Arrete par DMARC","Marketing","Reseaux sociaux","En masse","La verification/le decodage S/MIME a echoue","La verification/le decodage S/MIME a reussi","Messages sains"]
    convert =>
    {
      "Bloque par le filtrage par reputation" => "float"
      "Bloque comme destinataires non valides" => "float"
      "Spam detecte" => "float"
      "Virus detecte" => "float"
      "Detecte par la protection avancee contre les programmes malveillants" => "float"
      "Messages contenant des URL malveillantes" => "float"
      "Bloque par filtre de contenu" => "float"
      "Arrete par DMARC" => "float"
      "Marketing" => "float"
      "Reseaux sociaux" => "float"
      "En masse" => "float"
      "La verification/le decodage S/MIME a echoue" => "float"
      "La verification/le decodage S/MIME a reussi" => "float"
      "Messages sains" => "float"
    }
  }
  if [message] =~ /^Horodateur/
  {
   drop {}
  }
  date
  {
   match => [ "Horodateur de debut", "UNIX"]
  }
}
output
{
  elasticsearch
  {
    hosts => "http://localhost:9200"
    index => "ironport-monitoring"
  }
  stdout{}
}

And I send my CSV files into /home/log directory.

And, as you know, I want the "pipelinefilebeatmalwarebytes.conf" pipeline to run when I send a Malwarebytes .csv file, and the "pipelinefilebeatironport.conf" pipeline to run when I send an IronPort .csv file.

But I don't know regexp etc :confused:

What does the filebeat.yml look like? Do the two types of files have different names?

My filebeat.yml file looks like this (without comment lines):

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/log/*.csv

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.logstash:
  hosts: ["localhost:5044"]

And yes, these two types of files have different names.

You could do something like this:

  - type: log
    paths:
      - /home/logs/type1.csv
    fields_under_root: true
    fields:
      filetype: type1

  - type: log
    paths:
      - /home/logs/type2.csv
    fields_under_root: true
    fields:
      filetype: type2

Then run a pipeline that routes events to the other pipelines:

output {
  if [filetype] == "type1" {
    tcp { host => "127.0.1.1" port => 7777 }
  } else if [filetype] == "type2" {
    tcp { host => "127.0.1.2" port => 7777 }
  } else {
    [Some other output]
  }
}

and yes, I use the same port on different loopback IP addresses.
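To complete the sketch: each downstream pipeline would then listen on its own loopback address. A minimal sketch, assuming a json_lines codec is also set on the routing pipeline's tcp outputs so events survive the hop intact:

```conf
input {
  tcp {
    host => "127.0.1.1"      # matches the address the router sends type1 to
    port => 7777
    codec => json_lines      # decode the events serialized by the tcp output
  }
}
```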

Thanks for everything, it works fine!

For anyone who might have the same problem, this is my filebeat.yml config:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/log/malwarebytes/*.csv
  fields_under_root: true
  fields:
    filetype: malwarebytes
- type: log
  paths:
    - /home/log/ironport/*.csv
  fields_under_root: true
  fields:
    filetype: ironport

And then I merged my two pipelines into a single one and made the distinction inside it, like this:

output
{
  if [filetype] == "ironport"
  {
    elasticsearch
    {
      hosts => "http://localhost:9200"
      index => "ironport-monitoring"
    }
  }
  else if [filetype] == "malwarebytes"
  {
    elasticsearch
    {
      hosts => "http://localhost:9200"
      index => "malwarebytes-report"
    }
  }
}
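The same conditional also works in the merged filter block, so each file type gets its own CSV parsing. A minimal sketch based on the two original configs (the full columns and convert lists from those files would go inside each csv block):

```conf
filter {
  if [filetype] == "malwarebytes" {
    csv { separator => ";" }    # Malwarebytes reports are semicolon-separated
  } else if [filetype] == "ironport" {
    csv { separator => "," }    # IronPort reports are comma-separated
  }
}
```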

Thx @Badger

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.