Filebeat Modules and Parsing, with Logstash - Cisco Related

From what I can glean from https://www.elastic.co/guide/en/logstash/current/use-ingest-pipelines.html and the thread "Problems parsing Cisco ASA logs using filebeat", successfully using the Filebeat cisco module with Logstash looks like this (correct me if I am wrong):

  1. You must load the Filebeat cisco ingest pipelines into Elasticsearch directly from a Filebeat system, using filebeat setup --pipelines --modules cisco. Because setup talks to Elasticsearch rather than Logstash, you likely have to break your existing logging from that system to perform this one-time configuration; afterward, you can revert to your previous Filebeat config.
  2. Configure Filebeat to output to Logstash, and configure a beats input in Logstash to read the events in.
  3. Then, on your Logstash server, set the pipeline option to %{[@metadata][pipeline]} in the elasticsearch output stanza. This setting configures Logstash to select the correct ingest pipeline based on metadata passed with the event (see the example just after this list).
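
For step 3, the guide linked above shows an output along these lines (the host shown here is a placeholder; adjust it to your cluster):

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "https://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      # hand the event to the ingest pipeline named in the Beats metadata
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => "https://localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}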

Questions/Comments:

  1. So... all the parsing of a Filebeat "module" seems to be done on either the Logstash or Elasticsearch side. It's not quite clear which, but it certainly does not appear to happen on the Filebeat side. This raises the question: what's the point of calling this a Filebeat module when Filebeat doesn't do any of the work?
  2. This seems to REQUIRE that Logstash output to Elasticsearch, right? What if that is not the use case, and we instead want to output somewhere else first, like S3 storage? Are we just out of luck and unable to use these modules?
  3. Why is this so complicated? Why isn't it just a setting you enable in one place, without having to jump through all these hoops?

+1. I have the same questions.

@mgotechlock did you manage to make it work? On my side I didn't, using this configuration:

Filebeat => Logstash (acting only as a simple input/output gateway) => Kafka => Logstash (for event transformation) => Elasticsearch

Please do not ping people not already involved in the thread.

Sorry Christian, I fixed it.

As far as I know, most processing takes place in Elasticsearch ingest pipelines, which means you need to send the data to Elasticsearch in order to use the modules.
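
This also answers question 2 above: only the elasticsearch output plugin has a pipeline option, so pointing Logstash at something like S3 ships the raw, unparsed events. A minimal sketch of such an output (bucket name and region are placeholders):

output {
  # the s3 output has no pipeline option, so no module parsing is applied;
  # events are written to the bucket exactly as received
  s3 {
    bucket => "my-log-archive"
    region => "us-east-1"
    codec  => "json_lines"
  }
}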

Thank you Christian.

I solved my issue (Filebeat => Logstash (acting only as a simple input/output gateway) => Kafka => Logstash (for event transformation) => Elasticsearch); posting it here in case it helps:

  • On the Logstash that acts as a simple input/output gateway:

Filebeat.conf:

input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    # copy @metadata into a regular field so it survives the trip through
    # Kafka; @metadata is not included in the serialized output
    copy => { "@metadata" => "metacopy" }
  }
}

Kafka.conf:

output {
  kafka {
    topic_id => "kafka"
    bootstrap_servers => "server1:9092,server2:9092,server3:9092"
    codec => json
  }
}

  • On the Logstash for event transformation:

main_pipeline.conf:

input {
  kafka {
    topics => ["kafka"]
    bootstrap_servers => "server1:9092,server2:9092,server3:9092"
    consumer_threads => 2
    codec => json
    decorate_events => true
  }
}

filter {
  # no event transformation needed for this use case; parsing happens in the
  # Elasticsearch ingest pipeline selected by the output below
}

output {
  if [metacopy][pipeline] {
    elasticsearch {
      hosts => "https://elastic:9200"
      manage_template => false
      index => "%{[metacopy][beat]}-%{[metacopy][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[metacopy][pipeline]}"
      user => "elastic"
      password => "password"
    }
  } else {
    elasticsearch {
      hosts => "https://elastic:9200"
      manage_template => false
      index => "other"
      user => "elastic"
      password => "password"
    }
  }
}
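
One possible refinement to the transformation pipeline above (an untested sketch): because metacopy is a regular field, it gets indexed into the stored documents along with the event. Moving the copied values back under @metadata in the filter block would keep them out of the documents; the output conditional and sprintf references would then use [@metadata][...] instead of [metacopy][...]:

filter {
  mutate {
    # restore the copied values under @metadata so they can steer the output
    # without being indexed; assumes the gateway copied @metadata to metacopy
    copy => {
      "[metacopy][pipeline]" => "[@metadata][pipeline]"
      "[metacopy][beat]"     => "[@metadata][beat]"
      "[metacopy][version]"  => "[@metadata][version]"
    }
    # remove_field is applied after copy succeeds, so the values survive
    remove_field => ["metacopy"]
  }
}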
