Logstash writes to two different indices when the netflow module is enabled

I enabled the netflow module in Logstash 5.6.3 several days ago and today started streaming flows into Logstash. Before that I had already been collecting logs via Filebeat. What I noticed is that Filebeat's events started appearing inside the netflow index.

Where can I check what's going wrong?

root@kibana:/etc/logstash/conf.d# cat input-filebeat.conf
input {
  beats {
    port => 5443
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/ca.crt"]
    ssl_certificate => "/etc/logstash/kibana.cert.pem"
    ssl_key => "/etc/logstash/kibana.key.p8"
    ssl_verify_mode => "force_peer"
  }
}

root@kibana:/etc/logstash/conf.d# cat output-elasticsearch.conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    manage_template => false
    #index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
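
One quick way to check which indices are actually receiving documents (assuming Elasticsearch is listening locally):

curl -s 'http://127.0.0.1:9200/_cat/indices/netflow-*,filebeat-*?v'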

It seems the reason is the following: when Logstash activates the netflow module, it merges the configuration from the module's template with the configuration that already exists.
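
For context, enabling the module in logstash.yml typically looks like this (the port value here is just illustrative):

modules:
  - name: netflow
    var.input.udp.port: 2055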

In /usr/share/logstash/modules/netflow/configuration/logstash/netflow.conf.erb there is this block:

output {
<%= elasticsearch_output_config() %>
}

And if you look at the function definition in /usr/share/logstash/logstash-core/lib/logstash/modules/logstash_config.rb, you can see:

  def elasticsearch_output_config(type_string = nil)
    hosts = array_to_string(get_setting(LogStash::Setting::SplittableStringArray.new("var.elasticsearch.hosts", String, ["localhost:9200"])))
    index = "#{@name}-#{setting("var.elasticsearch.index_suffix", "%{+YYYY.MM.dd}")}"
    user = @settings["var.elasticsearch.username"]
    password = @settings["var.elasticsearch.password"]
    lines = ["hosts => #{hosts}", "index => \"#{index}\""]
    lines.push(user ? "user => \"#{user}\"" : nil)
    lines.push(password ? "password => \"#{password}\"" : nil)
    lines.push(type_string ? "document_type => #{type_string}" : nil)
    lines.push("ssl => #{@settings.fetch('var.elasticsearch.ssl.enabled', false)}")
    if cacert = @settings["var.elasticsearch.ssl.certificate_authority"]
      lines.push("cacert => \"#{cacert}\"") if cacert
    end
    # NOTE: the first line should be indented in the conf.erb
    <<-CONF
elasticsearch {
    #{lines.compact.join("\n    ")}
    manage_template => false
  }
CONF
  end
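
With the default module settings, this expands to roughly the following; the user, password, document_type and cacert lines are dropped because those settings are unset:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
    ssl => false
    manage_template => false
  }
}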

This adds a second elasticsearch output and sends everything to the index "netflow-%{+YYYY.MM.dd}". Note that the module settings only let you change the suffix (var.elasticsearch.index_suffix), not the netflow- prefix.

As a temporary solution I dropped

output {
<%= elasticsearch_output_config() %>
}

from the /usr/share/logstash/modules/netflow/configuration/logstash/netflow.conf.erb file. Then I created a new filter:

root@kibana:/etc/logstash/conf.d# cat filter-z.conf
filter {
    if [type] == "netflow" {
        mutate {
            add_field => { "[@metadata][index]" => "netflow" }
        }
    } else {
        mutate {
            add_field => { "[@metadata][index]" => "%{[@metadata][beat]}" }
        }
    }
}

and edited the output a little (fields under [@metadata] are never included in the indexed document, so the routing field stays internal):

root@kibana:/etc/logstash/conf.d# cat output-elasticsearch.conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    manage_template => false
    #index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
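
For what it's worth, an equivalent approach would be to branch directly in the output instead of adding the mutate filter (an untested sketch):

output {
  if [type] == "netflow" {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      manage_template => false
      index => "netflow-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}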

Thanks for sharing this!

I'll see if we can do something to make running with existing configs a little clearer in the docs.

To close the loop - https://github.com/elastic/logstash/issues/8551
