How can I get the index name for netflow to be set correctly upon creation?
Some background:
Prior to using Netflow, I've been using Logstash to send beats data into Elasticsearch - Filebeat, Auditbeat, Metricbeat, etc - with no problem.
When I went to use Netflow, since it's a module, I configured it in the logstash.yml file, and that worked just fine once I got Logstash to load the netflow template and dashboards. The problem was that the beats data stopped coming in, which I discovered was because Logstash was skipping the config files in /etc/logstash/conf.d/ and only listening on the Netflow UDP port (one number higher than the beats port). That got me thinking about multiple pipelines.
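For reference, the module setup in logstash.yml looked roughly like this (the port and hosts below are examples, not necessarily my exact values):

modules:
  - name: netflow
    var.input.udp.port: 2055
    var.elasticsearch.hosts: "localhost:9200"
    var.kibana.host: "localhost:5601"

With that block present, Logstash ran only the module pipeline and ignored conf.d entirely, which matches what I saw.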
So then I started down the multiple pipelines road, but I couldn't find any official documentation about getting modules and pipelines to work together. I did find this post, which describes getting the netflow module to work alongside the rest of the conf files in /etc/logstash/conf.d/.
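For what it's worth, what I had in mind for multiple pipelines was something along these lines in pipelines.yml (the pipeline IDs and the second path are hypothetical, just to show the shape):

- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: netflow
  path.config: "/etc/logstash/netflow.d/*.conf"

But since I couldn't find how a module fits into that picture, I went with the approach from the post instead.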
The only change I made from that post was to comment out the "output" section in the 01-netflow-input.conf file and instead send everything through the same output the beats already use, 99-output-elasticsearch.conf:
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
This sort of works: the beats data (coming in on a TCP port) goes to its respective indexes, but the netflow data (coming in on a UDP port) ends up in an index literally named "%{[@metadata][beat]}-%{[@metadata][version]}-2019.06.04" - the date at the end is substituted, but the name and version are not.
I suspect this is because Netflow isn't a "beat" but a Logstash module, so its events never carry the [@metadata][beat] and [@metadata][version] fields that the index name relies on.
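One idea I haven't fully explored is splitting the output with a conditional so netflow events get their own index name while the beats keep theirs. A rough sketch, assuming the netflow events carry a top-level [netflow] field (I haven't verified that's the right thing to test on):

output {
  if [netflow] {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "netflow-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}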
Then I thought about adding the output section back to the end of the netflow conf file:
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "netflow-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
but when I do THAT I get the following error:
[2019-06-04T10:44:25,414][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"netflow-2019.06.04", :_type=>"_doc", :routing=>nil}, #&lt;LogStash::Event:0x2f6e36e3&gt;], :response=>{"index"=>{"_index"=>"netflow-2019.06.04", "_type"=>"_doc", "_id"=>"_OvyImsBHEmPc_lPyJnn", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [keyword] in document with id '_OvyImsBHEmPc_lPyJnn'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:1372"}}}}}
Some of the data still makes it into the index though.
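I wondered whether renaming the conflicting host field before output would sidestep that mapping error - something like the mutate below, which is untested and where the target field name is just an example:

filter {
  if [netflow] {
    mutate {
      # move the object-valued host field out of the way of the keyword mapping
      rename => { "[host]" => "[netflow_host]" }
    }
  }
}

But that feels like treating the symptom rather than the actual index/template problem.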
So:
- Netflow module config in logstash.yml = Netflow data, but no beats data
- Netflow config in 01-netflow-input.conf = Netflow data and beats data but netflow index isn't named correctly
- Netflow config in 01-netflow-input.conf WITH output section = only some of the data and the rest generates a field parsing error
I feel like I'm close to getting this to work, but I'm missing something silly. It seems it's just the index naming on the Logstash side that's holding me up.
It could also be I've been staring at the config files too long.
Anyone run into something like this before?
Many thanks in advance!