Index per logstash instance?

Hi there,

I've been trying out ELK to collect some Netflow data. I have two LS instances talking to one ES instance, but both LS instances are configured to output to the same ES index name.

Needless to say, it didn't work properly until I made the two index names unique. Is it expected that the index names be unique, or can they be the same? I've had a hard time finding the right search terms to turn up documentation on that question myself.

Thanks!

Is it expected that the index names be unique or can they be the same

They can certainly be the same. What problems did you encounter?

I got a whole bunch of negative feedback from my logstash logs:

[2018-04-27T14:07:03,215][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"elastiflow-2018.04.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3514da47>], :response=>{"index"=>{"_index"=>"elastiflow-2018.04.27", "_type"=>"doc", "_id"=>"SPlJCGMBQmxOb2YYedF8", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [flow.ip_protocol]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"UDP\""}}}}}
[2018-04-27T14:07:03,215][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"elastiflow-2018.04.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x159805c6>], :response=>{"index"=>{"_index"=>"elastiflow-2018.04.27", "_type"=>"doc", "_id"=>"SflJCGMBQmxOb2YYedF8", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [flow.ip_protocol]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"TCP\""}}}}}
[2018-04-27T14:07:03,216][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"elastiflow-2018.04.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x570eb7a7>], :response=>{"index"=>{"_index"=>"elastiflow-2018.04.27", "_type"=>"doc", "_id"=>"SvlJCGMBQmxOb2YYedF8", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [flow.ip_protocol]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"TCP\""}}}}}

That's a completely different problem. The flow.ip_protocol field has apparently been mapped as a number, yet the events you're trying to index have a string in that field. Decide what the correct data type is and adjust the data accordingly. You may want to use an index template to explicitly set the mapping of the field in question (and other fields) to make sure you always get what you expect.
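For example, a minimal template along these lines would pin the field to a string (keyword) mapping for all future elastiflow-* indices. This is only a sketch, assuming Elasticsearch 6.x (your logs show _type "doc") and the index pattern from your logs; Elastiflow ships its own template, so check before overriding anything. You'd create it with PUT _template/elastiflow-ip-protocol-fix (the template name here is made up):

```json
{
  "index_patterns": ["elastiflow-*"],
  "order": 100,
  "mappings": {
    "doc": {
      "properties": {
        "flow": {
          "properties": {
            "ip_protocol": { "type": "keyword" }
          }
        }
      }
    }
  }
}
```

Note that a template only affects indices created after it's installed; the mapping of an existing index can't be changed in place.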

That's interesting. I presumed it was related to index names, because when I made the index name unique, the errors stopped.

This is all a pre-packaged series of configs that are provided by the Elastiflow folks, so I'm hesitant to make adjustments to anything, because it did work just fine out of the box until I added another LS instance.

I presumed it was related to index names, because when I made this name unique, the errors stopped.

Well, that's expected if you have one instance that sends flow.ip_protocol as a number and one that sends it as a string. That doesn't mean two Logstash instances can't share an index, but if they do they have to be aligned in terms of the mappings of the fields.
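If the two instances do have to share the index, one way to align them (a sketch only; the field path is taken from the error above, and it assumes the numeric mapping is the one you want to keep and that the translate filter plugin is available) would be to map the protocol names back to their IANA numbers on the instance that emits strings:

```
filter {
  translate {
    field       => "[flow][ip_protocol]"
    destination => "[flow][ip_protocol]"
    override    => true
    # IANA protocol numbers: ICMP=1, TCP=6, UDP=17
    dictionary  => {
      "ICMP" => "1"
      "TCP"  => "6"
      "UDP"  => "17"
    }
  }
}
```

But as you say below, since these configs are packaged by Elastiflow, the cleaner fix is to run the same config version on both instances rather than patching one of them.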

This is all a pre-packaged series of configs that are provided by the Elastiflow folks, so I'm hesitant to make adjustments to anything, because it did work just fine out of the box until I added another LS instance.

Sounds like it would be incumbent upon the Elastiflow folks to help out then.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.