Netflow.conn_id out of range?


(Diogo Assumpcao) #1

Hi!

After running Logstash with the Netflow module for a little less than two weeks, I started getting the message below in my logs:

[2017-12-27T20:33:49,547][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"netflow-2017.12.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x781a020d>], :response=>{"index"=>{"_index"=>"netflow-2017.12.27", "_type"=>"doc", "_id"=>"zlaumWABILE779PdXcPq", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.conn_id]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (2453470754) out of range of int\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@6d5ef212; line: 1, column: 609]"}}}}}

The flow information Logstash is processing is coming from a Cisco ASA.
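For context on the error: Elasticsearch's `integer` type is a signed 32-bit value, and the rejected `conn_id` is larger than its maximum, so the field needs to be mapped as `long`. A quick check:

```shell
# "integer" in Elasticsearch is signed 32-bit: max is 2^31 - 1 = 2147483647.
# The conn_id from the rejected event above is larger than that.
conn_id=2453470754
int32_max=2147483647
[ "$conn_id" -gt "$int32_max" ] && echo "out of int range"
# prints "out of int range"
```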

I tried editing the netflow.json under logstash-6.1.1/modules/netflow/configuration/elasticsearch and running Logstash with the setup flag again. I confirmed the Elasticsearch template was updated with curl localhost:9200/_template/netflow, but after restarting Elasticsearch and Logstash the issue persisted.

Any ideas?

Thanks!


(Magnus Bäck) #2

You'll have to use an index template to force the netflow.conn_id field to a long, then reindex the data into a new index where said field actually is mapped as a long. Template changes only apply to indices created after the change, which is why editing the template alone didn't fix the existing index.
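A sketch of what that could look like against Elasticsearch 6.x. The template name, the `order` value, and the destination index name here are assumptions for illustration, not something from this thread; adjust them to your setup:

```shell
# Hypothetical template that maps netflow.conn_id as long for all
# future netflow-* indices. A high "order" lets it override the
# mapping from the Netflow module's own template.
curl -XPUT 'localhost:9200/_template/netflow-conn-id-fix' \
  -H 'Content-Type: application/json' -d '{
  "index_patterns": ["netflow-*"],
  "order": 10,
  "mappings": {
    "doc": {
      "properties": {
        "netflow": {
          "properties": {
            "conn_id": { "type": "long" }
          }
        }
      }
    }
  }
}'

# Existing indices keep their old mapping, so reindex into a fresh
# index (which picks up the template, including conn_id as long).
curl -XPOST 'localhost:9200/_reindex' \
  -H 'Content-Type: application/json' -d '{
  "source": { "index": "netflow-2017.12.27" },
  "dest":   { "index": "netflow-2017.12.27-reindexed" }
}'
```

Once the reindex completes, you can delete the old index and, if needed, alias the new one back to the old name.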


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.