Logstash-output-elasticsearch for netflow

I've been having this issue for a while now. I've posted on GitHub, but the problem is not the netflow codec.

jorritfolmer has explained why and given me some insight, but I thought I'd ask here for a bit more help.

[2018-04-03T10:12:48,021][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.03", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x467c29d1>], :response=>{"index"=>{"_index"=>"logstash-2018.04.03", "_type"=>"doc", "_id"=>"emzZiGIBVDzVCvaTWDoP", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.event_time_msec]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (16787034570129189063) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@5783d39f; line: 1, column: 224]"}}}}}

It looks like that number is outside of the bounds of what Elasticsearch can store as a precise value, and Elasticsearch is rejecting the request to store the document.

While the type of a field cannot be changed on an existing index, you may be better off using a double, which can store larger numbers at the cost of precision. EDIT: further discovery showed that this number is wrong; the type does not need to be changed.

A millisecond-level granularity timestamp of 16787034570129189063 is roughly 500 million years in the future. Something is fishy here.

Thank you for your reply.
I need to narrow it down to a device, but I have a feeling it's the Cisco ASA. Do you have anything to suggest?
Should I check the time and zone on the ASA?

When I remove seven decimal places, it looks like a relatively appropriate milliseconds-since-epoch value (March of 2023, just five years in the future?).

That's helpful, thank you.
I am just using @timestamp, nothing fancy that I know of. I don't have access to the firewall, so I can't check the date there.

[2018-04-03T16:42:46,107][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.04.03", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x28d91fe6>], :response=>{"index"=>{"_index"=>"logstash-2018.04.03", "_type"=>"doc", "_id"=>"WRQ-imIBuAdp7i0vXvbJ", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.event_time_msec]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (16520087337464545448) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@6b38de4d; line: 1, column: 383]"}}}}}
$ date (on the server)
Tuesday 3 April 16:57:21 AEST 2018

I have tried deleting the template and adding one with netflow.event_time_msec and a type of long, but it didn't really work. The type still says number; this is just the default logstash template. This was before you suggested using a double. Would I add it in the same spot, the logstash template?
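(For reference, if a mapping override were still needed, a custom template is usually wired in through the elasticsearch output rather than edited in place. A rough sketch, where the host, file path, and template name are placeholders rather than values from this thread:)

    output {
      elasticsearch {
        hosts              => ["localhost:9200"]                      # placeholder
        manage_template    => true
        template           => "/etc/logstash/netflow-template.json"   # hypothetical file holding the mapping JSON
        template_name      => "netflow"
        template_overwrite => true
      }
    }

The mapping change itself (for example, a double for netflow.event_time_msec) would live inside that JSON file, and a new template only applies to indices created after it is installed. As the EDIT above notes, though, the real problem here turned out to be the value, not the type.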

I might set up iptables to only accept netflow from the ASA, just to narrow down whether that's where the problem lies. What do you think?

The netflow data is coming in from the ASA and other devices, but this warning must mean some data is going missing?

I have revised my comment about changing the field's type -- it's the value in this particular document that is wrong, not the underlying type in Elasticsearch.

  • How is [netflow.event_time_msec] populated? (e.g., is there anything in your logstash configuration that explicitly touches the field? Is there anything in your netflow configuration that explicitly sets the event's timestamp?)
  • Is March of 2023 (a millisecond-granularity timestamp roughly five years in the future) an appropriate value for an event?

When encountering a wildly-off timestamp, what is your desired behaviour? Dropping the event?
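If dropping is acceptable, a rough sketch of a guard filter might look like the following. The cutoff (roughly the year 2100 in milliseconds since the epoch) and the tag name are arbitrary choices for illustration, not anything from this thread:

    filter {
      if [netflow][event_time_msec] {
        ruby {
          code => "
            v = event.get('[netflow][event_time_msec]').to_i
            # anything past ~2100-01-01 (4102444800000 ms) is treated as bogus here
            event.tag('bogus_timestamp') if v > 4102444800000
          "
        }
      }
      if 'bogus_timestamp' in [tags] {
        drop { }   # or route these events to a separate output for inspection
      }
    }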

I've tried searching for the value when I get the status 400 exception, and here's what it looks like. I've only removed the geoip output and the real outside IP.
Looks like it's not the ASA but the 2901 Router causing it. Maybe an ACL is causing a packet to be malformed, I don't know :blush:

[2018-04-04T08:13:25,292][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"netflow-2018.04.03", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x54630c5d>], :response=>{"index"=>{"_index"=>"netflow-2018.04.03", "_type"=>"doc", "_id"=>"JSWSjWIBuAdp7i0vaC-j", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [netflow.flow_start_msec]", "caused_by"=>{"type"=>"json_parse_exception", "reason"=>"Numeric value (14340042958798061568) out of range of long (-9223372036854775808 - 9223372036854775807)\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@66e2e84; line: 1, column: 309]"}}}}}

root# cat netflow.log | grep 14340042958798061568

{"syslog_severity":"notice","netflow":{"icmp_code":0,"flowset_id":257,"ipv4_dst_addr":"some_ip_address","ingress_acl_id":"00007d3b-59d07d3b-59b40000","egress_acl_id":"08b90000-00090002-0005c0a8","xlate_dst_addr_ipv4":"2.192.168.101","xlate_src_port":14784,"l4_dst_port":0,"flow_start_msec":14340042958798061568,"ipv4_src_addr":"0.0.0.0","flow_seq_num":3324889,"fw_ext_event":1536,"username":"ƒAÀ¨e9\u0006\u0000\u0017e
+\u0000\u0000À¨Ç\u0006\u0018\u0011\u001B\u0001\u0000\u0000\u0000\u0000};%}:þÈ\u0000\u0000\u0003m\u0000\u0000\u0000\u0006\u0000\u0002\u0000\u0005À¨ƒ\u0006À¨e\u0006\u0006\u0002ñ¡\u0000\u0000À¨Ç","input_snmp":22992,"conn_id":403773953,"version":9,"protocol":0,"icmp_type":0,"xlate_src_addr_ipv4":"13.0.5.0","xlate_dst_port":43139,"event_time_msec":9595789153602158760,"fw_event":65,"l4_src_port":32059,"output_snmp":2387},"syslog_severity_code":5,"syslog_facility":"user-level","type":"netflow","tags":["netflow","Cisco 2901 Router","GeoIP-DST","_geoip_lookup_failure","netflow-message"],"host":"192.168.199.1","@timestamp":"2018-04-03T22:13:36.000Z","syslog_facility_code":1}

In the logstash config, this is what I have:

filter {
  if [type] == "netflow" {

    grok {
      match => { "message" => "%{GREEDYDATA}" }
      add_tag => [ "netflow-message" ]
      remove_field => [ "@version" ]
    } # grok

    syslog_pri {
      syslog_pri_field_name => "syslog5424_pri"
    } # syslog_pri

  } # if netflow
} # filter

Thanks for that

I'd like to know what's causing it. If I'm only ingesting logs from the router, I don't see the warning. When I add the firewall into the mix, ingesting on the same udp port, the warnings start popping up in the log file.
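One way to confirm which exporter is producing the bad records is to give each device its own udp input and tag rather than sharing one port. A rough sketch, where the port numbers and tag names are made up for illustration:

    input {
      udp {
        port  => 2055            # hypothetical port for the ASA
        codec => netflow
        tags  => [ "asa" ]
      }
      udp {
        port  => 2056            # hypothetical port for the 2901 router
        codec => netflow
        tags  => [ "cisco-2901" ]
      }
    }

Any event that then fails to index will carry only one of those tags, which points straight at the offending device.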

You may be interested in enabling the dead-letter-queue. It's a feature that gives Logstash somewhere to put events that can't be delivered so that you can dig through them later without holding up your pipeline.

Thanks yaauie, I will try that. I have enabled the dead letter queue; however, since I still had the output going to a file instead of Elasticsearch, I didn't see any exceptions overnight.
I've changed that now, so I will have to wait and see what happens.

This is weird: when I enable the dead letter queue and set it to a path, the exceptions stop showing up, but the dead letter queue logs don't show anything.
As soon as I disable the dead_letter_queue, I get them again.

I added the following to my logstash config:

dead_letter_queue {
  path => "/loggy/dead_letter_queue"
  commit_offsets => true
  pipeline_id => "main"
}

I've enabled the dead letter queue in logstash.yml
dead_letter_queue.enable: true
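For completeness, the way these pieces normally fit together: the settings in logstash.yml control whether and where the main pipeline writes failed events, while the dead_letter_queue input plugin goes in a separate pipeline (or config file) to read them back out for inspection. A rough sketch, assuming /loggy/dead_letter_queue is also what path.dead_letter_queue points to in logstash.yml:

    # logstash.yml
    dead_letter_queue.enable: true
    path.dead_letter_queue: "/loggy/dead_letter_queue"

    # separate pipeline that reads the DLQ back for inspection
    input {
      dead_letter_queue {
        path           => "/loggy/dead_letter_queue"
        commit_offsets => true
        pipeline_id    => "main"
      }
    }
    output {
      stdout { codec => rubydebug }
    }

Note that only the elasticsearch output writes to the dead letter queue (and only for documents that come back with a 400 or 404), so while the output goes to a file nothing will ever land in it. Also, if the dead_letter_queue input sits in the same config as the main pipeline, Logstash will feed the failed events back into the pipeline that is rejecting them, so keeping the reader in its own pipeline is the safer arrangement.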
