Elasticsearch index issue with logstash-netflow in 5.x

I find that Elasticsearch is not creating the index based on the Logstash configuration below. I have installed the netflow codec through logstash-plugin.
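As a quick sanity check (assuming the standard package install path under /usr/share/logstash), the installed codec and its version can be listed with:

/usr/share/logstash/bin/logstash-plugin list --verbose | grep netflow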

Logstash config:

[root@SERVER logstash]# cat /etc/logstash/conf.d/logstash.conf
input {
  udp {
    host => "6.6.1.8"
    port => 7995
    codec => netflow {
      netflow_definitions => "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.1.2/lib/logstash/codecs/netflow/netflow.yaml"
      versions => [9]
    }
    type => "netflow"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash_netflow-aci"
  }
}

I am expecting the index "logstash_netflow-aci", but it is not being created.
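One way to confirm whether the index exists at all, and how many documents it holds once it does (index name taken from the output config above):

curl -I 'localhost:9200/logstash_netflow-aci'             # 404 = index not created yet
curl 'localhost:9200/logstash_netflow-aci/_count?pretty'  # document count once it exists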

Current indexes:

[root@SERVER projects]# curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open filebeat-2016.11.09 HaQPZXwBTAqF9SBw1inHVA 5 1 18 0 50.3kb 50.3kb
yellow open packetbeat-2016.11.09 UNWnQX57SiOvvYDig-wK9Q 5 1 302742 0 72.8mb 72.8mb
yellow open packetbeat-2016.11.04 I6L8v6mnSQSRCKfT1rUgsg 5 1 182967 0 47.4mb 47.4mb
yellow open packetbeat-2016.11.07 FUDGIRt0TtmnAfcazN1VrA 5 1 617096 0 155.1mb 155.1mb
yellow open packetbeat-2016.11.08 21PV_dgZTjuazEMojFhZTQ 5 1 622560 0 155.9mb 155.9mb
yellow open packetbeat-2016.11.06 opa0TFirRCmGShE6dMWxXw 5 1 612271 0 148.5mb 148.5mb
yellow open filebeat-2016.11.05 azluDE6aQHGuMJ0Frp_oOQ 5 1 1820 0 631.9kb 631.9kb
yellow open packetbeat-2016.11.05 sTXOiFcRS_Sx4-fx-9Pyyg 5 1 600593 0 151.3mb 151.3mb
yellow open .kibana IjH7gZFCTRqv_Yrbi0S7ZA 1 1 88 42 160.8kb 160.8kb

Any thoughts on debug output or logs I can look at to isolate the issue?
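One possible starting point (assuming the default package layout and the log4j2 settings shipped with 5.x) is to validate the pipeline configuration and then watch the Logstash log while flows arrive:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit
tail -f /var/log/logstash/logstash-plain.log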

Is there anything in stdout?

Here is the log. The question I have is: it is installing the template logstash-*. Is this causing the issue? (See the template check after the log below.)

at org.jruby.Main.main(Main.java:197)

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties.
08:19:31.698 [[main]<udp] INFO logstash.inputs.udp - Starting UDP listener {:address=>"0.0.0.0:7995"}
08:19:31.905 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
08:19:31.907 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
08:19:32.170 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
08:19:32.178 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200"]}
08:19:32.182 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>6000}
08:19:32.203 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
08:19:32.228 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

^^^^
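Regarding the template question: the template the elasticsearch output installs is named "logstash" by default, and it can be inspected directly. This is only a diagnostic check, not a fix:

curl 'localhost:9200/_template/logstash?pretty'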

If you start Logstash in the foreground instead of as a service with your configuration, do you see any data received through the stdout plugin? If no traffic is being received, no index will be created.
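A minimal way to do that on a package install (stop the service first so the UDP port is free; --path.settings points Logstash at /etc/logstash) would be something like:

systemctl stop logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --log.level=debug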

Yes, stdout shows a few IPv6 packets.

{
    "netflow" => {
        "output_snmp" => 0,
        "ipv6_dst_addr" => "ff02::16",
        "in_pkts" => 8,
        "ip_protocol_version" => 6,
        "first_switched" => "2016-11-14T15:01:15.999Z",
        "flowset_id" => 2048,
        "l4_src_port" => 0,
        "version" => 9,
        "ipv6_src_addr" => "fe80::92e2:baff:feaf:d1c",
        "flow_seq_num" => 799,
        "in_bytes" => 608,
        "protocol" => 58,
        "last_switched" => "2016-11-14T15:02:50.999Z",
        "input_snmp" => 0,
        "tcp_flags" => 0,
        "l4_dst_port" => 0
    },
    "@timestamp" => 2016-11-14T15:08:01.000Z,
    "@version" => "1",
    "host" => "127.0.0.1",
    "type" => "netflow"
}
{
    "netflow" => {
        "output_snmp" => 0,
        "ipv6_dst_addr" => "ff02::2",
        "in_pkts" => 12,
        "ip_protocol_version" => 6,
        "first_switched" => "2016-11-14T15:01:15.999Z",
        "flowset_id" => 2048,
        "l4_src_port" => 0,
        "version" => 9,
        "ipv6_src_addr" => "fe80::92e2:baff:feaf:d1c",
        "flow_seq_num" => 799,
        "in_bytes" => 576,
        "protocol" => 58,
        "last_switched" => "2016-11-14T15:02:57.999Z",
        "input_snmp" => 0,
        "tcp_flags" => 0,
        "l4_dst_port" => 0
    },
    "@timestamp" => 2016-11-14T15:08:01.000Z,
    "@version" => "1",
    "host" => "127.0.0.1",
    "type" => "netflow"
}
{
    "netflow" => {
        "output_snmp" => 0,
        "ipv6_dst_addr" => "ff02::2",
        "in_pkts" => 12,
        "ip_protocol_version" => 6,
        "first_switched" => "2016-11-14T15:01:13.999Z",
        "flowset_id" => 2048,
        "l4_src_port" => 0,
        "version" => 9,
        "ipv6_src_addr" => "fe80::92e2:baff:feae:8e75",
        "flow_seq_num" => 799,
        "in_bytes" => 576,
        "protocol" => 58,
        "last_switched" => "2016-11-14T15:02:58.999Z",
        "input_snmp" => 0,
        "tcp_flags" => 0,
        "l4_dst_port" => 0
    },
    "@timestamp" => 2016-11-14T15:08:01.000Z,
    "@version" => "1",
    "host" => "127.0.0.1",
    "type" => "netflow"
}

The interface is receiving the IPv4 NetFlow packets as shown below, but the v4 flows are not showing up in the index.

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp6s0f0, link-type EN10MB (Ethernet), capture size 65535 bytes
11:28:51.728794 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 208
11:28:52.339066 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 1428
11:28:52.339185 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 1428
11:28:52.339316 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 1428
11:28:52.339427 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 1428
11:28:52.339512 IP 21.16.4.209.55865 > SERVER-8.7995: UDP, length 244

You have specified v9 in the input but are also receiving v4? Based on the netflow codec documentation it looks like it only supports v5 and v9. Does the behaviour change if you stop sending v4 events and restart it?
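One way to confirm which NetFlow version is actually arriving on the wire is to dump the packet payload: the first two bytes of a NetFlow export header carry the version number (0x0009 for v9, 0x0005 for v5). Interface and port below follow the capture shown above:

tcpdump -nn -i enp6s0f0 -c 5 -X udp port 7995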

The NetFlow version is 9; the IPv4 NetFlow packets are received by the system but are not being indexed.
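Two checks that may help narrow this down (index and field names taken from the output earlier in this thread, log path assuming the default log4j2.properties): whether any IPv4 flow records made it into the index at all, and whether the netflow codec logged any decoding warnings.

curl 'localhost:9200/logstash_netflow-aci/_search?q=netflow.ip_protocol_version:4&size=0&pretty'
grep -i netflow /var/log/logstash/logstash-plain.log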
