Netflow Module Not Creating ES Index


When I launch Logstash 6.5.4 with the Netflow module, using the command:

bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=2055

it does not seem to create an ES index for Netflow, as the docs suggest it should, when I look in Index Management in Kibana. The Kibana index is created, however.

My logstash.yml file:

modules:
  - name: netflow
    var.input.udp.port: 2055
    var.elasticsearch.hosts: "" ""
    var.elasticsearch.ssl.enabled: false
    var.kibana.scheme: http
    var.kibana.ssl.enabled: false
    var.kibana.ssl.verification_mode: disable

Is there an issue with my configuration?


Is Logstash receiving a flow? In Elasticsearch, an index is created the first time a document is written, so if no data is flowing yet it would make sense for there to be no index.
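One quick way to check whether any netflow documents have arrived, assuming Elasticsearch is reachable on localhost:9200 (adjust the host for your setup), is the _cat indices API:

```
# List any netflow indices (name, doc count, size);
# empty output means no netflow document has been indexed yet.
curl -s 'localhost:9200/_cat/indices/netflow-*?v'
```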

Hi Yaauie,

Thanks for getting back to me.

Netflow data should be flowing on our network, but I may need to check whether it is actually reaching the node that Logstash is hosted on. I wanted to confirm the following:

The server Logstash is on has 2 IPs. When I start Logstash with the above command, I receive this (partial) printout:

[2019-05-09T11:04:50,925][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"module-netflow", :thread=>"#<Thread:0x305df264 run>"}
[2019-05-09T11:04:50,978][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>""}
[2019-05-09T11:04:51,008][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"module-netflow"], :non_running_pipelines=>[]}
[2019-05-09T11:04:51,056][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"", :receive_buffer_bytes=>"212992", :queue_size=>"2000"}
[2019-05-09T11:04:51,359][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I wanted to confirm: based on the address shown by the UDP listener, Logstash with the Netflow module should be listening on both of those IP addresses on port 2055, correct? If this setup is correct, then it must be that the flow of data is not making it through.


In the context of servers, 0.0.0.0 can mean "all IPv4 addresses on the local machine". If a host has two IP addresses and a server running on the host is configured to listen on 0.0.0.0, it will be reachable at both of those IP addresses.
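As a minimal sketch of that behavior (plain Python sockets, nothing Logstash-specific): a UDP socket bound to 0.0.0.0 receives datagrams addressed to any of the host's IPv4 addresses, including the loopback address.

```python
import socket

# Bind a UDP socket to 0.0.0.0 ("any address"); port 0 lets the OS
# pick a free port so the example is self-contained.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("0.0.0.0", 0))
port = recv_sock.getsockname()[1]

# Send to the loopback address -- the 0.0.0.0-bound socket still
# receives it, because it listens on every local IPv4 address.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"flow record", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)  # b'flow record'

recv_sock.close()
send_sock.close()
```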


Hi Yaauie,

Thanks for the info. Then it seems that the data is perhaps being blocked by a firewall, or simply not making it through to the node.


You can use "netstat -an | grep 2055" (or "netstat -an | findstr 2055" on Windows) to see what addresses it is listening on.

I once worked on an operating system for which the TCP stack would not bind to addresses in 192.168/16 and 10/8 when binding to 0.0.0.0. On such a system you would have to explicitly bind to an address.

In that case, do you know what file and variable I need to modify to set the IP? I have tried it in the logstash.yml file using the variable "", but the UDP listener always listens on 0.0.0.0.

Apparently the module does not allow you to configure the host.

I also realize that the module IP is not configurable; only the port, it seems. However, this looks like it should be a Logstash input setting (logstash.inputs.udp). I am just not sure where I would configure it.
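As a sketch of a possible workaround (outside the module): the plain udp input plugin does accept a host option, so an equivalent hand-written pipeline could bind to a specific address, assuming the netflow codec plugin is available. The address and index name below are hypothetical examples:

```
input {
  udp {
    host  => "192.0.2.10"   # hypothetical example; defaults to "0.0.0.0"
    port  => 2055
    codec => netflow
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
  }
}
```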

[2019-05-10T13:05:36,861][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>""}
[2019-05-10T13:05:36,880][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"module-netflow"], :non_running_pipelines=>[]}
[2019-05-10T13:05:36,941][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"", :receive_buffer_bytes=>"212992", :queue_size=>"2000"}
[2019-05-10T13:05:37,246][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

If I wanted to bind the IP of Logstash itself (excluding the module), is there not a setting that will allow it to listen on a specified IP?



Just an update: I was able to resolve this issue. It turns out it was not a binding issue but the Linux firewall preventing packets from entering through that port. My original configuration was fine.
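For anyone who hits the same symptom: on a host using firewalld, checking and opening the port might look like this (a sketch; the exact commands depend on which firewall front end your distribution uses):

```
# See which ports are currently allowed
sudo firewall-cmd --list-ports

# Allow inbound NetFlow traffic on UDP 2055 and make it persistent
sudo firewall-cmd --permanent --add-port=2055/udp
sudo firewall-cmd --reload
```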

Thanks for all the help!


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.