Netflow data does not appear in Elasticsearch/Kibana

Hello,
I encountered a problem when using Filebeat to forward Netflow data into Elasticsearch.
I'm using Filebeat 8.5.2, installed on CentOS Stream 9, to send Netflow data to Elastic Stack 8.5.2.

The problem: Kibana does not show any of the data. Why?

Please take a look at my filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:
- type: netflow
  max_message_size: 10KiB
  host: "0.0.0.0:2055"
  protocols: [ v5, v9, ipfix ]
  expiration_timeout: 30m
  queue_size: 8192
#  custom_definitions:
#  - path/to/fields.yml
  detect_sequence_reset: true

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.name: "netflow"
setup.template.pattern: "netflow-*"
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.34.1.152:9200"]
  index: "netflow-%{+yyyy.MM.dd}"
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "********************"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

And here is the output of the "filebeat -e" command:

{"log.level":"info","@timestamp":"2023-02-08T05:31:41.142Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":116813824}}}},"cpu":{"system":{"ticks":410,"time":{"ms":40}},"total":{"ticks":3950,"time":{"ms":440},"value":3950},"user":{"ticks":3540,"time":{"ms":400}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":153091},"version":"8.5.2"},"memstats":{"gc_next":32065384,"memory_alloc":22771232,"memory_total":617366856,"rss":102723584},"runtime":{"goroutines":31}},"filebeat":{"events":{"added":3062,"done":3062},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":3062,"packets":{"received":161}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3062,"active":0,"batches":66,"total":3062},"read":{"bytes":42375},"write":{"bytes":5715024}},"pipeline":{"clients":1,"events":{"active":0,"published":3062,"total":3062},"queue":{"acked":3062}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.33,"15":3.03,"5":3.05,"norm":{"1":0.4163,"15":0.3787,"5":0.3813}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:32:11.143Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":114089984}}}},"cpu":{"system":{"ticks":470,"time":{"ms":60}},"total":{"ticks":4380,"time":{"ms":430},"value":4380},"user":{"ticks":3910,"time":{"ms":370}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":183090},"version":"8.5.2"},"memstats":{"gc_next":29672392,"memory_alloc":20155544,"memory_total":682659104,"rss":99651584},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":112,"added":2883,"done":2771},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2883,"packets":{"received":157}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2771,"active":0,"batches":60,"total":2771},"read":{"bytes":38381},"write":{"bytes":5200334}},"pipeline":{"clients":1,"events":{"active":112,"published":2883,"total":2883},"queue":{"acked":2771}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.56,"15":3.05,"5":3.11,"norm":{"1":0.445,"15":0.3813,"5":0.3888}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:32:41.142Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":109764608}}}},"cpu":{"system":{"ticks":530,"time":{"ms":60}},"total":{"ticks":4800,"time":{"ms":420},"value":4800},"user":{"ticks":4270,"time":{"ms":360}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":213089},"version":"8.5.2"},"memstats":{"gc_next":28889400,"memory_alloc":15134560,"memory_total":744178144,"rss":96215040},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":-112,"added":2671,"done":2783},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2671,"packets":{"received":139}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2783,"active":0,"batches":60,"total":2783},"read":{"bytes":38464},"write":{"bytes":5182605}},"pipeline":{"clients":1,"events":{"active":0,"published":2671,"total":2671},"queue":{"acked":2783}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.45,"15":3.06,"5":3.13,"norm":{"1":0.4313,"15":0.3825,"5":0.3913}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:33:11.142Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":121630720}}}},"cpu":{"system":{"ticks":610,"time":{"ms":80}},"total":{"ticks":5240,"time":{"ms":440},"value":5240},"user":{"ticks":4630,"time":{"ms":360}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":243086},"version":"8.5.2"},"memstats":{"gc_next":44137176,"memory_alloc":27451248,"memory_total":810660712,"rss":107266048},"runtime":{"goroutines":31}},"filebeat":{"events":{"added":2916,"done":2916},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2916,"packets":{"received":156}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2916,"active":0,"batches":63,"total":2916},"read":{"bytes":40343},"write":{"bytes":5454549}},"pipeline":{"clients":1,"events":{"active":0,"published":2916,"total":2916},"queue":{"acked":2916}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.73,"15":3.1,"5":3.23,"norm":{"1":0.4663,"15":0.3875,"5":0.4038}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:33:41.142Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":116092928}}}},"cpu":{"system":{"ticks":640,"time":{"ms":30}},"total":{"ticks":5650,"time":{"ms":410},"value":5650},"user":{"ticks":5010,"time":{"ms":380}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":273091},"version":"8.5.2"},"memstats":{"gc_next":31492840,"memory_alloc":17591552,"memory_total":873198352,"rss":102023168},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":56,"added":2745,"done":2689},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2745,"packets":{"received":146}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2689,"active":0,"batches":59,"total":2689},"read":{"bytes":37481},"write":{"bytes":5030429}},"pipeline":{"clients":1,"events":{"active":56,"published":2745,"total":2745},"queue":{"acked":2689}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.11,"15":3.07,"5":3.13,"norm":{"1":0.3888,"15":0.3838,"5":0.3913}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:34:11.140Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":116240384}}}},"cpu":{"system":{"ticks":690,"time":{"ms":50}},"total":{"ticks":6050,"time":{"ms":400},"value":6050},"user":{"ticks":5360,"time":{"ms":350}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":303087},"version":"8.5.2"},"memstats":{"gc_next":39762648,"memory_alloc":27977568,"memory_total":935504992,"rss":102121472},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":-56,"added":2726,"done":2782},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2726,"packets":{"received":138}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2782,"active":0,"batches":60,"total":2782},"read":{"bytes":38408},"write":{"bytes":5162850}},"pipeline":{"clients":1,"events":{"active":0,"published":2726,"total":2726},"queue":{"acked":2782}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.01,"15":3.06,"5":3.1,"norm":{"1":0.3762,"15":0.3825,"5":0.3875}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:34:41.141Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":111034368}}}},"cpu":{"system":{"ticks":750,"time":{"ms":60}},"total":{"ticks":6420,"time":{"ms":370},"value":6420},"user":{"ticks":5670,"time":{"ms":310}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":333088},"version":"8.5.2"},"memstats":{"gc_next":27136456,"memory_alloc":19937512,"memory_total":993811712,"rss":97153024},"runtime":{"goroutines":31}},"filebeat":{"events":{"added":2539,"done":2539},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":2539,"packets":{"received":133}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2539,"active":0,"batches":57,"total":2539},"read":{"bytes":35874},"write":{"bytes":4722852}},"pipeline":{"clients":1,"events":{"active":0,"published":2539,"total":2539},"queue":{"acked":2539}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3,"15":3.06,"5":3.09,"norm":{"1":0.375,"15":0.3825,"5":0.3863}}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-02-08T05:35:11.142Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":186},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cgroup":{"memory":{"mem":{"usage":{"bytes":123092992}}}},"cpu":{"system":{"ticks":830,"time":{"ms":80}},"total":{"ticks":6940,"time":{"ms":520},"value":6940},"user":{"ticks":6110,"time":{"ms":440}}},"handles":{"limit":{"hard":524288,"soft":1024},"open":12},"info":{"ephemeral_id":"adb31966-83c3-40a9-8af3-6068ccdbeaa4","uptime":{"ms":363086},"version":"8.5.2"},"memstats":{"gc_next":43827832,"memory_alloc":39425112,"memory_total":1073678400,"rss":109223936},"runtime":{"goroutines":31}},"filebeat":{"events":{"active":72,"added":3518,"done":3446},"harvester":{"open_files":0,"running":0},"input":{"netflow":{"flows":3518,"packets":{"received":195}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3446,"active":0,"batches":75,"total":3446},"read":{"bytes":47938},"write":{"bytes":6491280}},"pipeline":{"clients":1,"events":{"active":72,"published":3518,"total":3518},"queue":{"acked":3446}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.01,"15":3.05,"5":3.08,"norm":{"1":0.3762,"15":0.3813,"5":0.385}}}},"ecs.version":"1.6.0"}}

The metrics above show that the Netflow data is being received by the Filebeat machine, and that the resulting events are acked by the Elasticsearch output...

Is there anything I missed?
Thanks in advance...

The netflow input uses UDP; you can't use telnet to test UDP, only TCP.

That's why you got connection refused while testing with telnet.

You may use netcat/nc to test connectivity to the UDP port, something like this:

nc -z -v -u -w3 IP 2055

And the response should be something like this:

Connection to IP 2055 port [udp/*] succeeded!

From what you shared, Filebeat is listening on port 2055/UDP, so test with nc to see if it is a network issue.
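
You can also confirm on the Filebeat host itself that the netflow input actually bound the port. A minimal sanity check, assuming ss is available (it is by default on CentOS Stream 9):

# list listening UDP sockets and filter for the netflow port
ss -lun | grep 2055

If nothing comes back, the input never opened the socket, and the Filebeat logs should tell you why.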

Also, do you have anything in Filebeat logs?

Hi @leandrojmp!
Thanks for your reply.
As you stated, it's true that I should use nc instead of telnet to test the UDP connection:
[screenshot: nc reports the connection to port 2055/UDP succeeded]
And I was wrong to blame the 2055 port.

So there must be another problem. Assuming the Netflow traffic is captured and sent by Filebeat to the Elastic Stack server using the configuration I quoted above,
why can't the Kibana dashboard show any of the Netflow data?

Edit: I will post the newly discovered problem in another thread, because it is unrelated to this thread's title.
Thanks

Edit again: I changed the title of this post and removed the Solved tag. Sorry about that.

Probably because you are not using the Filebeat module for Netflow, just the netflow input; those are different things.

The Filebeat module uses an ingest pipeline to parse the data. If you are using just the input, your data will not be parsed unless you manually add processors to parse it.

Check the module's documentation and switch to it.
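
For reference, switching to the module is usually just a matter of enabling it and loading the assets; a rough sketch (assuming the RPM install, where module configs live under /etc/filebeat/modules.d):

# enable the bundled netflow module (renames netflow.yml.disabled to netflow.yml)
filebeat modules enable netflow
# load the index template and the sample dashboards into Kibana
filebeat setup -e
# load the module's ingest pipelines into Elasticsearch explicitly
filebeat setup --pipelines --modules netflow

The ingest pipeline installed by the setup step is what parses the raw flow records into the fields the dashboards expect.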

Thanks for your reply, really appreciate it...
Here is the updated filebeat configuration based on your suggestions:

  1. Enabled the netflow module
  2. Updated filebeat.yml:
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.name: "netflow"
setup.template.pattern: "netflow-*"
setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "10.34.1.152:5601"

output.elasticsearch:
  hosts: ["10.34.1.152:9200"]
  index: "netflow-%{+yyyy.MM.dd}"
  username: "elastic"
  password: "*********"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  3. /etc/filebeat/modules.d/netflow.yml configuration:
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 2055

For additional information, I can see the netflow indices in Kibana's Index Management, so there does seem to be data in them, but I cannot figure out how to use it.

Is there anything I can do so the data can be shown on the dashboard?
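
For anyone checking the same thing: one way to confirm that documents are actually landing in the daily indices is a query along these lines (adjust the host and credentials to your setup):

curl -u elastic "http://10.34.1.152:9200/_cat/indices/netflow-*?v"

A non-zero docs.count there means the data is in Elasticsearch and the rest is Kibana-side configuration.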

Hello, I think I figured out how to use Index Management. Thank you for your help @leandrojmp

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.