Elasticsearch has an index that I can add to Kibana, but it says no data

I'm using the Filebeat NetFlow module + Elasticsearch + Kibana. I have successfully collected NetFlow from my Cisco router in GNS3 with Filebeat and sent it to Elasticsearch, but when I use Kibana I can't see any data.



Hello,

Could you please check whether you are looking at the right index: is it filebeat-*, or is netflow the index name? Also check the time range; try the last 24 hours filter if the data is not continuous.
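For example, you can list the matching indices and their document counts from Kibana Dev Tools (a quick check, assuming the default filebeat-* naming; adjust the pattern if your index is named differently):

GET _cat/indices/filebeat-*?v&h=index,docs.count,store.size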

Thanks!!


@D_Nang_Kien, as said above, set the time picker to a much wider range, like 30 days, and then start to narrow in. There appear to be 290 documents, so you just need to find them.
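To confirm the documents exist regardless of the time picker, you can also run a count over a deliberately wide window in Dev Tools (a sketch; widen the range further if needed):

GET filebeat-*/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-30d",
        "lte": "now+30d"
      }
    }
  }
}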

Yes, I'm checking the right index.

Yes, sometimes data does appear in Kibana, but mostly no data shows up. I have checked the filebeat-* index, and the data is being continuously updated. My router is also continuously sending NetFlow to port 2055, but when I check in Kibana, nothing appears. I don't know how to make the data display continuously in Kibana in real time.
Here is my filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input-specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream
 
  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - C:\Program Files\Elastic\Beats\8.15.0\filebeat\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: true
# The URL from where to download the dashboard archive. By default, this URL
# has a value that is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "http://localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  
  # Performance preset - one of "balanced", "throughput", "scale",
  # "latency", or "custom".
  preset: balanced

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors, use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch outputs are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

setup.ilm.overwrite: true

Here is my netflow.yml:

# Module: netflow
# Docs: https://www.elastic.co/guide/en/beats/filebeat/main/filebeat-module-netflow.html


- module: netflow
  log:
    enabled: true

    var:
      # The UDP port to listen for NetFlow data.
      netflow_host: 0.0.0.0
      netflow_port: 2055

      # Maximum message size in bytes.
      #max_message_size: 10000

      # NetFlow protocol versions to support.
      #protocols: [v5, v9, ipfix]

      # Expiration timeout for templates in seconds.
      #expiration_timeout: 600s

      # Queue size to buffer NetFlow packets before processing.
      #queue_size: 8192

      # internal_networks specifies which networks are considered internal or private
      # you can specify either a CIDR block or any of the special named ranges listed
      # at: https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html#condition-network
      internal_networks:
        - private

Here is my elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: C:\Users\donangkien\Downloads\GHP\Elastic Search\elasticsearch-8.15.0\data
#
# Path to log files:
#
path.logs: C:\Users\donangkien\Downloads\GHP\Elastic Search\elasticsearch-8.15.0\logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: localhost
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["localhost"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 31-08-2024 01:22:42
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false

xpack.security.enrollment.enabled: false

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["DESKTOP-M0NNR28"]

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

Here is my kibana.yml:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
elasticsearch.serviceAccountToken: "AAEAAWVsYXN0aWMva2liYW5hL215LXRva2VuOmUxYzJtV2RwU3RpTXd3anZQYWtSQ0E"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Example with size based log rotation
#logging.appenders.default:
#  type: rolling-file
#  fileName: /var/logs/kibana.log
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

I'm using a NetFlow generator to test.

Here is my index after running the generator; the document count is increasing, but no data shows up in Kibana.

@D_Nang_Kien

Please do not share text as screenshots; it is very hard to work with...

Most likely, you are writing data without a timezone, so the data is in the "future"... you will need to account for that.

All data is stored in UTC in Elasticsearch.
If you send data from your timezone and do not include the timezone offset, it will be captured and stored as if it were UTC...
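For example, if you are in UTC+7 and the exporter stamps a flow 11:15 local time with no offset, Elasticsearch stores it as 11:15 UTC. That is 18:15 your local time, so the document sits seven hours in the "future", where a short lookback window in Kibana will never find it.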

What timezone are you in?

Rerun the generator.

Go to the time picker and set it to a range that looks into both the past and the "future", then show me what you see.

Also, run this in Dev Tools:

POST /filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "count": {
      "value_count": {
        "field": "@timestamp"
      }
    },
    "min_timestamp": {
      "min": {
        "field": "@timestamp"
      }
    },
    "max_timestamp": {
      "max": {
        "field": "@timestamp"
      }
    }
  }
}
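If max_timestamp comes back later than the current UTC time, your documents are in the "future", which is why a default window such as the last 15 minutes shows nothing.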

Share the command and results in text, please, not a screenshot.

Is this the correct time in your timezone?

Perhaps look at these two processors, add_locale and timestamp, to fix the timestamp.

Or tell your generator to add a timezone offset.

Or go to Kibana -> Stack Management -> Advanced Settings and set the timezone to UTC...
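(The setting there is dateFormat:tz; with it set to UTC, Kibana displays timestamps exactly as they are stored.)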

Oh, and @D_Nang_Kien, welcome to the community... thanks for joining!

Thank you. Here is the time picker:

Here is the result after I ran POST /metricbeat-*/_search in Dev Tools:

{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 0,
    "successful": 0,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": 0,
    "hits": []
  }
}

Here is the result after I ran POST /filebeat-*/_search in Dev Tools:

{
  "took": 31,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 216,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "count": {
      "value": 216
    },
    "min_timestamp": {
      "value": 1725121688000,
      "value_as_string": "2024-08-31T16:28:08.000Z"
    },
    "max_timestamp": {
      "value": 1725362119999,
      "value_as_string": "2024-09-03T11:15:19.999Z"
    }
  }
}

Here is the filebeat-* index content today:

{
    "took": 6,
    "timed_out": false,
    "_shards": {
        "total": 1,
        "successful": 1,
        "skipped": 0,
        "failed": 0
    },
    "hits": {
        "total": {
            "value": 216,
            "relation": "eq"
        },
        "max_score": 1,
        "hits": [
            {
                "_index": ".ds-filebeat-8.15.0-2024.08.31-000001",
                "_id": "66cYtpEBhkQELNUCSIdU",
                "_score": 1,
                "_source": {
                    "@timestamp": "2024-09-03T11:15:19.999Z",
                    "netflow": {
                        "destination_ipv4_prefix_length": 0,
                        "octet_delta_count": 1980,
                        "source_ipv4_prefix_length": 0,
                        "source_transport_port": 65380,
                        "type": "netflow_flow",
                        "ip_next_hop_ipv4_address": "0.0.0.0",
                        "source_ipv4_address": "192.168.21.1",
                        "flow_end_sys_up_time": 2472388,
                        "destination_transport_port": 1900,
                        "flow_start_sys_up_time": 2433984,
                        "egress_interface": 0,
                        "exporter": {
                            "address": "192.168.1.1:54273",
                            "engine_type": 0,
                            "engine_id": 0,
                            "sampling_interval": 0,
                            "version": 5,
                            "timestamp": "2024-09-03T11:15:19.999Z",
                            "uptime_millis": 2498624
                        },
                        "protocol_identifier": 17,
                        "bgp_destination_as_number": 0,
                        "tcp_control_bits": 16,
                        "bgp_source_as_number": 0,
                        "ingress_interface": 2,
                        "ip_class_of_service": 0,
                        "packet_delta_count": 12,
                        "destination_ipv4_address": "239.255.255.250"
                    },
                    "event": {
                        "kind": "event",
                        "category": [
                            "network"
                        ],
                        "action": "netflow_flow",
                        "type": [
                            "connection"
                        ],
                        "start": "2024-09-03T11:14:15.359Z",
                        "end": "2024-09-03T11:14:53.763Z",
                        "duration": 38404000000,
                        "created": "2024-09-03T04:15:20.369Z"
                    },
                    "observer": {
                        "ip": "192.168.1.1"
                    },
                    "destination": {
                        "locality": "external",
                        "port": 1900,
                        "ip": "239.255.255.250"
                    },
                    "network": {
                        "iana_number": 17,
                        "bytes": 1980,
                        "packets": 12,
                        "direction": "unknown",
                        "community_id": "1:vZoex4cPj0WFYJmlyOg0CoMFKjw=",
                        "transport": "udp"
                    },
                    "agent": {
                        "version": "8.15.0",
                        "ephemeral_id": "637cb580-94a2-48cc-bd0c-7da84522aa69",
                        "id": "7c89e55e-ded4-4f98-963d-f6c5c6a24eea",
                        "name": "DESKTOP-M0NNR28",
                        "type": "filebeat"
                    },
                    "related": {
                        "ip": [
                            "192.168.21.1",
                            "239.255.255.250"
                        ]
                    },
                    "flow": {
                        "id": "0SkrHjLImG4",
                        "locality": "external"
                    },
                    "source": {
                        "ip": "192.168.21.1",
                        "locality": "internal",
                        "port": 65380,
                        "bytes": 1980,
                        "packets": 12
                    },
                    "input": {
                        "type": "netflow"
                    },
                    "ecs": {
                        "version": "8.0.0"
                    },
                    "host": {
                        "os": {
                            "name": "Windows 10 Pro",
                            "kernel": "10.0.19041.1415 (WinBuild.160101.0800)",
                            "build": "19041.1415",
                            "type": "windows",
                            "platform": "windows",
                            "version": "10.0",
                            "family": "windows"
                        },
                        "id": "24b158fb-bf29-4640-89a7-3bd1654233e5",
                        "ip": [
                            "fe80::e879:4266:23df:ee6d",
                            "192.168.1.2"
                        ],
                        "mac": [
                            "00-0C-29-A4-89-23"
                        ],
                        "hostname": "desktop-m0nnr28",
                        "name": "desktop-m0nnr28",
                        "architecture": "x86_64"
                    }
                }
            },
            {
                "_index": ".ds-filebeat-8.15.0-2024.08.31-000001",
                "_id": "7KcYtpEBhkQELNUCSIdU",
                "_score": 1,
                "_source": {
                    "@timestamp": "2024-09-03T11:15:19.999Z",
                    "network": {
                        "transport": "udp",
                        "iana_number": 17,
                        "bytes": 472,
                        "packets": 4,
                        "direction": "unknown",
                        "community_id": "1:jqg5p5bQWnFXIkxujIqnt+GfSPk="
                    },
                    "netflow": {
                        "bgp_source_as_number": 0,
                        "bgp_destination_as_number": 0,
                        "type": "netflow_flow",
                        "protocol_identifier": 17,
                        "destination_ipv4_address": "239.255.255.250",
                        "source_ipv4_prefix_length": 0,
                        "tcp_control_bits": 16,
                        "flow_end_sys_up_time": 2477344,
                        "egress_interface": 0,
                        "packet_delta_count": 4,
                        "ip_next_hop_ipv4_address": "0.0.0.0",
                        "ingress_interface": 2,
                        "flow_start_sys_up_time": 2456296,
                        "octet_delta_count": 472,
                        "destination_ipv4_prefix_length": 0,
                        "source_ipv4_address": "192.168.21.1",
                        "source_transport_port": 49679,
                        "destination_transport_port": 1900,
                        "ip_class_of_service": 0,
                        "exporter": {
                            "version": 5,
                            "timestamp": "2024-09-03T11:15:19.999Z",
                            "uptime_millis": 2498624,
                            "address": "192.168.1.1:54273",
                            "engine_type": 0,
                            "engine_id": 0,
                            "sampling_interval": 0
                        }
                    },
                    "observer": {
                        "ip": "192.168.1.1"
                    },
                    "input": {
                        "type": "netflow"
                    },
                    "agent": {
                        "version": "8.15.0",
                        "ephemeral_id": "637cb580-94a2-48cc-bd0c-7da84522aa69",
                        "id": "7c89e55e-ded4-4f98-963d-f6c5c6a24eea",
                        "name": "DESKTOP-M0NNR28",
                        "type": "filebeat"
                    },
                    "source": {
                        "locality": "internal",
                        "port": 49679,
                        "bytes": 472,
                        "packets": 4,
                        "ip": "192.168.21.1"
                    },
                    "destination": {
                        "ip": "239.255.255.250",
                        "locality": "external",
                        "port": 1900
                    },
                    "related": {
                        "ip": [
                            "192.168.21.1",
                            "239.255.255.250"
                        ]
                    },
                    "event": {
                        "type": [
                            "connection"
                        ],
                        "start": "2024-09-03T11:14:37.671Z",
                        "end": "2024-09-03T11:14:58.719Z",
                        "duration": 21048000000,
                        "created": "2024-09-03T04:15:20.369Z",
                        "kind": "event",
                        "category": [
                            "network"
                        ],
                        "action": "netflow_flow"
                    },
                    "flow": {
                        "id": "EyU-YyjZxHw",
                        "locality": "external"
                    },
                    "ecs": {
                        "version": "8.0.0"
                    },
                    "host": {
                        "ip": [
                            "fe80::e879:4266:23df:ee6d",
                            "192.168.1.2"
                        ],
                        "mac": [
                            "00-0C-29-A4-89-23"
                        ],
                        "hostname": "desktop-m0nnr28",
                        "architecture": "x86_64",
                        "os": {
                            "family": "windows",
                            "name": "Windows 10 Pro",
                            "kernel": "10.0.19041.1415 (WinBuild.160101.0800)",
                            "build": "19041.1415",
                            "type": "windows",
                            "platform": "windows",
                            "version": "10.0"
                        },
                        "name": "desktop-m0nnr28",
                        "id": "24b158fb-bf29-4640-89a7-3bd1654233e5"
                    }
                }
            },

Also, I'm using Windows 10 in VMware to connect to a Cisco 7200 router in GNS3, and I've set up an NTP server to sync time between the router and Windows 10. The router is sending NetFlow v5 to port 2055.
Here is Wireshark:

My timezone:

Thank you, and sorry, my English is not good and I'm new to Elasticsearch.

So your tool is not adding the timezone, so you need to add it; you can see the offset in your own results, where @timestamp runs about seven hours ahead of event.created.

As I mentioned above, you will need to add it yourself.

Add these to the processors list above in your filebeat.yml.

I think this is correct... follow the documents I provided above; you will need to test:

  - add_locale: ~
  - timestamp:
      field: "@timestamp"
      layouts:
        - '2006-01-02T15:04:05.999Z'
      test:
        - '2024-09-03T11:15:19.999Z'
      timezone: Local
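
Merged into the existing processors section of the filebeat.yml above, it would look roughly like this (a sketch; verify the layout string against your real timestamps before relying on it):

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - add_locale: ~
  - timestamp:
      field: "@timestamp"
      layouts:
        - '2006-01-02T15:04:05.999Z'
      test:
        - '2024-09-03T11:15:19.999Z'
      timezone: Local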