Packetbeat on Windows 10: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to

Hey everybody. I have a problem with Packetbeat in my Windows 10 VM: when I execute the command in PowerShell as an administrator, .\packetbeat.exe setup -e, the error I get is:

Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to fails: fail to execute the HTTP GET request: Get "": dial tcp connectex: No connection could be made because the target machine actively refused it. ("Aucune connexion n’a pu être établie car l’ordinateur cible l’a expressément refusée.") Response: .

I already tested whether I can access Kibana from the machine, and I can reach it. So please, anyone, help. Thanks in advance.
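For context, packetbeat setup reads the Kibana endpoint from the setup.kibana section of packetbeat.yml; a minimal sketch of that section (the host value here is a placeholder, not taken from the thread):

```yaml
setup.kibana:
  # Scheme and port can be omitted; they default to http and 5601.
  # If Kibana has server.ssl.enabled: true, use an https:// URL instead.
  host: "<kibana-host>:5601"
```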

Hi Tarik,

Please check whether SSL is enabled or not in the Kibana config:

server.ssl.enabled: true or false

I've checked it and found it set to false! Any suggestions?

That looks like a connection issue... Perhaps either A) a firewall issue is blocking connectivity, or B)
Kibana is not bound to the network.

Test A) from the machine Packetbeat is running on, from the command line. You said you tested... how?

curl <!--- EDIT Fixed Port Number

B) Share your kibana.yml.

Test A from the Windows client: I've already disabled the firewall, and when I go to the browser and enter the address, I can reach the Kibana interface.


Test B

# For more configuration options see the configuration guide for Kibana in

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: ""

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#data.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#data.autocomplete.valueSuggestions.terminateAfter: 100000

# This section was automatically generated during setup.
elasticsearch.hosts: ['']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2NTA4MzkxNTAxMDk6WFFTNnBtSGFSQVMxZ0ZuT2F0LWNVZw
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1650839151241.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: [''], ca_trusted_fingerprint: 6238a066b437dcf043b383230a9c9fb20051f132a183d1a4597c0ae3458a8204}]

xpack.encryptedSavedObjects.encryptionKey: d27c13b668d22a94f752425bc075723f
xpack.reporting.encryptionKey: 09725ca29a11a568176649e867520502
xpack.security.encryptionKey: 8a2fdf0784d704ff3cc6301e4ea30cef
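As an aside: these auto-generated encryption keys are just random 32-character strings. A sketch of generating one yourself (using Python's secrets module as a stand-in for whatever tool you prefer):

```python
import secrets

# Kibana requires its xpack.*.encryptionKey values to be at least
# 32 characters long; token_hex(16) yields exactly 32 hex characters.
key = secrets.token_hex(16)
print(key, len(key))
```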

Share your Packetbeat configuration. Did you include authorization/credentials in the Elasticsearch output section?

A 401 is failed authentication... Try again with the credentials via the -u option.
Those would be the Elasticsearch credentials:

curl -u "user:password"
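For reference, curl's -u option just adds a Basic Authorization header, the base64 of "user:password". A small sketch (Python stdlib; the function name is mine) of how that header is built:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # This is exactly the header value `curl -u "user:password"` sends.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("user", "password"))
# → Basic dXNlcjpwYXNzd29yZA==
```

The Authorization header visible in the verbose curl output later in the thread can be decoded the same way with base64.b64decode.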

This is the Packetbeat file:

#################### Packetbeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The packetbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
# You can find the full configuration reference here:

# =============================== Network device ===============================

# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: 4

# The network CIDR blocks that are considered "internal" networks for
# the purpose of network perimeter boundary classification. The valid
# values for internal_networks are the same as those that can be used
# with processor network conditions.
# For a list of available values see:
packetbeat.interfaces.internal_networks:
  - private

# =================================== Flows ====================================

# Set `enabled: false` or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

# =========================== Transaction protocols ============================

packetbeat.protocols:

- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. The default is true.
  enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  ports: [5672]

- type: cassandra
  # Configure the ports where to listen for Cassandra traffic. You can disable
  # the Cassandra protocol by commenting out the list of ports.
  ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports:
    - 443   # HTTPS
    - 993   # IMAPS
    - 995   # POP3S
    - 5223  # XMPP over SSL
    - 8443
    - 8883  # Secure MQTT
    - 9243  # Elasticsearch

- type: sip
  # Configure the ports where to listen for SIP traffic. You can disable
  # the SIP protocol by commenting out the list of ports.
  ports: [5060]

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.

# A list of tags to include in every event. In the default configuration file
# the forwarded tag causes Packetbeat to not add any host fields. If you are
# monitoring a network tap or mirror port then add the forwarded tag.
#tags: [forwarded]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the
# website.

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.

# =============================== Elastic Cloud ================================

# These settings simplify using Packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: [""]
  username: "elastic"
  password: "8xwkbi_Qb-vPMTkEp1GI"
  # If using Elasticsearch's default certificate
  ssl.ca_trusted_fingerprint: "6238A066B437DCF043B383230A9C9FB20051F132A183D1A4597C0AE3458A8204"
  host: ""

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - # Add forwarded to tags when processing data from a network tap or mirror.
    if.contains.tags: forwarded
    then:
      - drop_fields:
          fields: [host]
    else:
      - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - detect_mime_type:
      field: http.request.body.content
      target: http.request.mime_type
  - detect_mime_type:
      field: http.response.body.content
      target: http.response.mime_type

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Packetbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Packetbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the packetbeat.
#instrumentation:
    # Set to true to enable instrumentation of packetbeat.
    #enabled: false

    # Environment in which packetbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Connection refused:

C:\Users\tarik>curl -u "tarik:ROOTROOT"
curl: (7) Failed to connect to port 5061: Connection refused

Your password is visible below; you just posted it publicly, so you'll need to change it.

username: "elastic"
password: "8xwkbi_Qb-vPMTkEp1GI"

C:\Users\tarik>curl -u "elastic:8xwkbi_Qb-vPMTkEp1GI" <!--- FIXED TYPO

You also have setup.kibana in there twice; in packetbeat.yml you should clean up one of them.

# This requires a Kibana endpoint configuration.

  # Kibana Host


  host: "" <!--- Should be 5601

EDIT NOTE: ^^^^ HERE was the typo; it should be 5601.

Also, Packetbeat runs as the packetbeat user, I believe. You should make sure the curl works when executed by that user (which is not root, I believe).

I've opened cmd as an administrator and this is what I got:

C:\Windows\system32>curl -u "elastic:8xwkbi_Qb-vPMTkEp1GI"
curl: (7) Failed to connect to port 5601: Connection refused

I've cleaned up the first one by commenting it out with a #.

I'm lost. When I open the Kibana interface in the browser it works, but I really don't get what I'm missing!

I am not as familiar with curl on Windows; perhaps the -u option does not work there, so you can also try the credentials-in-URL format.

Also try curl with -v for verbose; it should give more detail.

curl -v "http://elastic:8xwkbi_Qb-vPMTkEp1GI@"

Connection refused looks like a network issue; when you get a 401, it is auth.

You can also try telnet

telnet 5061

should look something like...

hyperion:8.1.2 sbrown$ telnet 5601
Connected to localhost.
Escape character is '^]'.
^CConnection closed by foreign host.

It's an authorization problem; the Windows client cannot reach the server.

C:\Windows\system32>curl -v "http://elastic:8xwkbi_Qb-vPMTkEp1GI@"
*   Trying
* connect to port 5061 failed: Connection refused
* Failed to connect to port 5061: Connection refused
* Closing connection 0
curl: (7) Failed to connect to port 5061: Connection refused

I did a simple ping to see whether they can communicate, and it works:

C:\Windows\system32>ping -t

Pinging with 32 bytes of data:
Reply from : bytes=32 time<1ms TTL=64
Reply from : bytes=32 time=1ms TTL=64
Reply from : bytes=32 time=1ms TTL=64
Reply from : bytes=32 time=1ms TTL=64
Reply from : bytes=32 time=1ms TTL=64

Ping statistics for :
    Packets: Sent = 5, Received = 5, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms

Just for the record, I've already configured Winlogbeat and it works. I really don't know why Packetbeat doesn't work! It's confusing me!

Ping does not mean the two servers can communicate on port 5601 over TCP.

That curl -v looks like a connectivity issue, not authorization, to me, although there is not much information.

The curl failing is a problem: that is basically the command that is run during setup. If that does not work, setup will not work.

Did you try telnet? That will try to connect over TCP to port 5601:

telnet 5601
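If telnet isn't installed, the same TCP reachability check can be sketched in a few lines of Python. The demo below connects to a throwaway local listener purely for illustration; in practice you would substitute your Kibana host and port 5601:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (what telnet tests)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener standing in for the Kibana server.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
demo_port = listener.getsockname()[1]

print(port_open("127.0.0.1", demo_port))  # True: something is listening
listener.close()
```

Connection refused here means nothing is listening (or a firewall rejected the connection), which is the same condition curl reports as error (7).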

Yeah, I tried telnet, but it shows me a black window. When I press Ctrl+C, the error message is:

HTTP/1.1 400 Bad Request
connection lost

That is actually good, I think! It means it connected and got a bad HTTP response code.

I do not know what is going on either... there is something basic happening.

Silly question: are the username and password correct?

And you said on this machine you can log into Kibana through the browser at:

If so, after you authenticate with username and password (because you should have to authenticate), or if it is already logged in, log out and then log back in.

Then go to the browser and put in

What happens?

I think you made a mistake in the port, so now I've checked the command:

curl -v "http://elastic:8xwkbi_Qb-vPMTkEp1GI@"

and this is what I got:

C:\Windows\system32>curl -v "http://elastic:8xwkbi_Qb-vPMTkEp1GI@"
*   Trying
* Connected to ( port 5601 (#0)
* Server auth using Basic with user 'elastic'
> GET /api/status HTTP/1.1
> Host:
> Authorization: Basic ZWxhc3RpYzo4eHdrYmlfUWItdlBNVGtFcDFHSQ==
> User-Agent: curl/7.79.1
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< x-content-type-options: nosniff
< referrer-policy: no-referrer-when-downgrade
< kbn-name: elasticserver-virtual-machine
< kbn-license-sig: d1f8df529765f368c84df7290e66279f33c468835f2a876b7da5eb47eb50b9df
< content-type: application/json; charset=utf-8
< cache-control: private, no-cache, no-store, must-revalidate
< content-length: 12845
< vary: accept-encoding
< accept-ranges: bytes
< Date: Mon, 02 May 2022 02:32:49 GMT
< Connection: keep-alive
< Keep-Alive: timeout=120
{"name":"elasticserver-virtual-machine","uuid":"0a91d32b-6888-4e2e-a640-ecc9efb5686d","version":{"number":"8.1.3","build_hash":"c44c8c44c82ed80d1ae3dd990291dcc85b7a27dc","build_number":50723,"build_snapshot":false},"status":{"overall":{"level":"available","summary":"All services are available"},"core":{"elasticsearch":{"level":"available","summary":"Elasticsearch is available","meta":{"warningNodes":[],"incompatibleNodes":[]}},"savedObjects":{"level":"available","summary":"SavedObjects service has completed migrations and is available","meta":{"migratedIndices":{"migrated":0,"skipped":0,"patched":2}}}},"plugins":{"advancedSettings":{"level":"available","summary":"All dependencies are available"},"bfetch":{"level":"available","summary":"All dependencies are available"},"expressionGauge":{"level":"available","summary":"All dependencies are available"},"expressionHeatmap":{"level":"available","summary":"All dependencies are available"},"expressionMetricVis":{"level":"available","summary":"All dependencies are available"},"expressionPie":{"level":"available","summary":"All dependencies are available"},"expressionTagcloud":{"level":"available","summary":"All dependencies are available"},"charts":{"level":"available","summary":"All dependencies are available"},"console":{"level":"available","summary":"All dependencies are available"},"controls":{"level":"available","summary":"All dependencies are available"},"customIntegrations":{"level":"available","summary":"All dependencies are available"},"dashboard":{"level":"available","summary":"All dependencies are available"},"data":{"level":"available","summary":"All dependencies are available"},"dataViewEditor":{"level":"available","summary":"All dependencies are available"},"dataViewFieldEditor":{"level":"available","summary":"All dependencies are available"},"dataViewManagement":{"level":"available","summary":"All dependencies are available"},"dataViews":{"level":"available","summary":"All dependencies are 
available"},"devTools":{"level":"available","summary":"All dependencies are available"},"discover":{"level":"available","summary":"All dependencies are available"},"embeddable":{"level":"available","summary":"All dependencies are available"},"esUiShared":{"level":"available","summary":"All dependencies are available"},"expressionError":{"level":"available","summary":"All dependencies are available"},"expressionImage":{"level":"available","summary":"All dependencies are available"},"expressionMetric":{"level":"available","summary":"All dependencies are available"},"expressionRepeatImage":{"level":"available","summary":"All dependencies are available"},"expressionRevealImage":{"level":"available","summary":"All dependencies are available"},"expressionShape":{"level":"available","summary":"All dependencies are available"},"expressions":{"level":"available","summary":"All dependencies are available"},"fieldFormats":{"level":"available","summary":"All dependencies are available"},"home":{"level":"available","summary":"All dependencies are available"},"inputControlVis":{"level":"available","summary":"All dependencies are available"},"inspector":{"level":"available","summary":"All dependencies are available"},"kibanaOverview":{"level":"available","summary":"All dependencies are available"},"kibanaReact":{"level":"available","summary":"All dependencies are available"},"kibanaUsageCollection":{"level":"available","summary":"All dependencies are available"},"kibanaUtils":{"level":"available","summary":"All dependencies are available"},"management":{"level":"available","summary":"All dependencies are available"},"mapsEms":{"level":"available","summary":"All dependencies are available"},"navigation":{"level":"available","summary":"All dependencies are available"},"newsfeed":{"level":"available","summary":"All dependencies are available"},"presentationUtil":{"level":"available","summary":"All dependencies are available"},"savedObjects":{"level":"available","summary":"All 
dependencies are available"},"savedObjectsManagement":{"level":"available","summary":"All dependencies are available"},"savedObjectsTaggingOss":{"level":"available","summary":"All dependencies are available"},"screenshotMode":{"level":"available","summary":"All dependencies are available"},"share":{"level":"available","summary":"All dependencies are available"},"sharedUX":{"level":"available","summary":"All dependencies are available"},"telemetry":{"level":"available","summary":"All dependencies are available"},"telemetryCollectionManager":{"level":"available","summary":"All dependencies are available"},"telemetryManagementSection":{"level":"available","summary":"All dependencies are available"},"uiActions":{"level":"available","summary":"All dependencies are available"},"urlForwarding":{"level":"available","summary":"All dependencies are available"},"usageCollection":{"level":"available","summary":"All dependencies are available"},"visDefaultEditor":{"level":"available","summary":"All dependencies are available"},"visTypeMarkdown":{"level":"available","summary":"All dependencies are available"},"visTypeHeatmap":{"level":"available","summary":"All dependencies are available"},"visTypeMetric":{"level":"available","summary":"All dependencies are available"},"visTypePie":{"level":"available","summary":"All dependencies are available"},"visTypeTable":{"level":"available","summary":"All dependencies are available"},"visTypeTagcloud":{"level":"available","summary":"All dependencies are available"},"visTypeTimelion":{"level":"available","summary":"All dependencies are available"},"visTypeTimeseries":{"level":"available","summary":"All dependencies are available"},"visTypeVega":{"level":"available","summary":"All dependencies are available"},"visTypeVislib":{"level":"available","summary":"All dependencies are available"},"visTypeXy":{"level":"available","summary":"All dependencies are available"},"visualizations":{"level":"available","summary":"All dependencies are 
available"},"actions":{"level":"available","summary":"All dependencies are available"},"alerting":{"level":"available","summary":"Alerting is (probably) ready"},"apm":{"level":"available","summary":"All dependencies are available"},"banners":{"level":"available","summary":"All dependencies are available"},"canvas":{"level":"available","summary":"All dependencies are available"},"cases":{"level":"available","summary":"All dependencies are available"},"cloud":{"level":"available","summary":"All dependencies are available"},"crossClusterReplication":{"level":"available","summary":"All dependencies are available"},"dashboardEnhanced":{"level":"available","summary":"All dependencies are available"},"dataEnhanced":{"level":"available","summary":"All dependencies are available"},"dataVisualizer":{"level":"available","summary":"All dependencies are available"},"discoverEnhanced":{"level":"available","summary":"All dependencies are available"},"urlDrilldown":{"level":"available","summary":"All dependencies are available"},"embeddableEnhanced":{"level":"available","summary":"All dependencies are available"},"encryptedSavedObjects":{"level":"available","summary":"All dependencies are available"},"enterpriseSearch":{"level":"available","summary":"All dependencies are available"},"eventLog":{"level":"available","summary":"All dependencies are available"},"features":{"level":"available","summary":"All dependencies are available"},"fileUpload":{"level":"available","summary":"All dependencies are available"},"fleet":{"level":"available","summary":"Fleet is available"},"globalSearch":{"level":"available","summary":"All dependencies are available"},"globalSearchBar":{"level":"available","summary":"All dependencies are available"},"globalSearchProviders":{"level":"available","summary":"All dependencies are available"},"graph":{"level":"available","summary":"All dependencies are available"},"grokdebugger":{"level":"available","summary":"All dependencies are 
available"},"indexLifecycleManagement":{"level":"available","summary":"All dependencies are available"},"indexManagement":{"level":"available","summary":"All dependencies are available"},"infra":{"level":"available","summary":"All dependencies are available"},"ingestPipelines":{"level":"available","summary":"All dependencies are available"},"lens":{"level":"available","summary":"All dependencies are available"},"licenseApiGuard":{"level":"available","summary":"All dependencies are available"},"licenseManagement":{"level":"available","summary":"All dependencies are available"},"licensing":{"level":"available","summary":"License fetched"},"lists":{"level":"available","summary":"All dependencies are available"},"logstash":{"level":"available","summary":"All dependencies are available"},"maps":{"level":"available","summary":"All dependencies are available"},"ml":{"level":"available","summary":"All dependencies are available"},"monitoring":{"level":"available","summary":"All dependencies are available"},"observability":{"level":"available","summary":"All dependencies are available"},"osquery":{"level":"available","summary":"All dependencies are available"},"painlessLab":{"level":"available","summary":"All dependencies are available"},"remoteClusters":{"level":"available","summary":"All dependencies are available"},"reporting":{"level":"available","summary":"All dependencies are available"},"rollup":{"level":"available","summary":"All dependencies are available"},"ruleRegistry":{"level":"available","summary":"All dependencies are available"},"runtimeFields":{"level":"available","summary":"All dependencies are available"},"savedObjectsTagging":{"level":"available","summary":"All dependencies are available"},"screenshotting":{"level":"available","summary":"All dependencies are available"},"searchprofiler":{"level":"available","summary":"All dependencies are available"},"security":{"level":"available","summary":"All dependencies are 
available"},"securitySolution":{"level":"available","summary":"All dependencies are available"},"snapshotRestore":{"level":"available","summary":"All dependencies are available"},"spaces":{"level":"available","summary":"All dependencies are available"},"stackAlerts":{"level":"available","summary":"All dependencies are available"},"taskManager":{"level":"available","summary":"All dependencies are available"},"telemetryCollectionXpack":{"level":"available","summary":"All dependencies are available"},"timelines":{"level":"available","summary":"All dependencies are available"},"transform":{"level":"available","summary":"All dependencies are available"},"translations":{"level":"available","summary":"All dependencies are available"},"triggersActionsUi":{"level":"available","summary":"All dependencies are available"},"uiActionsEnhanced":{"level":"available","summary":"All dependencies are available"},"upgradeAssistant":{"level":"available","summary":"All dependencies are available"},"uptime":{"level":"available","summary":"All dependencies are available"},"watcher":{"level":"available","summary":"All dependencies are 
available"}}},"metrics":{"last_updated":"2022-05-02T02:32:48.130Z","collection_interval_in_millis":5000,"os":{"platform":"linux","platformRelease":"linux-5.13.0-40-generic","load":{"1m":0.01,"5m":0.03,"15m":0.06},"memory":{"total_in_bytes":6185930752,"free_in_bytes":1066373120,"used_in_bytes":5119557632},"uptime_in_millis":8058490,"distro":"Ubuntu","distroRelease":"Ubuntu-20.04","cpuacct":{"control_group":"/","usage_nanos":1923934009225},"cpu":{"control_group":"/","cfs_period_micros":100000,"cfs_quota_micros":-1,"stat":{"number_of_elapsed_periods":0,"number_of_times_throttled":0,"time_throttled_nanos":0}}},"process":{"memory":{"heap":{"total_in_bytes":513097728,"used_in_bytes":397529128,"size_limit":2197815296},"resident_set_size_in_bytes":614649856},"pid":3788,"event_loop_delay":10.91568763238512,"event_loop_delay_histogram":{"min":9.019392,"max":25.100287,"mean":10.91568763238512,"exceeds":0,"stddev":0.7090361000363103,"fromTimestamp":"2022-05-02T02:32:43.130Z","lastUpdatedAt":"2022-05-02T02:32:48.126Z","percentiles":{"50":10.969087,"75":10.985471,"95":11.042815,"99":11.108351}},"uptime_in_millis":6412400.1225310005},"processes":[{"memory":{"heap":{"total_in_bytes":513097728,"used_in_bytes":397529128,"size_limit":2197815296},"resident_set_size_in_bytes":614649856},"pid":3788,"event_loop_delay":10.91568763238512,"event_loop_delay_histogram":{"min":9.019392,"max":25.100287,"mean":10.91568763238512,"exceeds":0,"stddev":0.7090361000363103,"fromTimestamp":"2022-05-02T02:32:43.130Z","lastUpdatedAt":"2022-05-02T02:32:48.126Z","percentiles":{"50":10.969087,"75":10.985471,"95":11.042815,"99":11.108351}},"uptime_in_millis":6412400.1225310005}],"response_times":{"avg_in_millis":0,"max_in_millis":0},"concurrent_connections":0,"requests":{"disconnects":0,"total":0,"statusCodes":{},"status_codes":{}}}}* Connection #0 to host left intact
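As a side note, the tail of the `/api/status` payload above can be sanity-checked programmatically instead of eyeballed. A minimal Python sketch (the sample payload is hand-abbreviated, and the exact top-level shape is an assumption based on Kibana's status API):

```python
import json

# Abbreviated, hand-trimmed stand-in for the /api/status payload shown above.
sample = json.dumps({
    "status": {
        "overall": {"level": "available"},
        "plugins": {
            "fleet": {"level": "available", "summary": "Fleet is available"},
            "licensing": {"level": "available", "summary": "License fetched"},
        },
    }
})

def unavailable_plugins(payload: str) -> list[str]:
    """Names of plugins whose status level is anything but 'available'."""
    plugins = json.loads(payload)["status"]["plugins"]
    return [name for name, info in plugins.items() if info["level"] != "available"]

print(unavailable_plugins(sample))  # -> []
```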

and when I use the telnet command, it doesn't work

Telnet DID work... exactly as it should: it connected, but telnet does not speak HTTP.
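To illustrate the difference, here is a minimal Python sketch that stands a throwaway local HTTP server in place of Kibana: a raw TCP connect (which is all telnet does) succeeds on any open port, while Packetbeat additionally needs a full HTTP request/response cycle.

```python
import http.server
import socket
import threading
import urllib.request

def tcp_connect_ok(host: str, port: int) -> bool:
    """What telnet shows: can we open a raw TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def http_get_ok(url: str) -> bool:
    """What Packetbeat needs: a complete HTTP request/response cycle."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

# Stand-in for Kibana: a local HTTP server bound to a random free port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

tcp_ok = tcp_connect_ok("127.0.0.1", port)
http_ok = http_get_ok(f"http://127.0.0.1:{port}/")
print(tcp_ok, http_ok)  # both True: the port is open AND it speaks HTTP
server.shutdown()
```

With a service that listens but does not speak HTTP (say, a raw TCP echo server), `tcp_connect_ok` would still return True while `http_get_ok` would return False, which is exactly what a successful telnet plus a failing `packetbeat setup` looks like.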

This is where I got the 5061... from your packetbeat.yml :slight_smile:

So this all comes down to a typo :slight_smile:
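For anyone else landing here: the fix is the Kibana port in `packetbeat.yml`. A sketch of the corrected `setup.kibana` section, with a placeholder host (Kibana's default port is 5601):

```yaml
setup.kibana:
  # was 5061 (transposed digits); Kibana listens on 5601 by default
  host: "http://<kibana-host>:5601"
```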

I am going to go back and fix all the port numbers so no one else gets confused.

Apologies, I should have seen that right away... darn, I should have caught it in your very first post :frowning:

1 Like

@TARIK_MAZOUZ Did you get setup to work now with the fixed port number? Please let us know so we can mark this solved.

1 Like

First of all, I want to thank you for your time. Now everything seems right from the Windows client:

Loading dashboards (Kibana must be running and reachable)
{"log.level":"info","@timestamp":"2022-05-02T02:52:50.020Z","log.logger":"kibana","log.origin":{"file.name":"kibana/client.go","file.line":182},"message":"Kibana url:","service.name":"packetbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-05-02T02:52:50.165Z","log.logger":"kibana","log.origin":{"file.name":"kibana/client.go","file.line":182},"message":"Kibana url:","service.name":"packetbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-05-02T02:52:52.776Z","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":101},"message":"add_cloud_metadata: hosting provider type not detected.","service.name":"packetbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-05-02T02:53:03.650Z","log.origin":{"file.name":"instance/beat.go","file.line":849},"message":"Kibana dashboards successfully loaded.","service.name":"packetbeat","ecs.version":"1.6.0"}
Loaded dashboards

I've looked in the Kibana interface to see if Packetbeat is receiving anything, but I found nothing :frowning: