Can't load dashboards from Windows version of packetbeat to Kibana on Security Onion

It seems like problems loading dashboards are a common theme with Beats. I've looked over all the forum posts I can find, but I feel like I've just gone deeper and deeper down rabbit holes without finding a solution to my problem.

I am new to the ELK stack, beats and Security Onion. I have a distributed Security Onion setup. I am trying to get packetbeat to ship DNS logs to my Security Onion master server. The DNS server with packetbeat installed is running Windows Server 2016 (build 1607).

With some effort I can get the packetbeat service started (editing the packetbeat.yml configuration file seems to break it more often than not, so any updating or testing is a bit of a dance). However, I can't get it to load the dashboards. My SO master server and DNS server are in different subnets, but they can route to each other and there are no ACLs blocking traffic between the two subnets. Additionally, I've disabled the firewall on the DNS server, and ports tcp/udp 9200 and tcp/udp 5601 are open in both directions on the SO master server. Running the PowerShell cmdlet "Test-NetConnection" from the DNS server, I can reach the SO master server on tcp/9200 but not on tcp/5601.
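For what it's worth, here is a quick cross-platform equivalent of those Test-NetConnection reachability checks, as a minimal Python sketch. The port_open helper is just something I threw together for testing, and 10.0.20.7 is my SO master server's address:

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False

# The same checks Test-NetConnection performed from the DNS server:
# port_open("10.0.20.7", 9200)  # Elasticsearch - reachable in my case
# port_open("10.0.20.7", 5601)  # Kibana - "actively refused" in my case
```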

At the moment I suspect there might be an authentication issue that's preventing the dashboards from loading, but I'm not positive.

This is the error I am seeing:

PowerShell error output on the DNS server after running ".\packetbeat.exe setup --dashboards":

http://10.0.20.7:5601/api/status fails: fail to execute the HTTP GET request: Get http://10.0.20.7:5601/api/status: dial tcp 10.0.20.7:5601: connectex:
No connection could be made because the target machine actively refused it.. Response: .

The packetbeat log files essentially say the same thing.

Kibana and Elasticsearch are version 6.8.6; Packetbeat is version 7.6.0. Also, I should mention that I'll eventually be shipping the logs to Logstash, not Elasticsearch, but my understanding is that the Elasticsearch output has to be enabled in the packetbeat.yml config file in order to load the dashboards.

This is my packetbeat.yml configuration file with usernames and passwords sanitized.

Any assistance is appreciated. Thanks.

#################### Packetbeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The packetbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/packetbeat/index.html

#============================== Network device ================================

# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: 0

#================================== Flows =====================================

# Set `enabled: false` or comment out all options to disable flows reporting.
packetbeat.flows:
  # Set network flow timeout. Flow is killed if no packet is received before being
  # timed out.
  timeout: 30s

  # Configure reporting period. If set to -1, only killed flows will be reported
  period: 10s

#========================== Transaction protocols =============================

packetbeat.protocols:
- type: icmp
  # Enable ICMPv4 and ICMPv6 monitoring. Default: false
  #enabled: true

- type: amqp
  # Configure the ports where to listen for AMQP traffic. You can disable
  # the AMQP protocol by commenting out the list of ports.
  #ports: [5672]

- type: cassandra
  # Cassandra port for traffic monitoring.
  #ports: [9042]

- type: dhcpv4
  # Configure the DHCP for IPv4 ports.
  #ports: [67, 68]

- type: dns
  # Configure the ports where to listen for DNS traffic. You can disable
  # the DNS protocol by commenting out the list of ports.
  ports: [53]

- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  #ports: [80, 8080, 8000, 5000, 8002]

- type: memcache
  # Configure the ports where to listen for memcache traffic. You can disable
  # the Memcache protocol by commenting out the list of ports.
  #ports: [11211]

- type: mysql
  # Configure the ports where to listen for MySQL traffic. You can disable
  # the MySQL protocol by commenting out the list of ports.
  #ports: [3306,3307]

- type: pgsql
  # Configure the ports where to listen for Pgsql traffic. You can disable
  # the Pgsql protocol by commenting out the list of ports.
  #ports: [5432]

- type: redis
  # Configure the ports where to listen for Redis traffic. You can disable
  # the Redis protocol by commenting out the list of ports.
  #ports: [6379]

- type: thrift
  # Configure the ports where to listen for Thrift-RPC traffic. You can disable
  # the Thrift-RPC protocol by commenting out the list of ports.
  #ports: [9090]

- type: mongodb
  # Configure the ports where to listen for MongoDB traffic. You can disable
  # the MongoDB protocol by commenting out the list of ports.
  #ports: [27017]

- type: nfs
  # Configure the ports where to listen for NFS traffic. You can disable
  # the NFS protocol by commenting out the list of ports.
  #ports: [2049]

- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  #ports:
   # - 443   # HTTPS
   # - 993   # IMAPS
   # - 995   # POP3S
   # - 5223  # XMPP over SSL
   # - 8443
   # - 8883  # Secure MQTT
   # - 9243  # Elasticsearch

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.0.20.7:5601"
  username: "USER1"
  password: "PASSWORD1"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Packetbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.0.20.7:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "PASSWORD2"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# packetbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Packetbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Since making this post, I have gone in and replaced packetbeat 7.6.0 with packetbeat 6.8.0 just in case it was a version compatibility issue. It wasn't. I still get the same problem.

However, it appears that tcp/5601 is only listening on 127.0.0.1 on my SO master server, which would explain the "actively refused" error. I've read many other forum posts, kbase articles, etc. that say that if you want Kibana to listen on all interfaces instead of just the loopback, you have to edit Kibana's config file at /etc/kibana/kibana.yml so that the "server.host:" entry reads:

server.host: "0.0.0.0"

I have made this change and restarted Kibana with "so-kibana-restart". No change; tcp/5601 is still binding only to the loopback interface. I even restarted the SO master server, just in case, and still no change.
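In case it helps anyone diagnose the same thing: Security Onion runs Kibana in a Docker container (that's what so-kibana-restart restarts), so if Kibana is containerized, the host-side bind address comes from Docker's port publishing, and editing /etc/kibana/kibana.yml may not change it unless that file is mounted into the container. A rough sketch of the checks I'd run on the master, assuming the container is named so-kibana (verify the name with docker ps first):

```shell
# Which address is 5601 actually bound to on the host?
# 127.0.0.1:5601 means loopback only; 0.0.0.0:5601 means all interfaces.
sudo ss -lnt 'sport = :5601'

# Confirm the Kibana container's name and published ports.
sudo docker ps --format '{{.Names}}\t{{.Ports}}' | grep -i kibana

# Inspect the port mapping Docker set up for the container
# ("so-kibana" is an assumption on my part).
sudo docker inspect so-kibana --format '{{json .HostConfig.PortBindings}}'
```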

Does the change need to be made elsewhere on a Security Onion installation of Kibana? Any help here is appreciated.

And no, so-allow does not work in this instance.

Anyone? Anyone? Bueller?

Well, I was unable to solve the actual problem. What I ended up doing was installing packetbeat on my SO master server just so I could load the dashboards into Kibana from the localhost. Personally, I find this solution to be unsatisfactory. But whatever, I have dashboards for packetbeat now and I can move on with my life.
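For anyone who lands here later, the workaround amounts to running the one-off setup on the master itself, where the loopback-only Kibana is reachable. Something along these lines (credentials are from my sanitized config above; the -E flags are the standard Beats way to override packetbeat.yml settings for a single run):

```shell
# Run on the SO master, so localhost:5601 works despite the loopback-only bind.
# The -E overrides apply only to this invocation; packetbeat.yml is untouched.
sudo packetbeat setup --dashboards \
  -E setup.kibana.host=localhost:5601 \
  -E "output.elasticsearch.hosts=['localhost:9200']" \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=PASSWORD2
```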

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.