We are trying to build a pipeline from Filebeat to Logstash to Elasticsearch, but we are facing connectivity and version compatibility issues

Hi Leandro,

No issues.

We have enabled TLS connections between Elasticsearch and Kibana.

root@darshan-elk-elmaster:/usr/share/elasticsearch/bin# curl -X GET -u elastic:8Qw3_B58rfrMaH+2QXj6 https://darshan-elk-elmaster:9200/ --cacert /etc/elasticsearch/certs/ca/ca.crt
{
  "name" : "darshan-elk-elmaster",
  "cluster_name" : "elk-cluster",
  "cluster_uuid" : "o6DPDkRXQMaYzqD2shwMFg",
  "version" : {
    "number" : "8.13.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "16cc90cd2d08a3147ce02b07e50894bc060a4cbf",
    "build_date" : "2024-04-05T14:45:26.420424304Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Our Elasticsearch is active and cluster health is green.
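(For reference, the green status mentioned here can be confirmed with the `_cluster/health` API, using the same credentials and CA file as the curl above:)

```
curl -u elastic:8Qw3_B58rfrMaH+2QXj6 --cacert /etc/elasticsearch/certs/ca/ca.crt \
  "https://darshan-elk-elmaster:9200/_cluster/health?pretty"
```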

Issue: Kibana only stays active temporarily, then goes down, apparently due to a port problem. I'm not able to get port 5601 listening, which is why the dashboard is also not opening.

Here I'm attaching the Kibana configuration:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
server.publicBaseUrl: "https://darshan-elk-dashboard:5601"

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
server.name: "darshan-elk-dashboard"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificateAuthorities: ["/etc/kibana/certs/elasticsearch/ca.crt"]
server.ssl.certificate: /etc/kibana/certs/kibana/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana/kibana.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://darshan-elk-elmaster:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "admin"
#elasticsearch.password: "admin"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/elasticsearch/ca.crt" ]
#elasticsearch.username: "admin"
#elasticsearch.password: "admin"
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

We need your short and simple help to get the dashboard started, because it is not reachable. Apart from this, everything is fine.

If you need any further info, please ask.

Regards,
Darshan Rawal

Hi Darshan, is your Kibana process running?

  1. Verify with:

ps -ef | grep kibana

  2. If yes, then on the same machine try telnet localhost 5601 to check whether the port is accessible locally.
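If telnet is not installed, a bash-only port check can be sketched with the shell's built-in `/dev/tcp` device (the helper name here is made up for illustration):

```shell
# Minimal telnet alternative: try to open a TCP connection with bash's /dev/tcp.
# Prints "open" if the port accepts connections, "closed" otherwise.
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 5601
```

Note that `/dev/tcp` is a bash feature, not a real device file, so this must be run under bash rather than a POSIX `sh`.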

Hi Ashish,

Apologies for the delay. I'm attaching a screenshot of the commands mentioned above:

Is there anything else we need to verify?

Regards,
Darshan Rawal

It seems Kibana is not running. Please start it and try telnet again.

Also, is your Kibana running on the same machine or a different one?

Yes, the service is running now.

But its status is indicating this type of error:

[ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. getaddrinfo EAI_AGAIN darshan-elk-elmaster

Have you checked the Elasticsearch logs? Is Elasticsearch reachable via curl?

The port is open and the dashboard is able to open, but this type of message keeps occurring.

Regards,
Darshan Rawal

These are the Kibana logs:

{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:45.983+00:00","message":"TaskManager is identified by the Kibana UUID: 21edf948-1850-4a37-96cf-efdfe764987f","log":{"level":"INFO","logger":"plugins.taskManager"},"process":{"pid":41612,"uptime":10.378709308},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:46.236+00:00","message":"CustomBrandingService registering plugin: customBranding","log":{"level":"INFO","logger":"custom-branding-service"},"process":{"pid":41612,"uptime":10.63211802},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:46.670+00:00","message":"Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 22.04 OS. Automatically enabling Chromium sandbox.","log":{"level":"INFO","logger":"plugins.screenshotting.config"},"process":{"pid":41612,"uptime":11.066204686},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:46.949+00:00","message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":41612,"uptime":11.345403043},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:46.969+00:00","message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.security.config"},"process":{"pid":41612,"uptime":11.364694706},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:47.082+00:00","message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.encryptedSavedObjects"},"process":{"pid":41612,"uptime":11.477514739},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:47.187+00:00","message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.actions"},"process":{"pid":41612,"uptime":11.582732414},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:47.198+00:00","message":"Email Service Error: Email connector not specified.","log":{"level":"INFO","logger":"plugins.notifications"},"process":{"pid":41612,"uptime":11.593303719},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:47.388+00:00","message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.alerting"},"process":{"pid":41612,"uptime":11.784366563},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:47.389+00:00","message":"using indexes and aliases for persisting alerts","log":{"level":"INFO","logger":"plugins.alerting"},"process":{"pid":41612,"uptime":11.784768089},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:48.488+00:00","message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.","log":{"level":"WARN","logger":"plugins.reporting.config"},"process":{"pid":41612,"uptime":12.88436123},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:48.490+00:00","message":"Overriding server host address \"0.0.0.0\" in Reporting runtime config, using \"xpack.reporting.kibanaServer.hostname: localhost\".","log":{"level":"INFO","logger":"plugins.reporting.config"},"process":{"pid":41612,"uptime":12.885391444},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:48.963+00:00","message":"Registered task successfully [Task: cloud_security_posture-stats_task]","log":{"level":"INFO","logger":"plugins.cloudSecurityPosture"},"process":{"pid":41612,"uptime":13.35881411},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:50.934+00:00","message":"Registering endpoint:user-artifact-packager task with timeout of [20m], interval of [60s] and policy update batch size of [25]","log":{"level":"INFO","logger":"plugins.securitySolution.endpoint:user-artifact-packager:1.0.0"},"process":{"pid":41612,"uptime":15.33032968},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:51.244+00:00","message":"Server is NOT enabled","log":{"level":"INFO","logger":"plugins.assetManager"},"process":{"pid":41612,"uptime":15.640231937},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:51.572+00:00","message":"Unable to retrieve version information from Elasticsearch nodes. getaddrinfo EAI_AGAIN darshan-elk-elmaster","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":41612,"uptime":15.968117443},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-04-16T07:19:51.883+00:00","message":"Browser executable: /usr/share/kibana/node_modules/@kbn/screenshotting-plugin/chromium/headless_shell-linux_x64/headless_shell","log":{"level":"INFO","logger":"plugins.screenshotting.chromium"},"process":{"pid":41612,"uptime":16.278760884},"trace":{"id":"1143d5ac9be89ca231aa1f10b94cdae3"},"transaction":{"id":"5e07d0f381a5b8c1"}}

Regards,
Darshan Rawal

Hi Rios,

Yes, it's reachable.

Regards,
Darshan Rawal

Is it reachable from the kibana machine?

Run the same curl command you ran on your Elasticsearch machine, but from your Kibana server:

curl -X GET -u elastic:8Qw3_B58rfrMaH+2QXj6 https://darshan-elk-elmaster:9200/ --cacert /path/to/the/ca/file.crt

The error you shared means that Kibana cannot connect to Elasticsearch; it is a network issue. getaddrinfo EAI_AGAIN means that your Kibana host cannot resolve the domain name you used for Elasticsearch.

What is the result of ping darshan-elk-elmaster on your Kibana server? If it cannot resolve the host, it cannot connect to Elasticsearch, and your Kibana will not work.

You need to fix the network issue first.
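For reference (an assumption about the environment, not something confirmed in this thread): EAI_AGAIN is a DNS lookup failure, so if there is no internal DNS serving these hostnames, a static entry in /etc/hosts on the Kibana machine is a common workaround. The IP below is a placeholder; substitute the Elasticsearch host's real address:

```
# /etc/hosts on the Kibana server -- the IP is a placeholder
192.0.2.10   darshan-elk-elmaster
```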

Hi Leandro,

This is the result of the above command; I ran it on the Elasticsearch machine:

{
  "name" : "darshan-elk-elmaster",
  "cluster_name" : "elk-cluster",
  "cluster_uuid" : "o6DPDkRXQMaYzqD2shwMFg",
  "version" : {
    "number" : "8.13.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "16cc90cd2d08a3147ce02b07e50894bc060a4cbf",
    "build_date" : "2024-04-05T14:45:26.420424304Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

And I'm not able to ping the darshan-elk-elmaster hostname.

Regards,
Darshan Rawal

Is this the result of the curl from the Kibana machine? You didn't share which host you ran it on.

Does telnet from the Kibana machine to your Elasticsearch machine on port 9200 work? If it works, you shouldn't see those errors in the Kibana log, so something is not right.

Please test telnet from the Kibana machine to the Elasticsearch machine on port 9200 and share the result.

root@darshan-elk-dashboard:/usr/share# curl -X GET -u elastic:8Qw3_B58rfrMaH+2QXj6 https://darshan-elk-elmaster:9200/ --cacert /etc/kibana/certs/elasticsearch/ca.crt
curl: (6) Could not resolve host: darshan-elk-elmaster

root@darshan-elk-dashboard:/usr/share# telnet 10.101.1.131 9200
Trying 10.101.1.131...
telnet: Unable to connect to remote host: No route to host

root@darshan-elk-dashboard:/usr/share# telnet darshan-elk-elmaster 9200
telnet: could not resolve darshan-elk-elmaster/9200: Temporary failure in name resolution

We are not able to figure it out; we have added the host entry as well, but we still face the same issue.

We are able to ping the Elasticsearch machine's IP from Kibana, but not able to telnet.

If you want, I'll share the config.

Regards,
Darshan Rawal

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elk-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: darshan-elk-elmaster
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: darshan-elk-elmaster
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 15-04-2024 16:13:55
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  certificate: certs/elastic/elastic.crt
  key: certs/elastic/elastic.key
  certificate_authorities: certs/ca/ca.crt
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["darshan-elk-elmaster"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
server.publicBaseUrl: "https://darshan-elk-dashboard:5601"

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
server.name: "darshan-elk-dashboard"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
server.ssl.enabled: true
server.ssl.certificateAuthorities: ["/etc/kibana/certs/elasticsearch/ca.crt"]
server.ssl.certificate: /etc/kibana/certs/kibana/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana/kibana.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["https://darshan-elk-elmaster:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "admin"
#elasticsearch.password: "admin"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/elasticsearch/ca.crt" ]
#elasticsearch.username: "admin"
#elasticsearch.password: "admin"
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

Here are both of our configurations; please have a look. Cluster health is also green.

Regards,
Darshan Rawal

You have a network issue.

Your Kibana server cannot connect to your Elasticsearch server. You need to solve this network issue for things to work; it is unrelated to the Elastic tools.

Your Kibana and Elasticsearch configurations seem to be right, but Kibana will not work until the network issue is fixed.
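One way to narrow this down (a suggestion; the IP is the one from the telnet test earlier in the thread) is curl's `--resolve` option, which pins the hostname to an IP without using DNS, while still validating the TLS certificate against the hostname:

```
# Bypass DNS: map darshan-elk-elmaster:9200 to the known IP for this request only.
curl --resolve darshan-elk-elmaster:9200:10.101.1.131 \
     --cacert /etc/kibana/certs/elasticsearch/ca.crt \
     -u elastic:8Qw3_B58rfrMaH+2QXj6 \
     https://darshan-elk-elmaster:9200/
```

If this succeeds, only name resolution is broken; if it still fails with "no route to host", a firewall or routing problem between the two machines is the more likely cause.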

OK, once we figure it out we will update here.

Leandro, I want to ask one question here: should we send data from Filebeat directly to Elasticsearch (and on to Kibana) with multiple indices for our multiple customers, or should we go with Filebeat to Logstash to Elasticsearch to Kibana?

As far as I know, we can use if/else conditions in both Filebeat and Logstash, e.g. if a message contains a certain string, it gets forwarded to that particular customer's index. Correct?

Thanks & Regards,
Darshan Rawal
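The conditional routing described in the question above can be sketched as a Logstash pipeline (the index names, the "customerA"/"customerB" markers, and the CA path are illustrative placeholders, not from this thread):

```
# Logstash pipeline sketch: route events to per-customer indices
filter {
  if "customerA" in [message] {
    mutate { add_field => { "[@metadata][target_index]" => "customer-a-logs" } }
  } else if "customerB" in [message] {
    mutate { add_field => { "[@metadata][target_index]" => "customer-b-logs" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "default-logs" } }
  }
}

output {
  elasticsearch {
    hosts  => ["https://darshan-elk-elmaster:9200"]
    index  => "%{[@metadata][target_index]}"
    cacert => "/path/to/ca.crt"
  }
}
```

Using `[@metadata]` keeps the routing field out of the stored documents, since metadata fields are not indexed.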

Hello Leandro,

Thank you so much for your great help and your precious time.

Connectivity is established; we are now able to access the dashboard.

Now we want to start creating multiple indices for our multiple customers, so please advise which path we should choose to separate the indices.

Thanks & Regards,
Darshan Rawal