Kibana 7.17

Hello, I'm working on a university project where I need to use the Curator tool. However, it is not compatible with ES 8.11, so I configured a three-node cluster (data_frozen, data_hot, data_cold) instead. The cluster health is green and everything is up, but Kibana won't start normally. I've tried multiple things to make it work, with the same result every time. Also, when I start Kibana, the log file is never created automatically; even when I create it manually in /var/log, nothing is written to it, even after I uncomment the verbosity line. This makes it hard to determine the issue.
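For reference, this is the kind of check behind "the cluster health is green" (the standard _cluster/health API, run against one of my nodes):

curl -s 'http://192.168.50.222:9200/_cluster/health?pretty'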
Here is my kibana.yml file:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.50.222"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.50.222:9200"]
elasticsearch.hosts: ["http://192.168.50.223:9200"]
elasticsearch.hosts: ["http://192.168.50.224:9200"]
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "i changed the pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# You may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
logging.verbose: true

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"

Here is the output when I start Kibana:

[root@elasticnode1 tmp]# systemctl start kibana
[root@elasticnode1 tmp]# systemctl status kibana
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Sat 2024-01-20 13:16:06 EST; 2s ago
     Docs: https://www.elastic.co
  Process: 37249 ExecStart=/usr/share/kibana/bin/kibana --logging.dest=/var/log/kibana/kibana.log --pid.file=/run/kibana/kibana.pid --de>
 Main PID: 37249 (code=exited, status=1/FAILURE)
[root@elasticnode1 tmp]# tail -100f /var/log/kibana/kibana.log
tail: cannot open '/var/log/kibana/kibana.log' for reading: No such file or directory
tail: no files remaining
[root@elasticnode1 tmp]# systemctl status kibana
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2024-01-20 13:16:21 EST; 22s ago
     Docs: https://www.elastic.co
  Process: 37277 ExecStart=/usr/share/kibana/bin/kibana --logging.dest=/var/log/kibana/kibana.log --pid.file=/run/kibana/kibana.pid --de>
 Main PID: 37277 (code=exited, status=1/FAILURE)

Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Service RestartSec=3s expired, scheduling restart.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: Stopped Kibana.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Start request repeated too quickly.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Failed with result 'exit-code'.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: Failed to start Kibana.

The OS is Red Hat:

NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"

Hi @Youssef_Shehadeh,

Welcome! Judging by the error messages you shared, the logs are not in the location you are tailing.

Can you share the output of the logs? As per the documentation, since you are starting Kibana with systemctl, you may need to use the command below to access the logs:

journalctl -u kibana.service
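If the output is long, standard journalctl flags can narrow it down, e.g. following new entries live or limiting the view to the current boot:

journalctl -u kibana.service -f             # follow new log lines as they arrive
journalctl -u kibana.service -b --no-pager  # only entries from the current boot, without the pager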

Hope that helps.

Thank you for your note. Here is the output of the command:

Jan 20 13:16:15 elasticnode1.localdomain kibana[37277]: Kibana is currently running with legacy OpenSSL providers enabled! For details a>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]: FATAL CLI ERROR YAMLException: duplicated mapping key at line 33, column 1:
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     elasticsearch.hosts: ["http://19 ...
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     ^
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at generateError (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at throwError (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.js:>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at storeMappingPair (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/load>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at readBlockMapping (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/load>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at composeNode (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.js>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at readDocument (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.j>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at loadDocuments (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at load (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.js:1614:1>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at safeLoad (/usr/share/kibana/node_modules/js-yaml/lib/js-yaml/loader.js:16>
Jan 20 13:16:18 elasticnode1.localdomain kibana[37277]:     at readYaml (/usr/share/kibana/node_modules/@kbn/config/target_node/raw/read>
Jan 20 13:16:18 elasticnode1.localdomain systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 13:16:18 elasticnode1.localdomain systemd[1]: kibana.service: Failed with result 'exit-code'.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Service RestartSec=3s expired, scheduling restart.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Scheduled restart job, restart counter is at 3.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: Stopped Kibana.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Start request repeated too quickly.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: kibana.service: Failed with result 'exit-code'.
Jan 20 13:16:21 elasticnode1.localdomain systemd[1]: Failed to start Kibana.

I noticed where the error is, but I used the same configuration on ES 8.11 and no error was triggered:

FATAL CLI ERROR YAMLException: duplicated mapping key at line 33, column 1:

Hi @Youssef_Shehadeh,

Looking at the error above, you have duplicate entries for elasticsearch.hosts, which I can also see in your configuration:

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.50.222:9200"]
elasticsearch.hosts: ["http://192.168.50.223:9200"]
elasticsearch.hosts: ["http://192.168.50.224:9200"]

You need to change your config to a single elasticsearch.hosts property with all hosts in one array:

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.50.222:9200", "http://192.168.50.223:9200", "http://192.168.50.224:9200"]

Try that out and let us know if that resolves your issue.
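As a quick sanity check before restarting, you can list every occurrence of the key with plain grep, and even ask the same js-yaml parser that raised the error to parse the file; the paths below assume the default RPM layout (/etc/kibana/kibana.yml and the Node binary bundled with Kibana):

grep -nE '^[[:space:]]*elasticsearch\.hosts' /etc/kibana/kibana.yml

/usr/share/kibana/node/bin/node -e "
const yaml = require('/usr/share/kibana/node_modules/js-yaml');
const fs = require('fs');
// safeLoad throws a YAMLException on duplicated mapping keys, just as Kibana does at startup
yaml.safeLoad(fs.readFileSync('/etc/kibana/kibana.yml', 'utf8'));
console.log('kibana.yml parses OK');
"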

Thank you, the problem is solved.

Sorry, but I have a few questions. Can I ask them here?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.