Kibana server is not ready yet

Good day,

I'm new to ELK and its features. I recently took over my predecessor's ELK server, and it just started giving issues today.
"Kibana server is not ready yet" is what it displays.

server.port: 5601
server.host: "server-ip"
elasticsearch.hosts: ["http://server-ip:9200 24"]

Both my Elasticsearch and Kibana services are running.

Any assistance would be greatly appreciated.

Welcome to our community! :smiley:

What do the Kibana logs show?
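If you're not sure where to find them: on a package install, Kibana writes to the file set by logging.dest in kibana.yml, or to journald when it runs under systemd. Assuming the default unit name, something like this should pull up the recent entries:

journalctl -u kibana.service --since "15 minutes ago"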

Hi Warkolm,

Thank you :grin:

My logs show the below:

{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["info","http","server","Preboot"],"pid":111350,"message":"http server running at http://172.16.60.28:5601"}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["warning","config","deprecation"],"pid":111350,"message":"\"logging.dest\" has been deprecated and will be removed in 8.0. To set the destination moving forward, you can use the \"console\" appender in your logging configuration or define a custom one."}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["warning","config","deprecation"],"pid":111350,"message":"Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format."}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["warning","config","deprecation"],"pid":111350,"message":"The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set \"xpack.reporting.roles.enabled\" to \"false\" to adopt the future behavior before upgrading."}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["warning","config","deprecation"],"pid":111350,"message":"User sessions will automatically time out after 8 hours of inactivity starting in 8.0. Override this value to change the timeout."}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["warning","config","deprecation"],"pid":111350,"message":"Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout."}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["info","plugins-system","standard"],"pid":111350,"message":"Setting up [113] plugins: [translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]"}
{"type":"log","@timestamp":"2022-07-20T08:39:26+02:00","tags":["info","plugins","taskManager"],"pid":111350,"message":"TaskManager is identified by the Kibana UUID: 848f9ee5-4380-436a-9b88-7a7c04cdf970"}
{"type":"log","@timestamp":"2022-07-20T08:39:27+02:00","tags":["warning","plugins","security","config"],"pid":111350,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2022-07-20T08:39:27+02:00","tags":["warning","plugins","security","config"],"pid":111350,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2022-07-20T08:39:27+02:00","tags":["warning","plugins","reporting","config"],"pid":111350,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2022-07-20T08:39:27+02:00","tags":["info","plugins","ruleRegistry"],"pid":111350,"message":"Installing common resources shared between all indices"}
{"type":"log","@timestamp":"2022-07-20T08:39:28+02:00","tags":["info","plugins","reporting","config"],"pid":111350,"message":"Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"Starting saved objects migrations"}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana] INIT -> WAIT_FOR_YELLOW_SOURCE. took: 22ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] INIT -> WAIT_FOR_YELLOW_SOURCE. took: 30ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana] WAIT_FOR_YELLOW_SOURCE -> CHECK_UNKNOWN_DOCUMENTS. took: 19ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] WAIT_FOR_YELLOW_SOURCE -> CHECK_UNKNOWN_DOCUMENTS. took: 10ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana] CHECK_UNKNOWN_DOCUMENTS -> SET_SOURCE_WRITE_BLOCK. took: 11ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] CHECK_UNKNOWN_DOCUMENTS -> SET_SOURCE_WRITE_BLOCK. took: 10ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana] SET_SOURCE_WRITE_BLOCK -> CALCULATE_EXCLUDE_FILTERS. took: 9ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] SET_SOURCE_WRITE_BLOCK -> CALCULATE_EXCLUDE_FILTERS. took: 11ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana] CALCULATE_EXCLUDE_FILTERS -> CREATE_REINDEX_TEMP. took: 17ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] CALCULATE_EXCLUDE_FILTERS -> CREATE_REINDEX_TEMP. took: 15ms."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["error","savedobjects-service"],"pid":111350,"message":"[.kibana] Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_7.17.5_reindex_temp?wait_for_active_shards=all&timeout=60s error: [validation_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;,"}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["fatal","root"],"pid":111350,"message":"Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_7.17.5_reindex_temp?wait_for_active_shards=all&timeout=60s error: [validation_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;,\n    at migrationStateActionMachine (/usr/share/kibana/src/core/server/saved_objects/migrationsv2/migrations_state_action_machine.js:164:13)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at async Promise.all (index 0)\n    at SavedObjectsService.start (/usr/share/kibana/src/core/server/saved_objects/saved_objects_service.js:181:9)\n    at Server.start (/usr/share/kibana/src/core/server/server.js:330:31)\n    at Root.start (/usr/share/kibana/src/core/server/root/index.js:69:14)\n    at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:120:5)\n    at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:229:5)"}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","plugins-system","standard"],"pid":111350,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":111350,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["error","savedobjects-service"],"pid":111350,"message":"[.kibana_task_manager] Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_task_manager_7.17.5_reindex_temp?wait_for_active_shards=all&timeout=60s error: [validation_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;,"}

This attribute is very sensitive. I was facing this error before and noticed a few extra characters at the end of the URL.
Just make sure that your URL is correct and accessible:

elasticsearch.hosts: ["http://es1.fqdn:9200/","http://es2.fqdn:9200/"]
or
elasticsearch.hosts: ["https://es1.fqdn:9200/","https://es2.fqdn:9200/"]

Hi Swchandu,

My URL is:

elasticsearch.hosts: ["http://server_ip:9200"]

The URL shows the below when I open it:

{
  "name" : "srv-elk",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "I20YQs2OT_qgZZ0ItZCmHw",
  "version" : {
    "number" : "7.17.5",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "8d61b4f7ddf931f219e3745f295ed2bbc50c8e84",
    "build_date" : "2022-06-23T21:57:28.736740635Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["error","savedobjects-service"],"pid":111350,"message":"[.kibana] Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_7.17.5_reindex_temp?wait_for_active_shards=all&timeout=60s error: [validation_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;,"}
{"type":"log","@timestamp":"2022-07-20T08:39:29+02:00","tags":["fatal","root"],"pid":111350,"message":"Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Unexpected Elasticsearch ResponseError: statusCode: 400, method: PUT, url: /.kibana_7.17.5_reindex_temp?wait_for_active_shards=all&timeout=60s error: [validation_exception]: Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open;,\n at migrationStateActionMachine (/usr/share/kibana/src/core/server/saved_objects/migrationsv2/migrations_state_action_machine.js:164:13)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Promise.all (index 0)\n at SavedObjectsService.start (/usr/share/kibana/src/core/server/saved_objects/saved_objects_service.js:181:9)\n at Server.start (/usr/share/kibana/src/core/server/server.js:330:31)\n at Root.start (/usr/share/kibana/src/core/server/root/index.js:69:14)\n at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:120:5)\n at Command. (/usr/share/kibana/src/cli/serve/serve.js:229:5)"}

You would need to increase the maximum shards per node.
Try increasing it to any value above 1000 + 4 (the Kibana and Kibana task manager migrations need 4 more shards between them).
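First, it's worth confirming how close the cluster actually is to the limit. Assuming you can curl the cluster directly, _cluster/health reports the shard counts (active_shards plus unassigned_shards is roughly what counts against the limit):

curl -s 'http://server_ip:9200/_cluster/health?pretty'

Then raise the limit: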

PUT /_cluster/settings/
{
  "persistent" : {
    "cluster" : {
      "max_shards_per_node" : "1500"
    }
  }
}
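Afterwards you can verify that the setting took effect:

curl -s 'http://server_ip:9200/_cluster/settings?pretty'

Bear in mind that raising max_shards_per_node is a stopgap; 999 shards on what looks like a single-node cluster suggests a lot of small indices that could be deleted or shrunk so you don't hit the new ceiling later.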

Hi Swchandu,

Apologies, this is very noobish of me, but how do I increase it?
My only access to this server is through the Linux terminal. Which file do I edit to add what you mentioned?

It is running on Ubuntu Server 20.04 LTS.

You can do it with curl, likely something like this:

curl -X PUT -k -H 'Content-Type: application/json' -u elastic-id-or-similar https://elastic.fqdn:9200/_cluster/settings/ -d '
{
  "persistent" : {
    "cluster" : {
      "max_shards_per_node" : "1500"
    }
  }
}
'

You will be prompted for the password.
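If your cluster doesn't have security enabled (yours answers over plain http without credentials), you can likely drop the -k and -u flags and use http instead:

curl -X PUT -H 'Content-Type: application/json' http://server_ip:9200/_cluster/settings/ -d '
{
  "persistent" : {
    "cluster" : {
      "max_shards_per_node" : "1500"
    }
  }
}
'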

Hi Swchandu,

This made our ELK server work again.

Thank you for the assistance with this. :smiley:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.