Kibana server is not ready yet

Hi all. I have been running ELK (version 7.12.0) for a few months now. I tried to log in today with the username elastic and my browser's saved password, and it would not let me in. I restarted the server, but now I cannot get to the login screen at all. I just get 'Kibana server is not ready yet' or sometimes
I have tried restarting both services (elasticsearch and kibana); both are confirmed running with:

sudo service kibana status
sudo service elasticsearch status

The only thing I can think of is that 95% of the disk is in use.
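
Disk usage on the node, and as Elasticsearch itself sees it, can be double-checked with something like the following (localhost:9200 and the default package-install data path are assumptions; add -u elastic:<password> if security is enabled):

df -h /var/lib/elasticsearch
curl "localhost:9200/_cat/allocation?v"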

{"type":"log","@timestamp":"2021-07-26T12:04:18+00:00","tags":["info","plugins-system"],"pid":23806,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-07-26T12:04:18+00:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":23806,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-07-26T12:04:18+00:00","tags":["info","savedobjects-service"],"pid":23806,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-07-26T12:04:18+00:00","tags":["info","savedobjects-service"],"pid":23806,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE"}
{"type":"log","@timestamp":"2021-07-26T12:04:18+00:00","tags":["info","savedobjects-service"],"pid":23806,"message":"[.kibana_task_manager] Migration completed after 770ms"}
{"type":"log","@timestamp":"2021-07-26T12:04:47+00:00","tags":["warning","plugins","licensing"],"pid":23806,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-07-26T12:04:48+00:00","tags":["warning","plugins-system"],"pid":23806,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}
{"type":"log","@timestamp":"2021-07-26T12:11:20+00:00","tags":["warning","environment"],"pid":"25411","path":"/run/kibana/kibana.pid","message":"pid file already exists at /run/kibana/kibana.pid"}
{"type":"log","@timestamp":"2021-07-26T13:30:01+01:00","tags":["fatal","root"],"pid":4736,"message":"Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Error: [cluster_block_exception]: index [.kibana_7.12.0_001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];\n    at migrationStateActionMachine (/usr/share/kibana/src/core/server/saved_objects/migrationsv2/migrations_state_action_machine.js:139:13)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)\n    at async Promise.all (index 0)\n    at SavedObjectsService.start (/usr/share/kibana/src/core/server/saved_objects/saved_objects_service.js:163:7)\n    at Server.start (/usr/share/kibana/src/core/server/server.js:283:31)\n    at Root.start (/usr/share/kibana/src/core/server/root/index.js:58:14)\n    at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:100:5)\n    at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:169:5)"}
{"type":"log","@timestamp":"2021-07-26T13:30:01+01:00","tags":["info","plugins-system"],"pid":4736,"message":"Stopping all plugins."}
{"type":"log","@timestamp":"2021-07-26T13:30:01+01:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":4736,"message":"Monitoring stats collection is stopped"}
{"type":"log","@timestamp":"2021-07-26T13:30:01+01:00","tags":["info","savedobjects-service"],"pid":4736,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
{"type":"log","@timestamp":"2021-07-26T13:30:02+01:00","tags":["info","savedobjects-service"],"pid":4736,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE"}
{"type":"log","@timestamp":"2021-07-26T13:30:02+01:00","tags":["info","savedobjects-service"],"pid":4736,"message":"[.kibana_task_manager] Migration completed after 677ms"}

What do I need to do next to diagnose why it isn't working?

Can you share the full log output please? Thank you!

How do you want this? Kibana.log is 6 GB.

":{"term":{"migrationVersion.search-telemetry":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"visualization"}},"must_not":{"term":{"migrationVersion.visualization":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"canvas-workpad"}},"must_not":{"term":{"migrationVersion.canvas-workpad":"7.0.0"}}}},{"bool":{"must":{"term":{"type":"graph-workspace"}},"must_not":{"term":{"migrationVersion.graph-workspace":"7.11.0"}}}},{"bool":{"must":{"term":{"type":"dashboard"}},"must_not":{"term":{"migrationVersion.dashboard":"7.11.0"}}}},{"bool":{"must":{"term":{"type":"search"}},"must_not":{"term":{"migrationVersion.search":"7.9.3"}}}},{"bool":{"must":{"term":{"type":"space"}},"must_not":{"term":{"migrationVersion.space":"6.6.0"}}}},{"bool":{"must":{"term":{"type":"map"}},"must_not":{"term":{"migrationVersion.map":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"lens"}},"must_not":{"term":{"migrationVersion.lens":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"exception-list-agnostic"}},"must_not":{"term":{"migrationVersion.exception-list-agnostic":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"exception-list"}},"must_not":{"term":{"migrationVersion.exception-list":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"ingest_manager_settings"}},"must_not":{"term":{"migrationVersion.ingest_manager_settings":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"fleet-agents"}},"must_not":{"term":{"migrationVersion.fleet-agents":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"fleet-agent-actions"}},"must_not":{"term":{"migrationVersion.fleet-agent-actions":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"fleet-agent-events"}},"must_not":{"term":{"migrationVersion.fleet-agent-events":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"ingest-agent-policies"}},"must_not":{"term":{"migrationVersion.ingest-agent-policies":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"fleet-enrollment-api-keys"}},"must_not":{"term":{"migrationVersion.fleet-enrollment-api-keys":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"ingest-package-policies"}},"must_not":{"term":{"migrationVersion.ingest-package-policies":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"action"}},"must_not":{"term":{"migrationVersion.action":"7.11.0"}}}},{"bool":{"must":{"term":{"type":"alert"}},"must_not":{"term":{"migrationVersion.alert":"7.11.2"}}}},{"bool":{"must":{"term":{"type":"ml-job"}},"must_not":{"term":{"migrationVersion.ml-job":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"siem-detection-engine-rule-actions"}},"must_not":{"term":{"migrationVersion.siem-detection-engine-rule-actions":"7.11.2"}}}},{"bool":{"must":{"term":{"type":"endpoint:user-artifact-manifest"}},"must_not":{"term":{"migrationVersion.endpoint:user-artifact-manifest":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"cases-comments"}},"must_not":{"term":{"migrationVersion.cases-comments":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"cases-configure"}},"must_not":{"term":{"migrationVersion.cases-configure":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"cases"}},"must_not":{"term":{"migrationVersion.cases":"7.12.0"}}}},{"bool":{"must":{"term":{"type":"cases-user-actions"}},"must_not":{"term":{"migrationVersion.cases-user-actions":"7.10.0"}}}},{"bool":{"must":{"term":{"type":"infrastructure-ui-source"}},"must_not":{"term":{"migrationVersion.infrastructure-ui-source":"7.9.0"}}}}]}},"retryCount":0,"retryDelay":0,"logs":[],"sourceIndex":{"_tag":"None"},"targetIndex":".kibana_7.12.0_001","versionIndexReadyActions":{"_tag":"None"},"outdatedDocuments":[],"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS"}
{"type":"log","@timestamp":"2021-07-26T18:57:28+01:00","tags":["warning","plugins","licensing"],"pid":86006,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
{"type":"log","@timestamp":"2021-07-26T18:57:28+01:00","tags":["warning","plugins-system"],"pid":86006,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}

Right, I don't know how, but it is back on. I expanded the drive; for the first 15 minutes that had no impact, then all of a sudden it came back on.
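
That delay matches how the flood-stage block behaves: once disk usage drops back below the high watermark, Elasticsearch (7.4 and later) lifts the read-only block on its own, and the disk check only runs periodically. The watermark thresholds can be inspected with something like this (localhost:9200 assumed, auth as needed):

curl "localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*"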

One of these indices was about 700-800 GB last time I checked. I am assuming it is the red one. Is this the cause of my issue? I thought I had set it to delete after 7 days.
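
Index sizes, health, and whether a lifecycle policy is actually attached can be checked with something like this (the winlogbeat-* pattern is an assumption based on the beats mentioned in the next post):

curl "localhost:9200/_cat/indices?v&s=store.size:desc"
curl "localhost:9200/winlogbeat-*/_ilm/explain"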

So I went ahead and deleted that index, which freed up circa 600 GB, but I can now see a new winlogbeat index growing again at a really fast rate (500 MB after 10 minutes). I only have my DCs sending their logs across (5 DCs).
I have added a lifecycle policy, so we will see what happens.
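
For anyone else doing the same, a minimal delete-after-7-days policy looks roughly like this (the policy name winlogbeat-7d is made up, and the policy still has to be referenced from the index template, e.g. via winlogbeat setup, before new indices pick it up):

curl -XPUT "localhost:9200/_ilm/policy/winlogbeat-7d" -H 'Content-Type: application/json' -d '
{
  "policy": {
    "phases": {
      "hot": { "actions": { "rollover": { "max_age": "1d", "max_size": "50gb" } } },
      "delete": { "min_age": "7d", "actions": { "delete": {} } }
    }
  }
}'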

Is that the disk that Elasticsearch is on? If so, that'd likely be causing issues, and you will want to free up some space.
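
Once space is freed, the block normally clears on its own in recent 7.x releases, but if an index stays read-only it can be removed manually with something like this (this hits all indices; scope it to specific ones if you prefer):

curl -XPUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'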

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.