Kibana not starting & Cluster Status Yellow

Hi guys,

Could you help me figure out why my Kibana is not starting? When I check the cluster status, it is yellow.

Collected from the Elasticsearch log:
[2024-01-14T22:53:17,827][INFO ][o.e.i.m.MapperService ] [Desktop] [.kibana-observability-ai-assistant-conversations-000001] reloading search analyzers
[2024-01-14T22:53:17,866][INFO ][o.e.i.m.MapperService ] [Desktop] [.ds-.kibana-event-log-ds-2024.01.11-000001] reloading search analyzers
[2024-01-14T22:53:17,890][INFO ][o.e.i.m.MapperService ] [Desktop] [.slo-observability.summary-v2] reloading search analyzers
[2024-01-14T22:53:18,156][INFO ][o.e.i.m.MapperService ] [Desktop] [.apm-source-map] reloading search analyzers
[2024-01-14T22:53:18,257][INFO ][o.e.c.r.a.AllocationService] [Desktop] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.apm-source-map][0]]])." previous.health="RED" reason="shards started [[.apm-source-map][0]]"
[2024-01-14T22:54:10,102][INFO ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][65] overhead, spent [435ms] collecting in the last [1s]
[2024-01-14T22:54:15,683][WARN ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][70] overhead, spent [998ms] collecting in the last [1.4s]
[2024-01-14T22:54:51,030][INFO ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][105] overhead, spent [329ms] collecting in the last [1s]
[2024-01-14T22:58:08,699][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.elastic_agent-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-12T11:56:33.000Z] to [2024-01-15T09:03:07.000Z]
[2024-01-14T22:58:08,754][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.fleet_server-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-12T11:56:33.000Z] to [2024-01-15T09:03:07.000Z]
[2024-01-14T23:00:11,580][INFO ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][422] overhead, spent [402ms] collecting in the last [1s]
[2024-01-14T23:03:07,635][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.elastic_agent-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:03:07.000Z] to [2024-01-15T09:08:07.000Z]
[2024-01-14T23:03:07,692][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.fleet_server-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:03:07.000Z] to [2024-01-15T09:08:07.000Z]
[2024-01-14T23:05:03,921][WARN ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][young][711][26] duration [1.7s], collections [1]/[1.9s], total [1.7s]/[3.8s], memory [1.4gb]->[169.1mb]/[2.4gb], all_pools {[young] [1.2gb]->[0b]/[0b]}{[old] [129.9mb]->[129.9mb]/[2.4gb]}{[survivor] [31.9mb]->[39.2mb]/[0b]}
[2024-01-14T23:05:03,963][WARN ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][711] overhead, spent [1.7s] collecting in the last [1.9s]
[2024-01-14T23:05:09,617][WARN ][o.e.m.j.JvmGcMonitorService] [Desktop] [gc][715] overhead, spent [2.5s] collecting in the last [2.6s]
[2024-01-14T23:08:07,566][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.elastic_agent-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:08:07.000Z] to [2024-01-15T09:13:07.000Z]
[2024-01-14T23:08:07,568][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.fleet_server-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:08:07.000Z] to [2024-01-15T09:13:07.000Z]
[2024-01-14T23:13:07,492][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.elastic_agent-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:13:07.000Z] to [2024-01-15T09:18:07.000Z]
[2024-01-14T23:13:07,526][INFO ][o.e.c.s.IndexScopedSettings] [Desktop] [.ds-metrics-elastic_agent.fleet_server-default-2024.01.11-000001] updating [index.time_series.end_time] from [2024-01-15T09:13:07.000Z] to [2024-01-15T09:18:07.000Z]

Hi,

To investigate the issue, run the following request:
GET _cluster/allocation/explain?filter_path=index,shard,primary,**.node_name,**.node_decision,**.decider,**.decision,**.*explanation,**.unassigned_info,**.*delay
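
You can also check the overall cluster health and list any unassigned shards. For example (both are standard APIs, shown here in Dev Tools console syntax):

GET _cluster/health

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state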

Also, the logs you provided indicate that your Elasticsearch JVM is spending a significant amount of time on garbage collection. Try increasing the heap size allocated to Elasticsearch.
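
To confirm how much heap the node currently has and how full it is, you could run something like:

GET _nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.mem.heap_max_in_bytes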

Regards

Hi Yago,

To investigate the issue, can you explain how to do it with a GUI? I'm running on Windows.

Otherwise, to increase the allocated heap size, I edit the file in the config folder --> jvm.options, right? The defaults are commented out as ## -Xms4g and ## -Xmx4g.

Regards

Hi,

You can run it in Kibana's Dev Tools console, for example.

For the heap, yes, you're correct. To increase the heap size allocated to Elasticsearch, edit the jvm.options file. The maximum heap size should not be set to more than 50% of your machine's total RAM, and it should not exceed 32 GB.
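
For example, on a machine with 8 GB of RAM a reasonable starting point might look like this (the values are only an illustration; on recent versions you can also put them in a separate .options file under config\jvm.options.d\ instead of editing jvm.options directly):

-Xms4g
-Xmx4g

Keep -Xms and -Xmx at the same value so the heap is not resized at runtime, and restart Elasticsearch after changing it.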

Regards
