Why does my ELK stack keep reporting the error {"statusCode":503,"error":"Service Unavailable","message":"License is not available."}?

I set up an ELK system, but it often runs into an error. The Kibana page reports: {"statusCode":503,"error":"Service Unavailable","message":"License is not available."}
I don't know how to find the cause, so I restarted the Elasticsearch service on all 3 nodes of the cluster. After doing that I can open the page and log in to Elastic, but not long afterwards the error occurs again.
I am new to ELK and to IT in general, so I don't know how to find the reason and solve this problem.
The node logs:
node1:

[2023-05-08T15:21:10,853][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999884}{false}{false}{false}] of size [541] on [Netty4TcpChannel{localAddress=/10.8.100.111:55868, remoteAddress=10.8.100.112/10.8.100.112:9300, profile=default}] took [26960ms] which is above the warn threshold of [5000ms] with success [true]
[2023-05-08T15:21:10,853][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999885}{false}{false}{false}] of size [541] on [Netty4TcpChannel{localAddress=/10.8.100.111:39124, remoteAddress=10.8.100.113/10.8.100.113:9300, profile=default}] took [26960ms] which is above the warn threshold of [5000ms] with success [true]
[2023-05-08T15:21:15,354][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999890}{false}{false}{false}] of size [533] on [Netty4TcpChannel{localAddress=/10.8.100.111:55856, remoteAddress=10.8.100.112/10.8.100.112:9300, profile=default}] took [31459ms] which is above the warn threshold of [5000ms] with success [true]
[2023-05-08T15:21:33,259][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3641}{8.1.0}{11999196}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@797269d2/org.elasticsearch.action.support.RetryableAction$RetryingListener@3f48ee1] took [35722ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:21:33,259][WARN ][o.e.h.AbstractHttpServerTransport] [es01] handling request [unknownId][GET][/.kibana_8.1.2/_doc/telemetry%3Atelemetry][Netty4HttpChannel{localAddress=/10.8.100.111:9200, remoteAddress=/10.8.100.111:33146}] took [31272ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:21:55,872][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3648}{8.1.0}{11998440}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@a798159/org.elasticsearch.action.support.RetryableAction$RetryingListener@436c13d5] took [45000ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:22:00,558][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3648}{8.1.0}{11995646}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@1a3a1101/org.elasticsearch.action.support.RetryableAction$RetryingListener@66997703] took [63028ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:21:55,878][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [es01] GC did not bring memory usage down, before [25094589056], after [25100368760], allocations [1], duration [76482]
[2023-05-08T15:22:05,107][WARN ][o.e.h.AbstractHttpServerTransport] [es01] handling request [unknownId][POST][/.kibana_8.1.2/_search?rest_total_hits_as_int=true][Netty4HttpChannel{localAddress=/10.8.100.111:9200, remoteAddress=/10.8.100.111:33158}] took [85712ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:22:14,359][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [es01] attempting to trigger G1GC due to high heap usage [25097712560]
[2023-05-08T15:22:28,479][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3642}{8.1.0}{11999211}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@64cb2a01/org.elasticsearch.action.support.RetryableAction$RetryingListener@14034386] took [23361ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:23:31,575][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3648}{8.1.0}{11995700}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@3e400c01/org.elasticsearch.action.support.RetryableAction$RetryingListener@94ad106] took [86456ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:23:31,581][WARN ][o.e.t.InboundHandler     ] [es01] handling response [InboundMessage{Header{3650}{8.1.0}{11999229}{false}{true}{false}{false}{NO_ACTION_NAME_FOR_RESPONSES}}] on handler [org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler/org.elasticsearch.transport.TransportService$4/[indices:admin/seq_no/retention_lease_background_sync[r]]:org.elasticsearch.action.ActionListenerResponseHandler@1de0b5c1/org.elasticsearch.action.support.RetryableAction$RetryingListener@4dcfdc31] took [63101ms] which is above the warn threshold of [5000ms]
[2023-05-08T15:23:31,581][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999894}{false}{false}{false}] of size [531] on [Netty4TcpChannel{localAddress=/10.8.100.111:55834, remoteAddress=10.8.100.112/10.8.100.112:9300, profile=default}] took [154045ms] which is above the warn threshold of [5000ms] with success [true]
[2023-05-08T15:23:31,581][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999897}{false}{false}{false}] of size [536] on [Netty4TcpChannel{localAddress=/10.8.100.111:39082, remoteAddress=10.8.100.113/10.8.100.113:9300, profile=default}] took [140716ms] which is above the warn threshold of [5000ms] with success [true]
[2023-05-08T15:23:31,581][WARN ][o.e.t.OutboundHandler    ] [es01] sending transport message [Request{indices:admin/seq_no/retention_lease_background_sync[r]}{11999932}{false}{false}{false}] of size [537] on [Netty4TcpChannel{localAddress=/10.8.100.111:39082, remoteAddress=10.8.100.113/10.8.100.113:9300, profile=default}] took [77222ms] which is above the warn threshold of [5000ms] with success [true]

node2:


[2023-05-08T15:14:26,358][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-switch-2022.06.17-2022.07.16-000002][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,364][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-switch-2022.07.27-2022.07.26-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,367][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-msitelogs-log-2023.02.19-2023.02.18-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,370][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-switch-2022.10.16-2022.10.15-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,373][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-firewall-2022.05.26-2022.07.04-000002][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,377][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-msitelogs-host-proctop10-2022.11.12-2022.11.11-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,382][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.08.22-2022.08.21-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,385][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-msitelogs-host-pingconnect-2023.03.18-2023.03.17-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,391][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-msitelogs-ora-resources-2022.10.25-2022.10.24-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,396][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.04.17-2022.07.16-000007][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,399][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.08.16-2022.08.15-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,404][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-msitelogs-host-diskusage-2022.12.13-2022.12.12-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,425][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.07.28-2022.07.27-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,433][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.06.10-2022.07.09-000002][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,439][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.10.05-2022.10.04-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,442][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.09.18-2022.09.17-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,446][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.10.05-2022.10.04-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,448][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.10.09-2022.10.08-000001][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,452][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.09.12-2022.09.11-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,457][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.10.30-2022.10.29-000001][1] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,459][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.08.16-2022.08.15-000001][1] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,462][INFO ][o.e.i.s.IndexShard       ] [es02] [.apm-custom-link][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,468][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.09.16-2022.09.15-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,493][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.05.27-2022.07.04-000002][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,495][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-balance-2022.07.09-2022.07.08-000001][1] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,504][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-ac-2022.05.19-2022.07.04-000002][1] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,508][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-vpn-2022.09.05-2022.09.04-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,510][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-switch-2022.07.13-2022.07.12-000001][1] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,517][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-netlogs-bgw-wireless-2022.10.14-2022.10.13-000001][2] primary-replica resync completed with 0 operations
[2023-05-08T15:14:26,798][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-metrics-system.socket_summary-default-2023.05.06-000026][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:27,081][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-metrics-system.network-default-2023.05.06-000026][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:27,280][INFO ][o.e.i.s.IndexShard       ] [es02] [.ds-metrics-system.process-default-2023.05.06-000026][0] primary-replica resync completed with 0 operations
[2023-05-08T15:14:34,716][INFO ][o.e.x.t.t.TransformTask  ] [es02] [endpoint.metadata_united-default-8.2.0] updating state for transform to [{"task_state":"started","indexer_state":"stopped","checkpoint":3052715,"progress":{"docs_indexed":0,"docs_processed":0},"should_stop_at_checkpoint":false}].
[2023-05-08T15:14:35,239][INFO ][o.e.x.t.t.TransformPersistentTasksExecutor] [es02] [endpoint.metadata_united-default-8.2.0] successfully completed and scheduled task in node operation
[2023-05-08T15:22:36,583][INFO ][o.e.m.j.JvmGcMonitorService] [es02] [gc][15880] overhead, spent [277ms] collecting in the last [1s]
[2023-05-08T15:22:46,600][INFO ][o.e.m.j.JvmGcMonitorService] [es02] [gc][15890] overhead, spent [323ms] collecting in the last [1s]
[2023-05-08T15:23:59,833][INFO ][o.e.m.j.JvmGcMonitorService] [es02] [gc][15963] overhead, spent [289ms] collecting in the last [1s]
[2023-05-08T15:24:45,033][INFO ][o.e.m.j.JvmGcMonitorService] [es02] [gc][16008] overhead, spent [351ms] collecting in the last [1s]
[2023-05-08T15:24:57,101][INFO ][o.e.m.j.JvmGcMonitorService] [es02] [gc][16020] overhead, spent [288ms] collecting in the last [1s]

node 3:

root@es03:~# tail -n 50 /var/log/elasticsearch/elasticsearch.log
[2023-05-08T15:25:24,578][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.10.30-2022.10.29-000001][0] marking unavailable shards as stale: [7Xn-ofMTQBe61FbuoFojiw]
[2023-05-08T15:25:25,763][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-firewall-2022.10.30-2022.10.29-000001][1] marking unavailable shards as stale: [berh2VzURQiUioAlQACU8g]
[2023-05-08T15:25:25,764][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-service-2022.12.09-2022.12.08-000001][0] marking unavailable shards as stale: [yi8tcFO5SbaEjLBGT7IR4Q]
[2023-05-08T15:25:26,397][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-service-2022.10.30-2022.10.29-000001][0] marking unavailable shards as stale: [BPnQdtxHTYS5rHaD8AN_ew]
[2023-05-08T15:25:27,653][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.12.09-2022.12.08-000001][0] marking unavailable shards as stale: [Sdf7SWHuTeaqJOfpTigDcQ]
[2023-05-08T15:25:28,940][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-proctop10-2022.10.30-2022.10.29-000001][0] marking unavailable shards as stale: [yF84pnG0Sk2bJTt-cZrjyQ]
[2023-05-08T15:25:28,941][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-diskusage-2022.10.30-2022.10.29-000001][0] marking unavailable shards as stale: [XW43vM1VRsGR6saYINO5ng]
[2023-05-08T15:25:28,941][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-diskusage-2022.12.09-2022.12.08-000001][0] marking unavailable shards as stale: [ewvBAhZjRI2ayW8h7h1rRg]
[2023-05-08T15:25:29,988][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-log-2022.12.09-2022.12.08-000001][0] marking unavailable shards as stale: [t_EIvK5eRdiM0ScXtZyBIw]
[2023-05-08T15:25:30,973][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-vpn-2022.10.30-2022.10.29-000001][1] marking unavailable shards as stale: [PUPLin3bT7OeBLr39L5suw]
[2023-05-08T15:25:30,974][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-switch-2022.10.30-2022.10.29-000001][0] marking unavailable shards as stale: [kle8hwA-RYW3zju_pTWHlg]
[2023-05-08T15:25:30,974][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.12.08-2022.12.07-000001][0] marking unavailable shards as stale: [v7wzOigERimEgKcC74xbnA]
[2023-05-08T15:25:31,961][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-ora-resources-2022.12.06-2022.12.05-000001][0] marking unavailable shards as stale: [KEHUysFcTleGuJu-DQhh4A]
[2023-05-08T15:25:32,476][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-balance-2022.10.29-2022.10.28-000001][0] marking unavailable shards as stale: [tkrJY7mWSIquQLlL0CGQZA]
[2023-05-08T15:25:33,592][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-wireless-2022.10.30-2022.10.29-000001][2] marking unavailable shards as stale: [lwz1D-g5StSkeLAFL4iwfw]
[2023-05-08T15:25:33,593][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.12.06-2022.12.05-000001][0] marking unavailable shards as stale: [rIW97mGiRGGfnR7Cs3k4Sg]
[2023-05-08T15:25:34,157][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-firewall-2022.10.29-2022.10.28-000001][1] marking unavailable shards as stale: [07llaSAWRLejvEqexbEG1A]
[2023-05-08T15:25:35,142][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-proctop10-2022.12.05-2022.12.04-000001][0] marking unavailable shards as stale: [Nbe06beNSMWxpqUwMtIyeQ]
[2023-05-08T15:25:36,147][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-vpn-2022.10.29-2022.10.28-000001][1] marking unavailable shards as stale: [13f_HWUXRH-zdBXk8HkAgg]
[2023-05-08T15:25:36,148][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.12.05-2022.12.04-000001][0] marking unavailable shards as stale: [nReqPUWiQSiL87zBAVs1IQ]
[2023-05-08T15:25:36,148][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-pingconnect-2022.10.29-2022.10.28-000001][0] marking unavailable shards as stale: [BSDNiQMdSPucJJRhM-Ku8g]
[2023-05-08T15:25:37,173][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-host-diskusage-2022.12.05-2022.12.04-000001][0] marking unavailable shards as stale: [tE6TmDFEQQmta1mxnJHSfQ]
[2023-05-08T15:25:37,789][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-switch-2022.10.29-2022.10.28-000001][1] marking unavailable shards as stale: [e3ejqMCJRPWiOZx0qM7-0Q]
[2023-05-08T15:25:38,822][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-netlogs-bgw-wireless-2022.10.29-2022.10.28-000001][1] marking unavailable shards as stale: [xLet4pWTT0SkBzG11g9LvQ]
[2023-05-08T15:25:38,823][WARN ][o.e.c.r.a.AllocationService] [es03] [.ds-msitelogs-ora-resources-2022.12.03-2022.12.02-000001][0] marking unavailable shards as stale: [eqoFtCjYR4ycYSzloqwgjg]

The output of GET _cluster/health:


{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 2841,
  "active_shards" : 2842,
  "relocating_shards" : 0,
  "initializing_shards" : 8,
  "unassigned_shards" : 6650,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 2697,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 139726,
  "active_shards_percent_as_number" : 29.915789473684214

Hi @maf_77,

Welcome to the community! Looking at the output of the cluster health API, your cluster is in a red state, which means you have unassigned shards; that matches the log output on node 3.

Can you share the output of the cluster allocation explain API, as covered in the docs? That may explain what is going on.

There are a few possible reasons for having so many unassigned shards or unassigned primary shards, such as disk or network issues. These resources may also help you diagnose the problem (see the example request after the list):

  1. RED Elasticsearch Cluster? Panic no longer
  2. Cluster allocation explain API
  3. Elasticsearch Red Status
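For reference, one quick way to list shards that are not STARTED, together with the reason Elasticsearch recorded for them, is a standard _cat request (nothing here is specific to your indices; adjust the host if needed):

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state

or, with curl:

curl -X GET "localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state"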

Let us know how you get on!

Given the number of nodes in your cluster, you have a very large number of shards (> 3,000 per node), far more than is generally recommended. I would not be surprised if this is causing problems.

There are improvements in later versions around handling large numbers of shards, so I would recommend upgrading as soon as possible, while also looking for ways to reduce the number of shards in the cluster.
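For reference, a quick way to see how many shards each node currently holds, along with its disk usage, is the allocation cat API (a standard request; adjust the host if needed):

GET _cat/allocation?v

or, with curl:

curl -X GET "localhost:9200/_cat/allocation?v"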


Sorry, I don't know how to get the output of the explain API covered in the docs. I am still learning ELK.

Hi @maf_77,

No worries at all! You can run the examples from the documentation in the Dev Tools console, or alternatively by sending HTTP requests to your Elasticsearch endpoint via curl:

curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" -H 'Content-Type: application/json' -d'
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
'

I would definitely take @Christian_Dahlqvist's advice regarding upgrading and reducing the number of shards in your cluster.

Hope that helps!

I ran the request:

GET /_cluster/allocation/explain?pretty

and the system replied:

{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : ".kibana-event-log-8.1.2-000013",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2023-05-12T01:50:12.202Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "yes",
  "allocate_explanation" : "can allocate the shard",
  "target_node" : {
    "id" : "MppC2PkTQTy8Ujk8KhgvUQ",
    "name" : "es03",
    "transport_address" : "10.8.100.113:9300",
    "attributes" : {
      "ml.machine_memory" : "50533294080",
      "xpack.installed" : "true",
      "ml.max_jvm_size" : "25266487296"
    }
  },
  "node_allocation_decisions" : [
    {
      "node_id" : "0L83K1e-SjiN5c6qjMHR8g",
      "node_name" : "es01",
      "transport_address" : "10.8.100.111:9300",
      "node_attributes" : {
        "ml.machine_memory" : "50533285888",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "25266487296"
      },
      "node_decision" : "yes"
    },
    {
      "node_id" : "MppC2PkTQTy8Ujk8KhgvUQ",
      "node_name" : "es03",
      "transport_address" : "10.8.100.113:9300",
      "node_attributes" : {
        "ml.machine_memory" : "50533294080",
        "xpack.installed" : "true",
        "ml.max_jvm_size" : "25266487296"
      },
      "node_decision" : "yes",
      "store" : {
        "matching_size_in_bytes" : 37983
      }
    },
    {
      "node_id" : "DEKP7e7xTsy3tkSnEuH4Cg",
      "node_name" : "es02",
      "transport_address" : "10.8.100.112:9300",
      "node_attributes" : {
        "ml.machine_memory" : "50533294080",
        "ml.max_jvm_size" : "25266487296",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "matching_size_in_bytes" : 38734
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[.kibana-event-log-8.1.2-000013][0], node[DEKP7e7xTsy3tkSnEuH4Cg], [P], s[STARTED], a[id=U6oXu-V5TpSMnBP-ITTrwQ]]"
        }
      ]
    }
  ]
}

I don't know how to run the other commands,

so I am learning how to upgrade and reduce the number of shards.
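One way to reduce the shard count of an existing index is the shrink API. The following is only a sketch with a placeholder index name (my-old-index) and a target of one primary shard; before shrinking, the index has to be made read-only and a copy of every shard has to sit on a single node (es01 is used here purely as an example):

PUT /my-old-index/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "es01",
    "index.blocks.write": true
  }
}

POST /my-old-index/_shrink/my-old-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

For new indices, the shard count is normally set in the index templates (or ILM/data stream settings) that create them, so those are also worth reviewing.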

The error message you provided indicates that your ELK (Elasticsearch, Logstash, Kibana) stack is reporting a "Service Unavailable" error with the message "License is not available." This error commonly occurs when you are using a version of Elasticsearch that requires a license to enable certain features or functionality.

Here are a few possible reasons and steps you can take to address this issue:

Check Elasticsearch version: Verify the version of Elasticsearch you are using. Elasticsearch has different license levels, such as the Basic (free) and the Platinum (paid) licenses. Certain features, like Security or Alerting, may require a paid license. If you are using a version that requires a license for specific features and you don't have a valid license, those features may be disabled, resulting in the "License is not available" error.
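For example, a simple request to the root endpoint returns the Elasticsearch version (adjust the host, and add credentials/HTTPS if security is enabled):

curl -X GET "localhost:9200/?pretty"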

Review Elasticsearch license status: Check the license status of your Elasticsearch cluster. You can do this by accessing the Elasticsearch API or the Elasticsearch cluster settings. Verify if a license is installed, expired, or missing altogether. If the license has expired or is missing, you may need to obtain and install a valid license to enable the desired features.
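For example, the license API reports the current license type (basic, trial, etc.), its status, and an expiry date for time-limited licenses. In Dev Tools:

GET _license

or with curl:

curl -X GET "localhost:9200/_license?pretty"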

Upgrade or downgrade Elasticsearch version: If you are using a version of Elasticsearch that requires a paid license for certain features, you can consider downgrading to a version that provides those features under the free license (such as the Basic license). Alternatively, you may choose to upgrade to a version that includes the desired features under the free license.

Evaluate Elasticsearch alternatives: If you require specific features that are only available with a paid license and it is not feasible for you to obtain or upgrade to a paid license, you could consider exploring alternative open-source solutions that provide similar functionality.

Seek Elasticsearch support: If you believe you should have a valid license or need assistance with obtaining or managing licenses, it is recommended to reach out to the Elasticsearch support team or consult their documentation for further guidance. They can provide more specific information and help you resolve any licensing issues.

Remember to provide relevant details, such as the Elasticsearch version, license status, and any error logs or messages, when seeking support. This will assist the support team in understanding the problem and offering appropriate solutions.

Please note that the information provided here is based on general knowledge of Elasticsearch, and it's important to consult the official Elasticsearch documentation or contact their support for the most accurate and up-to-date information regarding licensing and specific error messages.

I hope this helps you understand and address the "License is not available" error in your ELK stack.

Regards,
Rachel Gomez

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.