Getting MasterNotDiscoveredException: null in client node

We have 3 master nodes, 2 client nodes, and 8 data nodes. The client nodes' ES logs show the error below even though resource usage (JVM and CPU) looked fine at the time, and the cluster became unresponsive. All nodes were up and running during that period.
ES details:
Version: 7.6.0
Disk type: st1
Instance types: master: c5.2xlarge, client: r5.4xlarge, data: c5.9xlarge
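
When the exception appears, the first thing we check is whether any node still knows about an elected master. A minimal sketch (host and credentials are placeholders; security/TLS is enabled as in the config further down, hence the `-u`/`-k` flags):

```sh
# Which node, if any, is currently the elected master, and overall health.
# Placeholder host/credentials; -k skips CA verification for brevity.
curl -sk -u elastic:changeme "https://client01:9200/_cat/master?v"
curl -sk -u elastic:changeme "https://client01:9200/_cluster/health?pretty"
```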

ES log of client node 1:

```
[2021-01-26T17:46:54,024][WARN ][o.e.m.j.JvmGcMonitorService] [client01] [gc][77917] overhead, spent [780ms] collecting in the last [1.5s]
[2021-01-26T17:46:58,272][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][77921] overhead, spent [317ms] collecting in the last [1.2s]
[2021-01-26T17:47:06,274][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][77929] overhead, spent [299ms] collecting in the last [1s]
[2021-01-26T17:47:12,387][WARN ][o.e.t.TransportService   ] [client01] Received response for a request that has timed out, sent [18343ms] ago, timed out [8294ms] ago, action [internal:coordination/fault_detection/leader_check], node [{master01}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}], id [3174042]
[2021-01-26T17:47:13,637][WARN ][o.e.m.j.JvmGcMonitorService] [client01] [gc][77936] overhead, spent [736ms] collecting in the last [1.3s]
[2021-01-26T17:47:35,168][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][77957] overhead, spent [303ms] collecting in the last [1s]
[2021-01-26T17:49:18,662][INFO ][o.e.c.s.ClusterApplierService] [client01] removed {{inst-pa-prod-es-india-client02-ieb}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}, term: 76, version: 81858, reason: ApplyCommitRequest{term=76, version=81858, sourceNode={master01}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}}
[2021-01-26T17:49:43,282][INFO ][o.e.c.s.ClusterApplierService] [client01] added {{inst-pa-prod-es-india-client02-ieb}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}, term: 76, version: 81859, reason: ApplyCommitRequest{term=76, version=81859, sourceNode={master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}}
[2021-01-26T17:54:48,527][INFO ][o.e.c.s.ClusterApplierService] [client01] removed {{client02}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}, term: 76, version: 81860, reason: ApplyCommitRequest{term=76, version=81860, sourceNode={master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}}
[2021-01-26T17:55:09,472][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78411] overhead, spent [371ms] collecting in the last [1s]
[2021-01-26T17:55:11,809][WARN ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78413] overhead, spent [976ms] collecting in the last [1.3s]
[2021-01-26T17:55:13,001][INFO ][o.e.c.s.ClusterApplierService] [client01] added {{inst-pa-prod-es-india-client02-ieb}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}, term: 76, version: 81861, reason: ApplyCommitRequest{term=76, version=81861, sourceNode={master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}}
[2021-01-26T17:55:13,809][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78415] overhead, spent [483ms] collecting in the last [1s]
[2021-01-26T17:55:15,810][WARN ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78417] overhead, spent [535ms] collecting in the last [1s]
[2021-01-26T17:55:35,427][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78436] overhead, spent [355ms] collecting in the last [1s]
[2021-01-26T17:56:00,427][INFO ][o.e.c.c.Coordinator      ] [client01] master node [{inst-pa-prod-es-india-master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}] failed, restarting discovery
org.elasticsearch.ElasticsearchException: node [{master01}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}] failed [3] consecutive checks
    at org.elasticsearch.cluster.coordination.LeaderChecker$CheckScheduler$1.handleException(LeaderChecker.java:277) ~[elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) ~[elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1019) ~[elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [master01][xx.xx.xx.xx:9300][internal:coordination/fault_detection/leader_check] request_id [3325831] timed out after [10007ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1020) ~[elasticsearch-7.6.0.jar:7.6.0]
    ... 4 more
[2021-01-26T17:56:00,429][INFO ][o.e.c.s.ClusterApplierService] [client01] master node changed {previous [{master01}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}], current []}, term: 76, version: 81861, reason: becoming candidate: onLeaderFailure
[2021-01-26T17:56:05,235][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:05,243][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:10,429][WARN ][o.e.c.c.ClusterFormationFailureHelper] [client01] master not discovered yet: have discovered [{client01}{Ey-5qinRRxmuWrXcIzpjOw}{8BCI636aRt6hWFkObl4Ang}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}]; discovery will continue using [xx.xx.xx.xx:9300, xx.xx.xx.xx:9300, xx.xx.xx.xx:9300] from hosts providers and [{master02}{4gRHB4GGRaClZsZKgBcUCg}{D4akLWbaROmRKvjfRwL8NQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}, {master03-ieb}{Y1luhxqgQPaKvKNSTkxxbg}{nrwN3SI1RRCvey6xYHZHOQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}] from last-known cluster state; node term 76, last-accepted version 81861 in term 76
[2021-01-26T17:56:15,241][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:15,251][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:17,432][INFO ][o.e.m.j.JvmGcMonitorService] [client01] [gc][78478] overhead, spent [262ms] collecting in the last [1s]
[2021-01-26T17:56:20,430][WARN ][o.e.c.c.ClusterFormationFailureHelper] [client01] master not discovered yet: have discovered [{client01}{Ey-5qinRRxmuWrXcIzpjOw}{8BCI636aRt6hWFkObl4Ang}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}]; discovery will continue using [xx.xx.xx.xx:9300, xx.xx.xx.xx:9300, xx.xx.xx.xx:9300] from hosts providers and [{master02-ied}{4gRHB4GGRaClZsZKgBcUCg}{D4akLWbaROmRKvjfRwL8NQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}, {master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}, {master03-ieb}{Y1luhxqgQPaKvKNSTkxxbg}{nrwN3SI1RRCvey6xYHZHOQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}] from last-known cluster state; node term 76, last-accepted version 81861 in term 76
[2021-01-26T17:56:25,248][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:25,259][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] no known master node, scheduling a retry
[2021-01-26T17:56:30,430][WARN ][o.e.c.c.ClusterFormationFailureHelper] [client01] master not discovered yet: have discovered [{client01}{Ey-5qinRRxmuWrXcIzpjOw}{8BCI636aRt6hWFkObl4Ang}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}, {master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}]; discovery will continue using [xx.xx.xx.xx:9300, xx.xx.xx.xx:9300, xx.xx.xx.xx:9300] from hosts providers and [{master02-ied}{4gRHB4GGRaClZsZKgBcUCg}{D4akLWbaROmRKvjfRwL8NQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}, {master01-ied}{cPXl0FzHRpK75Odd35s7kw}{995IpeFhR7u-hegScdrpOg}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}, {master03-ieb}{Y1luhxqgQPaKvKNSTkxxbg}{nrwN3SI1RRCvey6xYHZHOQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{im}{xpack.installed=true}] from last-known cluster state; node term 76, last-accepted version 81861 in term 76
[2021-01-26T17:56:35,235][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2021-01-26T17:56:35,235][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2021-01-26T17:56:35,235][WARN ][r.suppressed             ] [client01] path: /_cluster/state/master_node, params: {metric=master_node}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:220) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:598) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
[2021-01-26T17:56:35,236][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
[2021-01-26T17:56:35,236][WARN ][r.suppressed             ] [client01] path: /_cluster/state/master_node, params: {metric=master_node}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$2.onTimeout(TransportMasterNodeAction.java:220) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:325) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:252) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:598) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
[2021-01-26T17:56:35,236][DEBUG][o.e.a.s.m.TransportMasterNodeAction] [client01] timed out while retrying [cluster:monitor/state] after failure (timeout [30s])
```
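
The trace above shows the client dropping the master after [3] consecutive leader checks timed out (the 7.x defaults are a 10s timeout and 3 retries, matching the [10007ms] timeouts logged). When it recurs, we try to capture what the elected master is busy with at that moment; a hedged sketch (host and credentials are placeholders):

```sh
# Hot threads on whichever node currently holds the elected master role,
# plus any backlog of pending cluster-state updates.
# Placeholder host/credentials.
curl -sk -u elastic:changeme "https://client01:9200/_nodes/_master/hot_threads?threads=5"
curl -sk -u elastic:changeme "https://client01:9200/_cluster/pending_tasks?pretty"
```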

ES log of master01:
```
[2021-01-26T17:26:00,958][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [master01] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200, remoteAddress=/xx.xx.xx.xx:55690}
[2021-01-26T17:49:18,660][INFO ][o.e.c.s.MasterService    ] [master01] node-left[{client02-ieb}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true} reason: followers check retry count exceeded], term: 76, version: 81858, delta: removed {{client02-ieb}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}
[2021-01-26T17:49:18,672][INFO ][o.e.c.s.ClusterApplierService] [master01] removed {{client02}{mDGTwIuMRUWI3I41s2sRgQ}{xYMTwJhLTVS8FlBE3vOjnQ}{xx.xx.xx.xx}{xx.xx.xx.xx:9300}{i}{xpack.installed=true}}, term: 76, version: 81858, reason: Publication{term=76, version=81858}
[2021-01-26T17:49:18,675][WARN ][r.suppressed             ] [master01] path: /_enrich/_stats, params: {}
org.elasticsearch.action.FailedNodeException: Failed node [mDGTwIuMRUWI3I41s2sRgQ]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:221) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:142) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:196) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$8.run(TransportService.java:980) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [client02-ieb][xx.xx.xx.xx:9300][cluster:admin/xpack/enrich/coordinator_stats[n]] disconnected
[2021-01-26T17:49:18,679][DEBUG][o.e.a.a.c.s.TransportClusterStatsAction] [master01] failed to execute on node [mDGTwIuMRUWI3I41s2sRgQ]
org.elasticsearch.transport.NodeDisconnectedException: [client02][xx.xx.xx.xx:9300][cluster:monitor/stats[n]] disconnected
[2021-01-26T17:49:18,680][WARN ][r.suppressed             ] [master01] path: /_enrich/_stats, params: {}
org.elasticsearch.action.FailedNodeException: Failed node [mDGTwIuMRUWI3I41s2sRgQ]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:221) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:142) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:196) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$8.run(TransportService.java:980) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [client02][xx.xx.xx.xx:9300][cluster:admin/xpack/enrich/coordinator_stats[n]] disconnected
[2021-01-26T17:49:18,680][DEBUG][o.e.a.a.c.s.TransportClusterStatsAction] [master01] failed to execute on node [mDGTwIuMRUWI3I41s2sRgQ]
org.elasticsearch.transport.NodeDisconnectedException: [client02][xx.xx.xx.xx:9300][cluster:monitor/stats[n]] disconnected
[2021-01-26T17:49:18,681][WARN ][r.suppressed             ] [master01] path: /_enrich/_stats, params: {}
org.elasticsearch.action.FailedNodeException: Failed node [mDGTwIuMRUWI3I41s2sRgQ]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:221) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$100(TransportNodesAction.java:142) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:196) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1118) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.transport.TransportService$8.run(TransportService.java:980) [elasticsearch-7.6.0.jar:7.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:633) [elasticsearch-7.6.0.jar:7.6.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_232]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_232]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
Caused by: org.elasticsearch.transport.NodeDisconnectedException: [client02][xx.xx.xx.xx:9300][cluster:admin/xpack/enrich/coordinator_stats[n]] disconnected
[2021-01-26T17:49:27,021][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [master01] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=0.0.0.0/0.0.0.0:9200, remoteAddress=/xx.xx.xx.xx:36224}
```
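
The repeated "received plaintext http traffic on an https channel" warnings mean something (often a load balancer or monitoring probe) is polling port 9200 over plain http. That is unrelated to the master loss, but whatever probes the cluster should speak TLS, roughly like this (host and credentials are placeholders):

```sh
# Health probe against the secured HTTP port; plain http://... produces the
# "plaintext http traffic" warning seen above. Placeholder host/credentials.
curl -sk -u elastic:changeme "https://master01:9200/_cluster/health?pretty"
```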

Unless all data fits in the operating system page cache and is not updated, Elasticsearch is often limited by disk I/O performance. You appear to be using quite powerful nodes together with very slow storage, which is an unusual combination. Can you please describe the use case and how the cluster is configured?
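
One way to check whether the data nodes are I/O-bound is to compare the filesystem stats Elasticsearch itself reports with OS-level device utilisation during a slow period; a rough sketch (host and credentials are placeholders):

```sh
# Per-node filesystem stats as Elasticsearch sees them.
# Placeholder host/credentials.
curl -sk -u elastic:changeme "https://client01:9200/_nodes/stats/fs?pretty"

# On a data node: device-level view; watch %util and await on the st1 volume.
iostat -x 1 5
```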

We perform search operations only (i.e. read-only), with many concurrent requests at a time. The queries contain terms queries and aggregations, and all requests are submitted via the client nodes. A sample yml file for a client node is given below, followed by an example of the kind of query we run:

```yaml
cluster.name: xxxx

cluster.fault_detection.follower_check.timeout: 30s
cluster.fault_detection.follower_check.retry_count: 3
node.name: xxxxx
node.master: false
node.data: false
path.data: /opt/es1
path.logs: /eslogs
discovery.seed_hosts: ["xx.xx.xx.xx","xx.xx.xx.xx","xx.xx.xx.xx"]
discovery.zen.minimum_master_nodes: 3
network.host: ["localhost", "::1", "xx.xx.xx.xx"]
bootstrap.memory_lock: true
transport.compress: true
http.port: 9200
action.destructive_requires_name: true
indices.memory.index_buffer_size: 42%
xpack.license.self_generated.type: basic
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: false
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: /etc/elasticsearch/xyz.certificate/private.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/xyz.certificate/primary.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/xyz.certificate/intermediate.crt" ]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: /etc/elasticsearch/xyz.certificate/private.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/xyz.certificate/primary.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/xyz.certificate/intermediate.crt" ]
xpack.ml.enabled: false
```
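
For illustration, a hypothetical query of the shape we run; index name, field names, host, and credentials are all placeholders:

```sh
# A terms filter plus a terms aggregation, sent through a client node.
# Placeholder index/fields/host/credentials.
curl -sk -u elastic:changeme -X GET "https://client01:9200/my-index/_search?pretty" \
  -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "query": { "terms": { "status": ["active", "pending"] } },
  "aggs": {
    "by_region": { "terms": { "field": "region", "size": 10 } }
  }
}'
```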

Any suggestions for this kind of issue?
