Hello everyone,
I have set up an Elasticsearch cluster of 3 nodes:
2 are in one DC (node-1 & node-2) and 1 is in another DC (node-3).
The nodes communicate with each other over HTTPS.
Here is the Elasticsearch version on each of my nodes:
{
"name" : "node-01",
"cluster_name" : "cluster",
"cluster_uuid" : "//////////////////////",
"version" : {
"number" : "6.8.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "65b////",
"build_date" : "2019-05-15T20:06:13.172855Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
{
"name" : "node-02",
"cluster_name" : "cluster",
"cluster_uuid" : "//////////////////////",
"version" : {
"number" : "6.8.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "65b////",
"build_date" : "2019-05-15T20:06:13.172855Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
{
"name" : "node-03",
"cluster_name" : "cluster",
"cluster_uuid" : "//////////////////////",
"version" : {
"number" : "6.8.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "65b////",
"build_date" : "2019-05-15T20:06:13.172855Z",
"build_snapshot" : false,
"lucene_version" : "7.7.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
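For reference, the version blocks above are what each node returns when its root endpoint is queried (shown here in Kibana console syntax; the same information comes back over HTTPS with curl):

```
GET /
```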
I have quite a few recurring problems in the logs, and I would like to stop getting [WARN] messages, both to have a clean cluster and to avoid polluting our monitoring.
Here are the recurring errors.
On the first node:
[2020-01-31T13:43:01,318][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.31][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,319][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.30][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,320][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][2]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,321][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,322][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][3]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,323][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.29][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,324][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-es-6-2020.01.28][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,324][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][1]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,325][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][4]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,326][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index][2]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,331][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-es-6-2020.01.27][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,332][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-es-6-2020.01.26][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,333][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.26][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,334][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-es-6-2020.01.25][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,335][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.25][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,336][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-kibana-6-2020.01.24][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,337][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.monitoring-es-6-2020.01.24][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,338][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][3]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,339][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][2]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,340][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,350][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [archivage-config][3]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,351][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.kibana_1][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,352][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [.kibana_task_manager][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,355][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,348][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][1]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,349][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][4]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,369][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [name of an ES index_03][2]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,371][WARN ][o.e.g.G.InternalReplicaShardAllocator] [Node-1] [searchguard][0]: failed to list shard for shard_store on node [l4lQuRv6e-BaHsOTQ]
[2020-01-31T13:43:01,465][WARN ][o.e.d.z.PublishClusterStateAction] [Node-1] publishing cluster state with version [46646] failed for the following nodes: [[{Node-3}{l4lQuRv6e-BaHsOTQ}{tx8Yiai7QZalfilMPzPj9g}{192.168.112.173}{192.168.112.173:9300}{ml.machine_memory=8339591168, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]]
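All of these InternalReplicaShardAllocator warnings, together with the failed cluster state publication, involve the same node ID [l4lQuRv6e-BaHsOTQ], i.e. Node-3 in the other DC, which suggests a connectivity or timeout problem between the two datacenters rather than a per-index issue. As a sketch of how to get more detail on one of the affected shards (here shard 3 of archivage-config, taken from the warnings above), the cluster allocation explain API can be queried:

```
GET /_cluster/allocation/explain
{
  "index": "archivage-config",
  "shard": 3,
  "primary": false
}
```

The response should say why the replica is unassigned or why the shard-store listing on Node-3 failed.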
On the second node:
[2020-01-31T13:41:26,401][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:41:38,959][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:41:46,505][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:41:56,570][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:42:11,709][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:42:26,746][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:42:39,325][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
[2020-01-31T13:42:46,888][WARN ][r.suppressed ] [Node-2] path: /_template/.management-beats, params: {include_type_name=true, name=.management-beats}
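These `r.suppressed` warnings only log the failing request path and parameters (here, presumably Kibana repeatedly trying to install its `.management-beats` template); the underlying exception is usually printed in a stack trace right after each line, which would show the real cause. To inspect the current state of that template, it can be queried directly:

```
GET /_template/.management-beats
```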
And finally, here are the WARN messages from the third node:
[2020-01-31T13:39:51,694][WARN ][o.e.x.m.MonitoringService] [Node-3] monitoring execution failed
[2020-01-31T13:40:10,894][WARN ][r.suppressed ] [Node-3] path: /_xpack/monitoring/_bulk, params: {system_id=kibana, system_api_version=6, interval=10000ms}
[2020-01-31T13:40:40,891][WARN ][r.suppressed ] [Node-3] path: /_xpack/monitoring/_bulk, params: {system_id=kibana, system_api_version=6, interval=10000ms}
[2020-01-31T13:41:01,696][WARN ][o.e.x.m.MonitoringService] [Node-3] monitoring execution failed
[2020-01-31T13:41:10,894][WARN ][r.suppressed ] [Node-3] path: /_xpack/monitoring/_bulk, params: {system_id=kibana, system_api_version=6, interval=10000ms}
[2020-01-31T13:41:40,894][WARN ][r.suppressed ] [Node-3] path: /_xpack/monitoring/_bulk, params: {system_id=kibana, system_api_version=6, interval=10000ms}
[2020-01-31T13:42:10,901][WARN ][r.suppressed ] [Node-3] path: /_xpack/monitoring/_bulk, params: {system_id=kibana, system_api_version=6, interval=10000ms}
[2020-01-31T13:42:11,696][WARN ][o.e.x.m.MonitoringService] [Node-3] monitoring execution failed
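The MonitoringService failures and the `/_xpack/monitoring/_bulk` warnings suggest that Node-3 cannot index the monitoring data coming from Kibana, which would fit the same cross-DC connectivity issue. As a first check (a sketch, assuming default local monitoring exporters), the health of the cluster and of the monitoring indices can be listed:

```
GET /_cat/health?v
GET /_cat/indices/.monitoring-*?v
```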
Thanks in advance for your help.