We are adding data to our two indexes (interaction-open_read_model_12 and interaction-closed_read_model_12), which have the aliases list_open_interactions_12 and list_closed_interactions_1. We have a 3-node cluster.
In roughly one day we add about 60 million documents to these two indexes.
Somehow, after adding 100 to 120 million documents, the indexes got deleted automatically.
When I debugged it, I did not find any trace of why the indexes were deleted. I checked the policies: we only have one, on the interaction-closed_read_model index, and it is time based (120 days), so it is certainly not doing this.
I also checked the Elasticsearch logs for any cluster restart, and I checked memory; enough memory is available.
Note: the aliases are not deleted. Only the two indexes, interaction-open_read_model_12 and interaction-closed_read_model_12, get deleted. I would also like to know why, if the indexes were deleted, the aliases were not deleted as well.
I need to know what to check next to analyze the index deletion.
Welcome to our community!
Is there anything in your Elasticsearch logs that shows these being deleted?
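One quick way to check, sketched here assuming the default RPM install log location (your logs may live elsewhere), is to grep every node's log for the INFO message the cluster writes whenever an index is deleted:
# run on each node; adjust the path if your logs are kept elsewhere
grep -i "deleting index" /var/log/elasticsearch/*.log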
@warkolm Unfortunately, there are no Elasticsearch logs indicating the index deletion.
These are the only logs that were present in my cluster log file: [screenshot]
Just 4-5 hours ago, a new type of log entry started appearing (I think I have read in one of the Elastic posts that these are not Elasticsearch specific): [screenshot]
Please don't post pictures of text; they are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
@warkolm Please find the logs below. The deletion happened at 2021-02-27T23:44, and at 2021-02-27T23:45:18 my code created the index again. I want to know why the deletion happened after adding around 220 million documents to the interaction-closed_read_model_1 and interaction-open_read_model_1 indexes:
[2021-02-27T23:44:02,672][DEBUG][o.e.c.c.PublicationTransportHandler] [node-1] received diff cluster state version [185962] with uuid [ojDy5hJRRgKh4KlVSVEGFQ], diff size [334]
[2021-02-27T23:44:02,718][DEBUG][o.e.c.s.ClusterApplierService] [node-1] processing [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]: execute
[2021-02-27T23:44:02,718][DEBUG][o.e.c.s.ClusterApplierService] [node-1] cluster state updated, version [185962], source [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-1}{GUwYQtiBQje-PvtA9SmufQ}{fGIObAj3TRipx_rr9nI_fA}{10.0.0.7}{10.0.0.7:9300}{dilm}{ml.machine_memory=31449726976, xpack.installed=true, ml.max_open_jobs=20}
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-2}{6HjYRLd4TsGvZ8Ykzy0JpA}{u_pdGdZUSq6kURY5qykAwg}{10.0.0.6}{10.0.0.6:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,719][DEBUG][o.e.c.s.ClusterApplierService] [node-1] applying settings from cluster state with version 185962
[2021-02-27T23:44:02,719][DEBUG][o.e.c.s.ClusterApplierService] [node-1] apply cluster state with version 185962
[2021-02-27T23:44:02,719][DEBUG][o.e.i.c.IndicesClusterStateService] [node-1] [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]] cleaning index, no longer part of the metadata
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] closing ... (reason [DELETED])
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] closing index service (reason [DELETED][index no longer part of the metadata])
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [0] closing... (reason: [index no longer part of the metadata])
[2021-02-27T23:44:02,720][DEBUG][o.e.i.s.IndexShard ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2021-02-27T23:44:02,720][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] close now acquiring writeLock
[2021-02-27T23:44:02,720][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] close acquired writeLock
[2021-02-27T23:44:02,723][DEBUG][o.e.i.t.Translog ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] translog closed
[2021-02-27T23:44:02,742][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] engine closed [api]
[2021-02-27T23:44:03,617][DEBUG][o.e.i.s.Store ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][0] store reference count on close: 0
[2021-02-27T23:44:03,617][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [0] closed (reason: [index no longer part of the metadata])
[2021-02-27T23:44:03,617][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [1] closing... (reason: [index no longer part of the metadata])
[2021-02-27T23:44:03,617][DEBUG][o.e.i.s.IndexShard ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2021-02-27T23:44:03,618][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] close now acquiring writeLock
[2021-02-27T23:44:03,618][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] close acquired writeLock
[2021-02-27T23:44:03,621][DEBUG][o.e.i.t.Translog ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] translog closed
[2021-02-27T23:44:03,630][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] engine closed [api]
[2021-02-27T23:44:04,311][DEBUG][o.e.i.s.Store ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][1] store reference count on close: 0
[2021-02-27T23:44:04,311][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [1] closed (reason: [index no longer part of the metadata])
[2021-02-27T23:44:04,311][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [2] closing... (reason: [index no longer part of the metadata])
[2021-02-27T23:44:04,311][DEBUG][o.e.i.s.IndexShard ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2021-02-27T23:44:04,311][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] close now acquiring writeLock
[2021-02-27T23:44:04,311][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] close acquired writeLock
[2021-02-27T23:44:04,313][DEBUG][o.e.i.t.Translog ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] translog closed
[2021-02-27T23:44:04,328][DEBUG][o.e.i.e.Engine ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] engine closed [api]
[2021-02-27T23:44:05,233][DEBUG][o.e.i.s.Store ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001][2] store reference count on close: 0
[2021-02-27T23:44:05,233][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [2] closed (reason: [index no longer part of the metadata])
[2021-02-27T23:44:05,233][DEBUG][o.e.i.c.b.BitsetFilterCache] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] clearing all bitsets because [close]
[2021-02-27T23:44:05,234][DEBUG][o.e.x.s.a.a.OptOutQueryCache] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] full cache clear, reason [close]
[2021-02-27T23:44:05,235][DEBUG][o.e.i.c.b.BitsetFilterCache] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] clearing all bitsets because [close]
[2021-02-27T23:44:05,262][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] closed... (reason [DELETED][index no longer part of the metadata])
[2021-02-27T23:44:05,262][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] deleting index store reason [index no longer part of the metadata]
[2021-02-27T23:44:05,271][DEBUG][o.e.c.s.ClusterApplierService] [node-1] set locally applied cluster state to version 185962
[2021-02-27T23:44:05,271][DEBUG][o.e.x.s.s.SecurityIndexManager] [node-1] Index [.security] is not available - no metadata
[2021-02-27T23:44:05,271][DEBUG][o.e.x.s.s.SecurityIndexManager] [node-1] Index [.security-tokens] is not available - no metadata
[2021-02-27T23:44:05,272][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] processing pending deletes
[2021-02-27T23:44:05,273][DEBUG][o.e.l.LicenseService ] [node-1] previous [LicensesMetaData{license={"uid":"97a48578-7897-495d-9c2a-e915e31c2483","type":"basic","issue_date_in_millis":1603818714114,"max_nodes":1000,"issued_to":"fusion-cluster","issuer":"elasticsearch","signature":"/////AAAANBmHmEg8/f+rp7G4NjtaihGwWrTFN8l5G2imKxLBFY651MCnvtOh8rPFqSBUu/JpU2D5j95mIfUcdlTJHVdk9NpPTFMKL5wC7BXn4CTS6kNo/CBB5aWWIF4O1KvAGtJL/ur8V25UiJpE3QWCb0mA4Ii7A6s8ifceV1hsGL4+iPXBfOhSsHlE53EcfGmMffDRO+qERw4rcNPqkRyRwrC4yCasQRBhQYU8kM+ddTk6/lr1+uXNokrMI+I+cnqsCcoBcUfoyWHFXBUWXsLBHTjLbIC","start_date_in_millis":-1}, trialVersion=null}]
[2021-02-27T23:44:05,274][DEBUG][o.e.l.LicenseService ] [node-1] current [LicensesMetaData{license={"uid":"97a48578-7897-495d-9c2a-e915e31c2483","type":"basic","issue_date_in_millis":1603818714114,"max_nodes":1000,"issued_to":"fusion-cluster","issuer":"elasticsearch","signature":"/////AAAANBmHmEg8/f+rp7G4NjtaihGwWrTFN8l5G2imKxLBFY651MCnvtOh8rPFqSBUu/JpU2D5j95mIfUcdlTJHVdk9NpPTFMKL5wC7BXn4CTS6kNo/CBB5aWWIF4O1KvAGtJL/ur8V25UiJpE3QWCb0mA4Ii7A6s8ifceV1hsGL4+iPXBfOhSsHlE53EcfGmMffDRO+qERw4rcNPqkRyRwrC4yCasQRBhQYU8kM+ddTk6/lr1+uXNokrMI+I+cnqsCcoBcUfoyWHFXBUWXsLBHTjLbIC","start_date_in_millis":-1}, trialVersion=null}]
[2021-02-27T23:44:05,274][DEBUG][o.e.c.s.ClusterApplierService] [node-1] processing [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]: took [2.6s] done applying updated cluster state (version: 185962, uuid: ojDy5hJRRgKh4KlVSVEGFQ)
[2021-02-27T23:44:10,661][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-1] Closing expired connections
[2021-02-27T23:44:15,975][DEBUG][o.e.c.c.PublicationTransportHandler] [node-1] received diff cluster state version [185963] with uuid [4yyip22OSrKkQujxeGuOEg], diff size [322]
[2021-02-27T23:44:16,019][DEBUG][o.e.c.s.ClusterApplierService] [node-1] processing [ApplyCommitRequest{term=17, version=185963, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]: execute
[2021-02-27T23:44:16,020][DEBUG][o.e.c.s.ClusterApplierService] [node-1] cluster state updated, version [185963], source [ApplyCommitRequest{term=17, version=185963, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]
[2021-02-27T23:44:16,020][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-1}{GUwYQtiBQje-PvtA9SmufQ}{fGIObAj3TRipx_rr9nI_fA}{10.0.0.7}{10.0.0.7:9300}{dilm}{ml.machine_memory=31449726976, xpack.installed=true, ml.max_open_jobs=20}
[2021-02-27T23:44:16,020][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:16,020][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-2}{6HjYRLd4TsGvZ8Ykzy0JpA}{u_pdGdZUSq6kURY5qykAwg}{10.0.0.6}{10.0.0.6:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:16,020][DEBUG][o.e.c.s.ClusterApplierService] [node-1] applying settings from cluster state with version 185963
[2021-02-27T23:44:16,020][DEBUG][o.e.c.s.ClusterApplierService] [node-1] apply cluster state with version 185963
[2021-02-27T23:44:16,022][DEBUG][o.e.i.c.IndicesClusterStateService] [node-1] [[interaction-open_read_model_1/7ioUsKZpRAG1z7k9EMXTPA]] cleaning index, no longer part of the metadata
[2021-02-27T23:44:16,022][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-open_read_model_1] closing ... (reason [DELETED])
[2021-02-27T23:44:16,022][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-open_read_model_1/7ioUsKZpRAG1z7k9EMXTPA] closing index service (reason [DELETED][index no longer part of the metadata])
[2021-02-27T23:44:16,022][DEBUG][o.e.i.IndexService ] [node-1] [interaction-open_read_model_1] [0] closing... (reason: [index no longer part of the metadata])
[2021-02-27T23:44:16,023][DEBUG][o.e.i.s.IndexShard ] [node-1] [interaction-open_read_model_1][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
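For what it is worth, these DEBUG lines only show node-1 applying a cluster state change that was published by node-3 (the sourceNode in the ApplyCommitRequest entries), so the delete-index task itself is recorded on the elected master. A quick way to confirm which node is currently master (the host and HTTP port below are assumptions based on the node addresses in the logs):
curl -XGET "http://10.0.0.7:9200/_cat/master?v"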
Which version of Elasticsearch are you using? How is your cluster configured?
@Christian_Dahlqvist Please refer to the details below:
curl -XGET "http://10.0.0.7:9200/_cat/nodes?v"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.0.0.7 45 99 25 1.29 1.29 1.37 dilm - node-1
10.0.0.5 30 94 0 0.03 0.25 0.39 dilm * node-3
10.0.0.6 28 93 0 0.03 0.28 0.37 dilm - node-2
curl -XGET "http://10.0.0.7:9200/"
{
  "name" : "node-1",
  "cluster_name" : "fusion-cluster",
  "cluster_uuid" : "8P_5Xoi8Rw6lZq4ZgtllVg",
  "version" : {
    "number" : "7.4.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "22e1767283e61a198cb4db791ea66e3f11ab9910",
    "build_date" : "2019-09-27T08:36:48.569419Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Do you have any ILM policies in place that could cause this?
@Christian_Dahlqvist We have the following policies:
{
  "closed_index_policy" : {
    "version" : 1472,
    "modified_date" : "2021-02-26T16:21:56.329Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_age" : "120d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        }
      }
    }
  },
  "slm-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-10-27T17:11:54.071Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "50gb",
              "max_age" : "30d"
            }
          }
        },
        "delete" : {
          "min_age" : "90d",
          "actions" : {
            "delete" : { }
          }
        }
      }
    }
  },
  "watch-history-ilm-policy" : {
    "version" : 1,
    "modified_date" : "2020-10-27T17:11:54.020Z",
    "policy" : {
      "phases" : {
        "delete" : {
          "min_age" : "7d",
          "actions" : {
            "delete" : { }
          }
        }
      }
    }
  }
}
We have assigned closed_index_policy only on interaction-closed_read_model_1.
curl -XGET "http://10.0.0.7:9200/list*/_ilm/explain?pretty"
{
  "indices" : {
    "interaction-open_read_model_1" : {
      "index" : "interaction-open_read_model_1",
      "managed" : false
    },
    "interaction-closed_read_model_1-2021.02.27-00001" : {
      "index" : "interaction-closed_read_model_1-2021.02.27-00001",
      "managed" : true,
      "policy" : "closed_index_policy",
      "lifecycle_date_millis" : 1614469518610,
      "age" : "12.43h",
      "phase" : "hot",
      "phase_time_millis" : 1614469519007,
      "action" : "rollover",
      "action_time_millis" : 1614469766666,
      "step" : "check-rollover-ready",
      "step_time_millis" : 1614469766666,
      "phase_execution" : {
        "policy" : "closed_index_policy",
        "phase_definition" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_age" : "120d"
            },
            "set_priority" : {
              "priority" : 100
            }
          }
        },
        "version" : 1472,
        "modified_date_in_millis" : 1614356516329
      }
    }
  }
}
The closed_index_policy appears to have a delete phase in it. Can you check if you have any index templates that assign this to the indices that are being deleted? Can you check if any indices that should not have this applied have it by mistake?
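One way to check both is to list which legacy index templates and which existing indices reference an ILM policy. A sketch against this cluster, using the template API available in 7.4 (the filter_path values are only there to trim the output):
curl -XGET "http://10.0.0.7:9200/_template?pretty&filter_path=*.settings.index.lifecycle"
curl -XGET "http://10.0.0.7:9200/_all/_settings?pretty&filter_path=*.settings.index.lifecycle"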
@Christian_Dahlqvist We have only 2 indexes: interaction-open_read_model_1 and interaction-closed_read_model_1-2021.02.27-00001, and we have applied closed_index_policy on interaction-closed_read_model_1-2021.02.27-00001 only.
By the way, both indexes get deleted after some time (1-2 days, around 100-120 million documents; there is no specific condition).
You said that closed_index_policy has a delete phase in it. Could you point out the condition for deletion? In my understanding, closed_index_policy only rolls the index over after 120 days, and that will not delete the index, right? Furthermore, the index is deleted within 2 days, not after 120 days. Please have another look at the policy:
"closed_index_policy":{
"version":1472,
"modified_date":"2021-02-26T16:21:56.329Z",
"policy":{
"phases":{
"hot":{
"min_age":"0ms",
"actions":{
"rollover":{
"max_age":"120d"
},
"set_priority":{
"priority":100
}
}
}
}
}
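As pasted, closed_index_policy contains only a hot phase with rollover and set_priority actions, so on its own it should never remove an index. For contrast, a policy that does delete indices carries an explicit delete phase, like the one in the built-in slm-history-ilm-policy shown earlier:
"delete" : {
  "min_age" : "90d",
  "actions" : {
    "delete" : { }
  }
}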
Can you check the logs on all nodes for log messages about the index getting deleted (apart from what you have posted)? There might be an INFO-level message on one of the nodes.
@Christian_Dahlqvist I am adding the logs from all of my nodes below.
Node -3
[2021-02-27T23:43:42,434][DEBUG][o.e.i.s.ReplicationTracker] [node-3] [interaction-closed_read_model_1-2021.02.22-00001][2] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=24936, leases={peer_recovery/GUwYQtiBQje-PvtA9SmufQ=RetentionLease{id='peer_recovery/GUwYQtiBQje-PvtA9SmufQ', retainingSequenceNumber=59941391, timestamp=1614468672369, source='peer recovery'}, peer_recovery/aps90vcbR_uIgd88d9B0Rg=RetentionLease{id='peer_recovery/aps90vcbR_uIgd88d9B0Rg', retainingSequenceNumber=59941391, timestamp=1614468672369, source='peer recovery'}, peer_recovery/6HjYRLd4TsGvZ8Ykzy0JpA=RetentionLease{id='peer_recovery/6HjYRLd4TsGvZ8Ykzy0JpA', retainingSequenceNumber=59941391, timestamp=1614468672369, source='peer recovery'}}}]
[2021-02-27T23:43:51,392][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-3] Closing expired connections
[2021-02-27T23:44:01,393][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-3] Closing expired connections
[2021-02-27T23:44:02,655][DEBUG][o.e.c.s.MasterService ] [node-3] executing cluster state update for [delete-index [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]]]
[2021-02-27T23:44:02,656][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-3] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] deleting index
[2021-02-27T23:44:02,658][DEBUG][o.e.c.s.MasterService ] [node-3] took [0s] to compute cluster state update for [delete-index [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]]]
[2021-02-27T23:44:02,659][DEBUG][o.e.c.s.MasterService ] [node-3] cluster state updated, version [185962], source [delete-index [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]]]
[2021-02-27T23:44:02,659][DEBUG][o.e.c.s.MasterService ] [node-3] publishing cluster state version [185962]
[2021-02-27T23:44:02,664][DEBUG][o.e.c.c.PublicationTransportHandler] [node-3] received diff cluster state version [185962] with uuid [ojDy5hJRRgKh4KlVSVEGFQ], diff size [334]
[2021-02-27T23:44:08,921][DEBUG][o.e.c.s.ClusterApplierService] [node-3] processing [Publication{term=17, version=185962}]: execute
[2021-02-27T23:44:08,921][DEBUG][o.e.c.s.ClusterApplierService] [node-3] cluster state updated, version [185962], source [Publication{term=17, version=185962}]
[2021-02-27T23:44:08,922][DEBUG][o.e.c.NodeConnectionsService] [node-3] connected to {node-1}{GUwYQtiBQje-PvtA9SmufQ}{fGIObAj3TRipx_rr9nI_fA}{10.0.0.7}{10.0.0.7:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:08,922][DEBUG][o.e.c.NodeConnectionsService] [node-3] connected to {node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, xpack.installed=true, ml.max_open_jobs=20}
[2021-02-27T23:44:08,922][DEBUG][o.e.c.NodeConnectionsService] [node-3] connected to {node-2}{6HjYRLd4TsGvZ8Ykzy0JpA}{u_pdGdZUSq6kURY5qykAwg}{10.0.0.6}{10.0.0.6:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:08,922][DEBUG][o.e.c.s.ClusterApplierService] [node-3] applying settings from cluster state with version 185962
[2021-02-27T23:44:08,923][DEBUG][o.e.c.s.ClusterApplierService] [node-3] apply cluster state with version 185962
[2021-02-27T23:44:08,923][DEBUG][o.e.i.c.IndicesClusterStateService] [node-3] [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]] cleaning index, no longer part of the metadata
[2021-02-27T23:44:08,923][DEBUG][o.e.i.IndicesService ] [node-3] [interaction-closed_read_model_1-2021.02.22-00001] closing ... (reason [DELETED])
Node -2
[2021-02-27T23:43:35,030][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-2] Closing expired connections
[2021-02-27T23:43:45,031][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-2] Closing expired connections
[2021-02-27T23:43:48,302][DEBUG][o.e.i.s.ReplicationTracker] [node-2] [read_me][0] no retention leases are expired from current retention leases [RetentionLeases{primaryTerm=5, version=56, leases={peer_recovery/aps90vcbR_uIgd88d9B0Rg=RetentionLease{id='peer_recovery/aps90vcbR_uIgd88d9B0Rg', retainingSequenceNumber=1, timestamp=1614460307964, source='peer recovery'}, peer_recovery/6HjYRLd4TsGvZ8Ykzy0JpA=RetentionLease{id='peer_recovery/6HjYRLd4TsGvZ8Ykzy0JpA', retainingSequenceNumber=1, timestamp=1614460307964, source='peer recovery'}}}]
[2021-02-27T23:43:55,031][DEBUG][o.a.h.i.c.PoolingHttpClientConnectionManager] [node-2] Closing expired connections
[2021-02-27T23:44:02,675][DEBUG][o.e.c.c.PublicationTransportHandler] [node-2] received diff cluster state version [185962] with uuid [ojDy5hJRRgKh4KlVSVEGFQ], diff size [334]
[2021-02-27T23:44:02,710][DEBUG][o.e.c.s.ClusterApplierService] [node-2] processing [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]: execute
[2021-02-27T23:44:02,710][DEBUG][o.e.c.s.ClusterApplierService] [node-2] cluster state updated, version [185962], source [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]
[2021-02-27T23:44:02,711][DEBUG][o.e.c.NodeConnectionsService] [node-2] connected to {node-1}{GUwYQtiBQje-PvtA9SmufQ}{fGIObAj3TRipx_rr9nI_fA}{10.0.0.7}{10.0.0.7:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,711][DEBUG][o.e.c.NodeConnectionsService] [node-2] connected to {node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,711][DEBUG][o.e.c.NodeConnectionsService] [node-2] connected to {node-2}{6HjYRLd4TsGvZ8Ykzy0JpA}{u_pdGdZUSq6kURY5qykAwg}{10.0.0.6}{10.0.0.6:9300}{dilm}{ml.machine_memory=31449726976, xpack.installed=true, ml.max_open_jobs=20}
[2021-02-27T23:44:02,711][DEBUG][o.e.c.s.ClusterApplierService] [node-2] applying settings from cluster state with version 185962
[2021-02-27T23:44:02,712][DEBUG][o.e.c.s.ClusterApplierService] [node-2] apply cluster state with version 185962
[2021-02-27T23:44:02,712][DEBUG][o.e.i.c.IndicesClusterStateService] [node-2] [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]] cleaning index, no longer part of the metadata
[2021-02-27T23:44:02,712][DEBUG][o.e.i.IndicesService ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001] closing ... (reason [DELETED])
[2021-02-27T23:44:02,713][DEBUG][o.e.i.IndicesService ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] closing index service (reason [DELETED][index no longer part of the metadata])
[2021-02-27T23:44:02,713][DEBUG][o.e.i.IndexService ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001] [0] closing... (reason: [index no longer part of the metadata])
[2021-02-27T23:44:02,713][DEBUG][o.e.i.s.IndexShard ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2021-02-27T23:44:02,714][DEBUG][o.e.i.e.Engine ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001][0] close now acquiring writeLock
[2021-02-27T23:44:02,714][DEBUG][o.e.i.e.Engine ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001][0] close acquired writeLock
[2021-02-27T23:44:02,718][DEBUG][o.e.i.t.Translog ] [node-2] [interaction-closed_read_model_1-2021.02.22-00001][0] translog closed
Node -1
[2021-02-27T23:44:02,718][DEBUG][o.e.c.s.ClusterApplierService] [node-1] cluster state updated, version [185962], source [ApplyCommitRequest{term=17, version=185962, sourceNode={node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}}]
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-1}{GUwYQtiBQje-PvtA9SmufQ}{fGIObAj3TRipx_rr9nI_fA}{10.0.0.7}{10.0.0.7:9300}{dilm}{ml.machine_memory=31449726976, xpack.installed=true, ml.max_open_jobs=20}
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-3}{aps90vcbR_uIgd88d9B0Rg}{ArS4k9tkTlWKyFrUOQsIKA}{10.0.0.5}{10.0.0.5:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,718][DEBUG][o.e.c.NodeConnectionsService] [node-1] connected to {node-2}{6HjYRLd4TsGvZ8Ykzy0JpA}{u_pdGdZUSq6kURY5qykAwg}{10.0.0.6}{10.0.0.6:9300}{dilm}{ml.machine_memory=31449726976, ml.max_open_jobs=20, xpack.installed=true}
[2021-02-27T23:44:02,719][DEBUG][o.e.c.s.ClusterApplierService] [node-1] applying settings from cluster state with version 185962
[2021-02-27T23:44:02,719][DEBUG][o.e.c.s.ClusterApplierService] [node-1] apply cluster state with version 185962
[2021-02-27T23:44:02,719][DEBUG][o.e.i.c.IndicesClusterStateService] [node-1] [[interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg]] cleaning index, no longer part of the metadata
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] closing ... (reason [DELETED])
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndicesService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001/Cg9Jqpq1TF6ebC5MmBRDrg] closing index service (reason [DELETED][index no longer part of the metadata])
[2021-02-27T23:44:02,720][DEBUG][o.e.i.IndexService ] [node-1] [interaction-closed_read_model_1-2021.02.22-00001] [0] closing... (reason: [index no longer part of the metadata])
@Christian_Dahlqvist Apart from the logs above, I am also sharing the logs below, which may help debug this further.
node 1:
[2021-02-27T23:28:59,358][DEBUG][o.e.i.IndexingMemoryController] [node-1] now write some indexing buffers: total indexing heap bytes used [819.1mb] vs indices.memory.index_buffer_size [815.8mb], currently writing bytes [0b], [3] shards with non-zero indexing buffer
[2021-02-27T23:28:59,358][DEBUG][o.e.i.IndexingMemoryController] [node-1] write indexing buffer to disk for shard [[interaction-closed_read_model_1-2021.02.22-00001][1]] to free up its [273.4mb] indexing buffer
[2021-02-27T23:34:04,539][DEBUG][o.e.i.IndexingMemoryController] [node-1] now write some indexing buffers: total indexing heap bytes used [817.2mb] vs indices.memory.index_buffer_size [815.8mb], currently writing bytes [0b], [6] shards with non-zero indexing buffer
[2021-02-27T23:34:04,539][DEBUG][o.e.i.IndexingMemoryController] [node-1] write indexing buffer to disk for shard [[interaction-closed_read_model_1-2021.02.22-00001][0]] to free up its [304.8mb] indexing buffer
node 3:
[2021-02-27T23:39:26,354][DEBUG][o.e.x.i.IndexLifecycleRunner] [node-3] policy [closed_index_policy] for index [interaction-closed_read_model_1-2021.02.22-00001] on an error step, skipping execution
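That last node-3 line means ILM hit an error on that index and stopped executing closed_index_policy for it; since the policy is skipped while in the error state, this by itself should not delete anything. While the index still exists, the error details appear under step_info in the ILM explain output, and the failed step can be re-run with the retry API once the cause is fixed. A sketch, with the index name taken from the log line above:
curl -XGET "http://10.0.0.7:9200/interaction-closed_read_model_1-*/_ilm/explain?pretty"
curl -XPOST "http://10.0.0.7:9200/interaction-closed_read_model_1-2021.02.22-00001/_ilm/retry"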
@warkolm @Christian_Dahlqvist Please update on this.
FYI, we don't provide SLAs here; if you need one, we can connect you with our team for a subscription.
If there's nothing in your ILM policies, then something external must be making the request. Do you have Security enabled?
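Two ways to narrow down an external caller, sketched here as suggestions rather than confirmed fixes: block wildcard deletes, so that a stray DELETE using a wildcard or _all fails instead of removing indices, and record who issues each request with audit logging (audit logging is not included in the basic license this cluster is running). Both go into elasticsearch.yml on every node and need a restart:
# refuse DELETE requests that use wildcards or _all, so only explicitly named indices can be deleted
action.destructive_requires_name: true
# log every authenticated request, including index deletions; needs security enabled and a paid license tier
xpack.security.audit.enabled: true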
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.