When I delete an index on one node, the index comes back, as if it were synchronized from the other nodes:
curl -XDELETE http://192.168.31.25:9292/logstash-task-33091-20170531
I am not sure I understand what you are asking. Did the delete of this index fail? If so, was there an error message or did it just time out?
Based on the name of your index it looks like you have a lot of indices, which can be very inefficient. How many indices and shards do you have in the cluster? How much data? What is the size of the cluster?
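For reference, the _cat APIs report all of that; a quick check could look like this (host and port here are placeholders, adjust them to your own node):
curl 'http://localhost:9200/_cat/health?v'
curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/_cat/allocation?v'
The ?v flag adds column headers, and _cat/allocation shows the shard count and disk usage per node.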
This screenshot is from the master node:
nodes in cluster: 2
indices: 278
shards/cluster: 278
can you see the picture?
Yes, that seems to be a reasonable number of indices. What error do you get? Is there anything in the Elasticsearch logs?
nothing...
[2017-06-06T20:20:00,570][INFO ][o.e.c.m.MetaDataCreateIndexService] [es-01] [logstash-task-521874-20170531] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[1], mappings [_default_]
[2017-06-06T20:20:00,786][INFO ][o.e.c.r.a.AllocationService] [es-01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-task-521874-20170531][0]] ...]).
[2017-06-06T20:20:01,415][INFO ][o.e.c.m.MetaDataMappingService] [es-01] [logstash-task-521874-20170531/ekwOc4OsRXucCaCjChod7g] create_mapping [logs]
What was the response from your curl request?
[root@es-01 elasticsearch]# curl -v -XDELETE http://xxx.xxx.25.82:9292/logstash-task-521874-20170531
* About to connect() to xxx.xxx.25.82 port 9292 (#0)
* Trying xxx.xxx.25.82...
* Connected to xxx.xxx.25.82 (xxx.xxx.25.82) port 9292 (#0)
> DELETE /logstash-task-521874-20170531 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: xxx.xxx.25.82:9292
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
<
* Connection #0 to host xxx.xxx.25.82 left intact
{"acknowledged":true}
That seems to have worked. What do you get if you run curl http://xxx.xxx.25.82:9292/_cat/indices | grep logstash-task-521874-20170531 ?
[root@es-01 elasticsearch]# curl -v -XDELETE http://10.173.25.82:9292/logstash-task-521874-20170531
* About to connect() to 10.173.25.82 port 9292 (#0)
* Trying 10.173.25.82...
* Connected to 10.173.25.82 (10.173.25.82) port 9292 (#0)
> DELETE /logstash-task-521874-20170531 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.173.25.82:9292
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
<
* Connection #0 to host 10.173.25.82 left intact
{"acknowledged":true}[root@es-01 elasticsearch]#
[root@es-01 elasticsearch]# curl http://10.173.25.82:9292/_cat/indices | grep logstash-task-521874-20170531
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 25676 100 25676 0 0 106k 0 --:--:-- --:--:-- --:--:-- 106k
green open logstash-task-521874-20170531 3FQrGE4qRjCVS-ZEuJUhow 1 1 223 0 133.1kb 66.4kb
Besides the logstash indices, this cluster also has other indices, and those other indices can be deleted normally.
This is the logstash template:
{
"template": "logstash-*",
"settings": {
"index.number_of_shards" : 1,
"number_of_replicas" : 1,
"index": {
"refresh_interval": "5s"
}
},
"mappings": {
"_default_": {
"dynamic_templates": [
{
"message_field": {
"path_match": "message",
"mapping": {
"norms": false,
"type": "text"
},
"match_mapping_type": "string"
}
},
{
"string_fields": {
"mapping": {
"norms": false,
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
},
"match_mapping_type": "string",
"match": "*"
}
}
],
"_all": {
"norms": false,
"enabled": true
},
"properties": {
"@timestamp": {
"include_in_all": false,
"type": "date"
},
"geoip": {
"dynamic": true,
"properties": {
"ip": {
"type": "ip"
},
"latitude": {
"type": "half_float"
},
"location": {
"type": "geo_point"
},
"longitude": {
"type": "half_float"
}
}
},
"@version": {
"include_in_all": false,
"type": "keyword"
}
}
}
},
"aliases": {}
}
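Note that this template matches "logstash-*", so any bulk request targeting an index name that matches the pattern will silently re-create the index with these settings; that is exactly what the earlier log line "creating index, cause [auto(bulk api)]" shows. Purely as an illustration (the document body is made up; the index name and type are taken from your log lines), a write like the following would bring the index back immediately after a delete:
curl -XPOST 'http://10.173.25.82:9292/_bulk' -H 'Content-Type: application/x-ndjson' --data-binary $'{"index":{"_index":"logstash-task-521874-20170531","_type":"logs"}}\n{"@timestamp":"2017-05-31T00:00:00Z","message":"example"}\n'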
Do you have anything indexing into this index, which would cause it to be recreated? Does it have the same hash identifier before and after deleting it? Is there anything in the Elasticsearch logs about this index? What do you get if you run curl http://10.173.25.82:9292/_cat/nodes ?
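For example, the index UUID is visible via _cat/indices; if it changes after each delete, the index is being re-created rather than failing to delete (host and port taken from your commands above):
curl 'http://10.173.25.82:9292/_cat/indices/logstash-task-*?v&h=index,uuid,status,docs.count'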
[root@cc ~]# curl http://10.173.25.82:9292/_cat/nodes
10.173.25.82 64 96 77 1.23 1.33 1.39 mdi * es-01
10.44.13.39 59 95 46 1.26 1.94 2.17 di - es-02
There are no other exceptions. The screenshot above shows the operations were run back to back, and no other operations were performed on these indices.
Well, we have a program that cleans up old indices, and I found that it was able to clean them normally this morning.
So why does this happen?
If they are returning after a delete, then Logstash is recreating them because it is receiving data that it sees should be in that index.
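One way to confirm that is to look at the newest document in the re-created index. Assuming your Logstash output builds the index name with the usual %{+...} date pattern, the daily index name is derived from each event's @timestamp rather than from the current date, so a backlog of events with old timestamps will keep re-creating old indices. A quick check (host, port and index name taken from the commands above):
curl 'http://10.173.25.82:9292/logstash-task-521874-20170531/_search?size=1&sort=@timestamp:desc&pretty'
If new documents keep appearing with @timestamp values from 2017-05-31, something is still feeding that day's data through Logstash.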