Failed to delete index


(zhenjie) #1

When I delete an index on one node, the index comes back, as if it is synchronized from the other nodes:

curl -XDELETE http://192.168.31.25:9292/logstash-task-33091-20170531

(Christian Dahlqvist) #2

I am not sure I understand what you are asking. Did the delete of this index fail? If so, was there an error message or did it just time out?

Based on the name of your index it looks like you have a lot of indices, which can be very inefficient. How many indices and shards do you have in the cluster? How much data? What is the size of the cluster?


(zhenjie) #3

This screenshot is from the master node:

nodes: 2
indices: 278
shards: 278


(zhenjie) #4

Can you see the picture?


(Christian Dahlqvist) #5

Yes, that seems to be a reasonable number of indices. What error do you get? Is there anything in the Elasticsearch logs?


(zhenjie) #6

nothing...


[2017-06-06T20:20:00,570][INFO ][o.e.c.m.MetaDataCreateIndexService] [es-01] [logstash-task-521874-20170531] creating index, cause [auto(bulk api)], templates [logstash], shards [1]/[1], mappings [_default_]
[2017-06-06T20:20:00,786][INFO ][o.e.c.r.a.AllocationService] [es-01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[logstash-task-521874-20170531][0]] ...]).
[2017-06-06T20:20:01,415][INFO ][o.e.c.m.MetaDataMappingService] [es-01] [logstash-task-521874-20170531/ekwOc4OsRXucCaCjChod7g] create_mapping [logs]

(Christian Dahlqvist) #7

What was the response from your curl request?


(zhenjie) #8
[root@es-01 elasticsearch]# curl -v -XDELETE http://xxx.xxx.25.82:9292/logstash-task-521874-20170531
* About to connect() to xxx.xxx.25.82 port 9292 (#0)
*   Trying xxx.xxx.25.82...
* Connected to xxx.xxx.25.82 (xxx.xxx.25.82) port 9292 (#0)
> DELETE /logstash-task-521874-20170531 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: xxx.xxx.25.82:9292
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
< 
* Connection #0 to host xxx.xxx.25.82 left intact
{"acknowledged":true}

(Christian Dahlqvist) #9

That seems to have worked. What do you get if you run curl http://xxx.xxx.25.82:9292/_cat/indices | grep logstash-task-521874-20170531 ?


(zhenjie) #10
[root@es-01 elasticsearch]# curl -v -XDELETE http://10.173.25.82:9292/logstash-task-521874-20170531
* About to connect() to 10.173.25.82 port 9292 (#0)
*   Trying 10.173.25.82...
* Connected to 10.173.25.82 (10.173.25.82) port 9292 (#0)
> DELETE /logstash-task-521874-20170531 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.173.25.82:9292
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 21
< 
* Connection #0 to host 10.173.25.82 left intact
{"acknowledged":true}[root@es-01 elasticsearch]# 
[root@es-01 elasticsearch]#  curl http://10.173.25.82:9292/_cat/indices | grep logstash-task-521874-20170531        
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 25676  100 25676    0     0   106k      0 --:--:-- --:--:-- --:--:--  106k
green open logstash-task-521874-20170531 3FQrGE4qRjCVS-ZEuJUhow 1 1     223    0 133.1kb   66.4kb
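
Note that the UUID shown in this `_cat/indices` output (`3FQrGE4qRjCVS-ZEuJUhow`) is not the one from the earlier create log (`ekwOc4OsRXucCaCjChod7g`), which suggests the index was deleted and then recreated, rather than the delete failing. A small sketch of pulling the UUID out of a `_cat/indices` line (the sample line is copied from the output above; the UUID is the fourth whitespace-separated column):

```shell
# Extract the index UUID (4th column) from a _cat/indices line.
line='green open logstash-task-521874-20170531 3FQrGE4qRjCVS-ZEuJUhow 1 1     223    0 133.1kb   66.4kb'
uuid=$(echo "$line" | awk '{print $4}')
echo "$uuid"   # -> 3FQrGE4qRjCVS-ZEuJUhow
```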

(zhenjie) #11

In addition to the logstash indices, there are other indices in this cluster, and those other indices can be deleted normally.


(zhenjie) #12

This is the logstash template:

{
    "template": "logstash-*",
    "settings": {
      "index.number_of_shards" : 1,  
      "number_of_replicas" : 1,  
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "dynamic_templates": [
          {
            "message_field": {
              "path_match": "message",
              "mapping": {
                "norms": false,
                "type": "text"
              },
              "match_mapping_type": "string"
            }
          },
          {
            "string_fields": {
              "mapping": {
                "norms": false,
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              },
              "match_mapping_type": "string",
              "match": "*"
            }
          }
        ],
        "_all": {
          "norms": false,
          "enabled": true
        },
        "properties": {
          "@timestamp": {
            "include_in_all": false,
            "type": "date"
          },
          "geoip": {
            "dynamic": true,
            "properties": {
              "ip": {
                "type": "ip"
              },
              "latitude": {
                "type": "half_float"
              },
              "location": {
                "type": "geo_point"
              },
              "longitude": {
                "type": "half_float"
              }
            }
          },
          "@version": {
            "include_in_all": false,
            "type": "keyword"
          }
        }
      }
    },
    "aliases": {}
}
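
This template is consistent with the create-index log entry above (`templates [logstash], shards [1]/[1]`): the pattern `logstash-*` matches `logstash-task-521874-20170531`, so any bulk write to that name will auto-create the index with these settings. A minimal sketch of the pattern match (plain shell globbing, used here only to illustrate which template applies):

```shell
# The template pattern "logstash-*" matches the recreated index name,
# so a bulk write auto-creates it with the template's 1 shard / 1 replica.
name='logstash-task-521874-20170531'
case "$name" in
  logstash-*) echo "matches logstash template" ;;
  *)          echo "no template match" ;;
esac
# -> matches logstash template
```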

(Christian Dahlqvist) #13

Do you have anything indexing into this index, which would cause it to be recreated? Does it have the same hash identifier before and after deleting it? Is there anything in the Elasticsearch logs about this index? What do you get if you run curl http://10.173.25.82:9292/_cat/nodes ?
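
One way to answer the UUID question is to capture the index UUID before and after the delete and compare: a changed UUID means a brand-new index was created (e.g. by incoming bulk writes), not that the delete failed. A sketch under the assumptions of this thread (host, port, and index name taken from the posts above; the curl usage is shown in comments):

```shell
# Compare an index's UUID before and after a delete. A different UUID
# means the index was recreated, not undeleted.
same_uuid() {
  # $1, $2: UUIDs captured before and after the delete
  if [ "$1" = "$2" ]; then
    echo "same index (delete may have failed)"
  else
    echo "recreated (different UUID)"
  fi
}

# Against a live cluster this would look like:
#   before=$(curl -s "http://10.173.25.82:9292/_cat/indices/logstash-task-521874-20170531" | awk '{print $4}')
#   curl -s -XDELETE "http://10.173.25.82:9292/logstash-task-521874-20170531"
#   sleep 5
#   after=$(curl -s "http://10.173.25.82:9292/_cat/indices/logstash-task-521874-20170531" | awk '{print $4}')
#   same_uuid "$before" "$after"

# With the two UUIDs seen in this thread:
same_uuid "ekwOc4OsRXucCaCjChod7g" "3FQrGE4qRjCVS-ZEuJUhow"
# -> recreated (different UUID)
```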


(zhenjie) #14
[root@cc ~]# curl http://10.173.25.82:9292/_cat/nodes
10.173.25.82 64 96 77 1.23 1.33 1.39 mdi * es-01
10.44.13.39  59 95 46 1.26 1.94 2.17 di  - es-02

There are no other exceptions. The log above shows that these operations are continuous, and no other operations were performed on these indices.


(zhenjie) #15

Well, we have a program that cleans up old indices, and I found it was able to delete them normally this morning. But why did the deletes fail before?


(Mark Walkom) #16

If they are returning after a delete, then Logstash is recreating them because it is receiving data that it sees should be in that index.
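
To confirm this (or to prevent it), one option is to restrict automatic index creation so that a bulk write to a deleted index fails loudly instead of silently recreating it. This is a sketch, assuming Elasticsearch 5.x where `action.auto_create_index` is a node setting in elasticsearch.yml; the patterns are evaluated left to right:

```yaml
# elasticsearch.yml: deny auto-creation for logstash-task-* indices,
# allow it for everything else. A write to a deleted logstash-task-*
# index will then return an error instead of recreating the index.
action.auto_create_index: "-logstash-task-*,+*"
```

With this in place, the safer pattern is to stop sending data for an index (or wait until its time window has passed) before deleting it.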


(system) #17

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.