High disk watermark

I deleted indexes in Kibana the first time I got this error, and then my console showed the low disk watermark message. But with 800,000 docs on the index, I keep getting the high disk watermark warning! How can I fix this for the future, as I intend to ingest real-time logs?

[2019-11-15T11:44:59,440][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mehak-VirtualBox] low disk watermark [2gb] exceeded on [uszTm_0tR26zJa0KI9beBw][mehak-VirtualBox][/home/mehak/Documents/elasticsearch-7.4.0/data/nodes/0] free: 1gb[7.7%], replicas will not be assigned to this node
[2019-11-15T11:45:44,230][INFO ][o.e.m.j.JvmGcMonitorService] [mehak-VirtualBox] [gc][18074] overhead, spent [284ms] collecting in the last [1s]
[2019-11-15T11:45:54,239][INFO ][o.e.m.j.JvmGcMonitorService] [mehak-VirtualBox] [gc][18084] overhead, spent [432ms] collecting in the last [1s]
[2019-11-15T11:45:59,472][WARN ][o.e.c.r.a.DiskThresholdMonitor] [mehak-VirtualBox] high disk watermark [1gb] exceeded on [uszTm_0tR26zJa0KI9beBw][mehak-VirtualBox][/home/mehak/Documents/elasticsearch-7.4.0/data/nodes/0] free: 878.2mb[6.3%], shards will be relocated away from this node
[2019-11-15T11:45:59,472][INFO ][o.e.c.r.a.DiskThresholdMonitor] [mehak-VirtualBox] rerouting shards: [high disk watermark exceeded on one or more nodes]
[2019-11-15T11:46:59,517][WARN ][o.e.c.r.a.DiskThresholdMonitor] [mehak-VirtualBox] high disk watermark [1gb] exceeded on [uszTm_0tR26zJa0KI9beBw][mehak-VirtualBox][/home/mehak/Documents/elasticsearch-7.4.0/data/nodes/0] free: 799.3mb[5.8%]
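
For reference, the free disk space these messages are reacting to can be checked per node with the cat allocation API:

GET _cat/allocation?v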

Of course the obvious answer is to deploy more storage capacity.

There are some helpful tips in the Tune for disk usage docs.
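
If you mainly need the node to keep accepting writes while you free up space, the watermark thresholds can also be raised temporarily through the cluster settings API. The values below are only an illustration, not a recommendation:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}

Note that percentage values mean used disk space while byte values (like the 2gb/1gb in your logs) mean required free space, so use one style consistently. On a disk this small this only buys time; the real fix is still more capacity or fewer/smaller indices.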

Following that page, I ran this to remove the "_type" : "_doc" field:

PUT filebeat-7.4.0-2019.11.08-000001 
{
  "mappings": {
    "properties": {
      "_type": {
        "type": "text",
        "index": false
      }
    }
  }
}

but I got this error:


  "error": {
    "root_cause": [
      {
        "type": "resource_already_exists_exception",
        "reason": "index [filebeat-7.4.0-2019.11.08-000001/lMcCHhWuT9ecTfsI4OyGEA] already exists",
        "index_uuid": "lMcCHhWuT9ecTfsI4OyGEA",
        "index": "filebeat-7.4.0-2019.11.08-000001"
      }
    ],
    "type": "resource_already_exists_exception",
    "reason": "index [filebeat-7.4.0-2019.11.08-000001/lMcCHhWuT9ecTfsI4OyGEA] already exists",
    "index_uuid": "lMcCHhWuT9ecTfsI4OyGEA",
    "index": "filebeat-7.4.0-2019.11.08-000001"
  },
  "status": 400

It isn't possible to change the mapping of an existing field in an index.
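
The usual workaround is to create a new index with the settings and mappings you want and reindex the old data into it. Roughly like this; the -000002 target index name is made up for the example, and the replica setting is just one space-saving tweak (on a single node a replica shard can't be assigned anyway):

PUT filebeat-7.4.0-2019.11.08-000002
{
  "settings": {
    "number_of_replicas": 0
  }
}

POST _reindex
{
  "source": {
    "index": "filebeat-7.4.0-2019.11.08-000001"
  },
  "dest": {
    "index": "filebeat-7.4.0-2019.11.08-000002"
  }
}

Keep in mind that reindexing needs enough free disk to hold both copies until you delete the old index, which may be awkward on a node that is already over the high watermark.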

Oh, got it. So I should create a new index, and only there can I change these values?

yellow open   filebeat-7.4.0-2019.11.08-000001 lMcCHhWuT9ecTfsI4OyGEA   1   1    9076656            0    866.3mb

What can I do to fix the current index I have? Can I not delete 50,000 of the 9,076,656 docs on this index? Please help!
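
Individual documents can be removed with the delete by query API, but the disk space is only reclaimed once the underlying segments are merged, so deleting or rolling over whole indices is usually more effective. The time-range query below is purely an illustration:

POST filebeat-7.4.0-2019.11.08-000001/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": {
        "lt": "now-7d"
      }
    }
  }
}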
