How to properly delete an index?

Hi guys.
I was very happy playing with my filebeat / netflow platform until today.
Suddenly there was nothing to see, and I found the following message in the log file:

high disk watermark [90%] exceeded on [NX79WFORStGfAdCq26XLaw][ubuntu-elk][/var/lib/elasticsearch/nodes/0] free: 11.8gb[8.1%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete

And it was true ... disk capacity was at 90%.
After checking my indexes I saw 3 indexes, one created per day, about 50 GB each.
So I deleted them with:
DELETE /filebeat-7.9.0-2020.09.02-000003
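
For reference, a quick way to spot the biggest indices (using the sort and header parameters of the _cat API):

GET _cat/indices?v&s=store.size:desc&h=index,store.size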

After that I saw in the elasticsearch log file:

low disk watermark [85%] no longer exceeded on [NX79WFORStGfAdCq26XLaw][ubuntu-elk][/var/lib/elasticsearch/nodes/0] free: 111.3gb[76.2%]

So ... I thought it would work again.
I restarted filebeat and elasticsearch, but now I get this in kibana:

search_phase_execution_exception
all shards failed

It seems I did not remove the indexes properly and now the database is broken.
After that I looked for and removed the unassigned shards.
Now it is working again.

I'm using a single-node setup since I'm learning and this is not a production environment.
The filebeat/netflow module stores 50 GB per day ... so I have 3 days until my disk fills up again.

How can I prevent the indexes from growing until they fill my disk? Is there some way of "rotating" them, let's say keeping only the last two days of data?
I would like to avoid using a second node/cluster if possible.

What is the proper way to get rid of unused data?

Thanks again.
Leandro.

Hey Leandro,

sounds like you're looking for index lifecycle management (ILM).
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html

Here you can automate the four phases of the index lifecycle (hot, warm, cold and delete). You have a lot of options to choose from, for example: maximum index size, maximum number of documents, maximum age (in days/hours/etc.).
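
As a sketch (policy name and thresholds are just examples, adjust them to your setup), a policy that rolls over daily and deletes two days after rollover could look like:

PUT _ilm/policy/netflow-example
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "2d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

Note that min_age in the delete phase is measured from the rollover time, not from index creation.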


Hi @leostereo
Welcome to running a cluster!

So, one thing: after you exceed the flood-stage watermark (95% by default), elasticsearch protects itself by putting the indices into read-only mode.

After you free up some space you need to put the indices back into write mode.

Note this is not meant for production, but you can run this command after you delete some indices; it will put the indices back into write mode.

PUT /*/_settings
{
  "index": {
    "blocks.read_only_allow_delete" : false,
    "blocks.read_only": false
  }
}
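
To double-check that the blocks are gone and see how much disk each node is using, the standard get-settings and _cat endpoints should do:

GET _all/_settings/index.blocks.*

GET _cat/allocation?v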

Then, as @madduck suggested, you can use ILM and add a delete phase ... that, or you need to continue deleting by hand.

One other thing I do when I have a small POC cluster: I update the ILM policy to roll over each index at 10 GB, so you can clean up in smaller chunks.

PUT /_ilm/policy/metricbeat
{
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "10gb",
              "max_age" : "30d"
            }
          }
        }
      }
    }
}

Then you can figure out your delete phase
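
Once the policy is in place, you can watch what ILM is doing per index with the explain API:

GET filebeat-*/_ilm/explain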

You can do all this through the Kibana UI -> Stack Management -> Index Lifecycle Policies.


Ok ... I managed to create my policy.
I created the following for my filebeat indices:

{
  "filebeat" : {
    "version" : 2,
    "modified_date" : "2020-09-04T16:46:46.668Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "60gb",
              "max_age" : "2d"
            },
            "set_priority" : {
              "priority" : null
            }
          }
        }
      }
    }
  },

Ok, I think each index will grow up to 60 GB and data will be kept for only 2 days.
Let's see.
I think I'm beginning to like ELK and its community.
Thanks.


I think you are missing, or at least I cannot see, what the next phase is ... did you create a delete phase? Otherwise the data is just going to continue to grow / accumulate.

You were right ...
A delete phase was missing.
My disk filled up again.
I already modified the policy ... let's see what happens in two days.

  "filebeat" : {
    "version" : 3,
    "modified_date" : "2020-09-07T12:13:34.972Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "rollover" : {
              "max_size" : "60gb",
              "max_age" : "2d"
            },
            "set_priority" : {
              "priority" : null
            }
          }
        },
        "delete" : {
          "min_age" : "0d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  },

Thanks for your advice again.
Leandro.

@leostereo

I don't think you need the delete_searchable_snapshot action, since you did not create a searchable snapshot as part of the lifecycle ...

See example here
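
A plain delete phase without that flag would just be (illustrative, keeping the two-day retention discussed above):

"delete" : {
  "min_age" : "2d",
  "actions" : {
    "delete" : { }
  }
}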