Negative TTL records purging

I am running into an issue after a recent cluster crash, after which I had to restart nodes that contained around 170 GB of data.

The TTL for all records is set to 15 days through the index mapping. Now, weeks after restoring data from that crash, I notice that a lot of records with negative TTL exist, growing the data size to ~300 GB. I have a query that matches such records with event time < a specific date (15 days before the current date), and I see quite a high negative TTL on them.

Issue:

I need to delete these records myself, since TTL expiry is not kicking in and is non-deterministic on ES anyway. I used delete-by-query via -XDELETE, but on 2.3 it seems to have been removed from core, so I was trying to use the _delete_by_query plugin:

Command:

curl -XPOST 'http://:9200//_delete_by_query' -d '{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          { "range": { "eventtime": { "gte": 0, "lte": 1477621495000 } } }
        ]
      }
    }
  }
}'

Error:
{"error": {"root_cause": [{"type": "invalid_type_name_exception","reason": "Document mapping type name can't start with ''"}],"type": "invalid_type_name_exception","reason": "Document mapping type name can't start with ''"},"status": 400}

Could anybody please help with an easier way to purge such records with negative TTL?

What version are you on?

"version" : {
"number" : "2.3.2",
"lucene_version" : "5.5.0"
}
