Indices filling up drive


#1

Team,

logstash.log rotation is working fine, and ELK was working fine until the indices started filling the drive; after that I stopped getting anything in the Kibana GUI.
This is the message in elasticsearch.log:
[2015-11-07 15:20:22,643][WARN ][cluster.routing.allocation.decider] [Death's Head] high disk watermark [10%] exceeded on [vt5xzs7lSiuXEBsKcYoV6Q][Death's Head] free: 6gb[8%], shards will be relocated away from this node

Is there a way to rotate the indices (it may sound like a dumb question)? What is the recommended fix for this? Thanks, team.

Regards,

Kartik Vashishta


(Niraj Kumar) #2

You can use a tool named curator to delete indices that are 'n' days old.
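A minimal sketch of that suggestion, using the Curator 3.x CLI (the host, prefix, and 7-day retention window here are assumptions; adjust to taste):

```shell
# Delete logstash-* indices whose date stamp is older than 7 days.
# Note: --timestring must be an strftime pattern, not a literal date.
curator --host localhost delete indices --older-than 7 --time-unit days \
    --timestring '%Y.%m.%d' --prefix logstash-
```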


#3

[root@test3 elasticsearch]# curator delete indices --older-than 2 --time-unit days --timestring 2015.11.07
2015-11-07 17:08:07,338 INFO Job starting: delete indices
Traceback (most recent call last):
  File "/usr/bin/curator", line 11, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/curator/curator.py", line 5, in main
    cli( obj={ "filters": [] } )
  File "/usr/lib/python2.7/site-packages/click/core.py", line 700, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 680, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1027, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1027, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 873, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 508, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/decorators.py", line 16, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/curator/cli/index_selection.py", line 95, in indices
    working_list = apply_filter(working_list, **f)
  File "/usr/lib/python2.7/site-packages/curator/api/filter.py", line 114, in apply_filter
    p = re.compile(pattern)
  File "/usr/lib64/python2.7/re.py", line 190, in compile
    return _compile(pattern, flags)
  File "/usr/lib64/python2.7/re.py", line 242, in _compile
    raise error, v # invalid expression
sre_constants.error: bogus escape: '\2'
[root@test3 elasticsearch]#
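The failure is in how this Curator version turns `--timestring` into a regular expression: characters outside a %-token get backslash-escaped, so a literal date like 2015.11.07 produces the backreference '\2', which has no matching group. A sketch of the failure mode (not Curator's exact code):

```shell
# Compiling '\2' as a regex fails because there is no capture group 2;
# Python 2 reported this as "bogus escape", Python 3 as an invalid group reference.
python3 -c "import re; re.compile(r'\2')" \
    || echo "re.error, as in the traceback above"
# The fix: pass an strftime pattern instead, e.g. --timestring '%Y.%m.%d'
```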


#4

So, I did this:
curator delete indices --regex ^logstash-.*
and restarted Elasticsearch, but now I do not see anything in Kibana, even though the Elasticsearch log shows this:

[2015-11-07 17:16:38,318][INFO ][node ] [Crusher] started
[2015-11-07 17:16:38,752][INFO ][gateway ] [Crusher] recovered [2] indices into cluster_state
[2015-11-07 17:16:38,752][INFO ][cluster.service ] [Crusher] added {[logstash-test3.kartikv.com-818-11630][X15apYWdRlGRqgcJxOGjvA][test3.kartikv.com][inet[/192.168.1.51:9301]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[logstash-test3.kartikv.com-818-11630][X15apYWdRlGRqgcJxOGjvA][test3.kartikv.com][inet[/192.168.1.51:9301]]{data=false, client=true}])
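Worth noting: `--regex ^logstash-.*` matches every Logstash index, including the current day's, so Kibana likely has nothing left to display until new events are indexed. A quick way to check what survived (assuming Elasticsearch on localhost:9200):

```shell
# List remaining indices with doc counts and sizes
curl -s 'localhost:9200/_cat/indices?v'
```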


#5

A system reboot fixed it, but I'd like to be able to fix this without a reboot. This is what was in the Logstash logs:
{:timestamp=>"2015-11-07T17:23:52.769000-0500", :message=>["INFLIGHT_EVENTS_REPORT", "2015-11-07T17:23:52-05:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
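The empty "outputs" list in that INFLIGHT_EVENTS_REPORT suggests the Logstash elasticsearch output had stalled, so restarting just the two services should normally be enough; a sketch, assuming systemd service names:

```shell
# Adjust for your init system (e.g. 'service elasticsearch restart' on SysV)
sudo systemctl restart elasticsearch
sudo systemctl restart logstash
# Verify events are being indexed again (assumes ES on localhost:9200)
curl -s 'localhost:9200/_cat/indices/logstash-*?v'
```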


#6

So I had not read the Curator examples and documentation as carefully as I ought to have. I put this in a script and added it to cron:

#!/bin/bash
curator --host localhost delete indices --older-than 4 --timestring '%Y.%m.%d' --time-unit days --prefix logstash
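To schedule it, a crontab entry along these lines works (the 02:00 run time and script path are assumptions):

```shell
# /etc/cron.d/curator -- prune old logstash indices daily at 02:00
0 2 * * * root /usr/local/bin/curator-prune.sh
```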


(Niraj Kumar) #7

So, did this work for you?

