Disk space getting filled up

Hello,

I'm running out of disk space even though my indices show a much smaller amount. I have about 250GB of disk, of which I can see only around 8GB of indices, yet almost 150+GB of the 250GB is occupied by something. I believe it's the translogs. I want to clean this up before I run out of space. Please advise me on what needs to be done and the steps to take to overcome this issue.

Thanks,
Joshua.

How did you obtain this number? Could you share the command you used and its output?

You should be able to see the disk usage of each shard copy, including its translog, using the following:

GET /_cat/shards?bytes=b&h=index,shard,ip,store
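
If you're querying from a shell rather than the Kibana console, an equivalent curl call would look like this (localhost:9200 is an assumption; substitute your actual host and port):

curl 'localhost:9200/_cat/shards?bytes=b&h=index,shard,ip,store'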

Does this add up to a number that's close to your actual disk usage?

.monitoring-kibana-6-2018.06.24   0 127.0.0.1    2538335
.monitoring-es-6-2018.06.27       0 127.0.0.1  324728961
filebeat-6.2.4-2018.06.27         2 127.0.0.1      33830
filebeat-6.2.4-2018.06.27         2                     
filebeat-6.2.4-2018.06.27         3 127.0.0.1      71179
filebeat-6.2.4-2018.06.27         3                     
filebeat-6.2.4-2018.06.27         1 127.0.0.1      19069
filebeat-6.2.4-2018.06.27         1                     
filebeat-6.2.4-2018.06.27         4 127.0.0.1      36193
filebeat-6.2.4-2018.06.27         4                     
filebeat-6.2.4-2018.06.27         0 127.0.0.1      51323
filebeat-6.2.4-2018.06.27         0                     
.monitoring-logstash-6-2018.06.27 0 127.0.0.1    1708722
filebeat-6.2.4-2018.06.25         2 127.0.0.1     235279
filebeat-6.2.4-2018.06.25         2                     
filebeat-6.2.4-2018.06.25         3 127.0.0.1     111263
filebeat-6.2.4-2018.06.25         3                     
filebeat-6.2.4-2018.06.25         1 127.0.0.1     183353
filebeat-6.2.4-2018.06.25         1                     
filebeat-6.2.4-2018.06.25         4 127.0.0.1     186461
filebeat-6.2.4-2018.06.25         4                     
filebeat-6.2.4-2018.06.25         0 127.0.0.1     140997
filebeat-6.2.4-2018.06.25         0                     
filebeat-6.3.0-2018.06.26         2 127.0.0.1  149994151
filebeat-6.3.0-2018.06.26         2                     
filebeat-6.3.0-2018.06.26         1 127.0.0.1  149168343
filebeat-6.3.0-2018.06.26         1

@DavidTurner,

I see a bunch of indices with translogs. I need to get rid of the extra space the translogs are using, and I also need help avoiding this scenario so that I don't run out of disk space.
I know Curator deletes indices, but I was wondering whether it can delete translogs too. Can you help me out, please?

Thanks,
Joshua.

The output you shared totals 629207459 bytes, which is ~600MB, and this includes translogs. Can you explain how you obtained the "8GB" figure in your original post? Could you also describe in more detail why you think there are 150GB+ of translogs to clean up?
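
One quick cross-check is the cat allocation API, which shows the space used by shard data alongside the disk usage the operating system reports (the column list below is just a suggestion):

GET /_cat/allocation?v&h=node,disk.indices,disk.used,disk.avail,disk.percent

If disk.used is much larger than disk.indices, the space is being consumed by something other than Elasticsearch data.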

Translogs are an essential part of an Elasticsearch index; deleting an index deletes its translog.
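
If you want to see how much space the translogs themselves are using, the index stats API reports it per index, for example:

GET /_stats/translog?human

(The ?human parameter is optional; it just makes the byte counts easier to read.)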

@DavidTurner ,

Thanks for the response,

As you can see in my screenshot, it shows 5.4GB of indices. However, I initially had at least 170GB of free space, and now it's down to about 35%. I'm the only one using this disk, and I have been encountering the same issue for a few days. Earlier I thought it was the indices' volume that was eating into my disk, but eventually I realised there must be some other reason more space is being occupied.
What could be the reason, and how do I resolve this issue?

Thanks,
Joshua

Could you look at the files on disk and determine the paths of the things that are taking up all the space? I do not think it's Elasticsearch data: the "disk available" reported here is, I think, whatever the operating system reports.

@DavidTurner ,

I can say that, as soon as I stop the Logstash/Elasticsearch services, there are no signs of the disk filling up. But whenever events are ingested into Logstash and indices are created, it happens again (i.e. the indices' size and the filesystem usage don't match). I have searched quite a number of blogs and articles and could not find an accurate solution for this.

You're in the right place to find a solution, but we cannot start to think about a solution before we've identified what the problem is, and for that you are going to need to share a lot more information than you're currently doing.

I asked for this:

Please could you do this? For instance, if you're on a unix-like system (Mac OS or Linux) then please run du /path/to/mount/point/of/disk (replacing the path with the correct path, obviously) and share the output via https://gist.github.com or similar?
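
A sorted summary is usually easier to read than raw du output; for example, this shows the twenty largest directories up to two levels deep (GNU coreutils assumed, and the path is a placeholder as above):

du -xh --max-depth=2 /path/to/mount/point/of/disk | sort -h | tail -n 20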

@DavidTurner,

I identified that logstash-stdout.log is hogging the space, due to repeated errors being written to it.
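
A quick way to reclaim that space is to truncate the file in place, and then fix the underlying errors or set up log rotation so it doesn't grow back (the path below is an assumption; adjust it to wherever the file actually lives):

truncate -s 0 /var/log/logstash/logstash-stdout.log   # path is an assumption; adjust to your install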

I appreciate your help.

Thanks,
Joshua.


Good work - that sounds like it'd explain what you were seeing :slight_smile:
