Is the hsperfdata file critical?

Hi,

Recently I started getting a "/tmp directory full" error, although /tmp is not a separate partition.

root@ubuntu:~# df -h
Filesystem                      Size  Used Avail Use% Mounted on
udev                            7.9G  4.0K  7.9G   1% /dev
tmpfs                           1.6G  700K  1.6G   1% /run
/dev/mapper/ubuntu--vg-root   38G   22G   14G  62% /
none                            4.0K     0  4.0K   0% /sys/fs/cgroup
none                            5.0M     0  5.0M   0% /run/lock
none                            7.9G     0  7.9G   0% /run/shm
none                            100M     0  100M   0% /run/user
overflow                        1.0M  1.0M     0 100% /tmp
/dev/sda1                       236M   44M  180M  20% /boot

There were some entries named "hsperfdata_elasticsearch", "hsperfdata_root" and "hsperfdata_logstash". I checked their sizes and they were tiny; most showed as zero.

root@ubuntu:~# du -sh /tmp/*
32K	/tmp/hsperfdata_elasticsearch
0	/tmp/hsperfdata_logstash
0	/tmp/hsperfdata_root
0	/tmp/jna--1985354563
884K	/tmp/mkinitramfs_jf2yUF
0	/tmp/mkinitramfs-OL_uoE6Jd

I did a graceful system reboot and the issue got fixed.

I know that hsperfdata files are generated by Java, but are they critical for ELK stack operations? And why would they cause /tmp to go into the overflow state shown in the df output?

Hey Doremon,

Were you able to find any answers on this? I am facing the same issue: because of it, all queries on Elasticsearch are failing, and I had to restart the elasticsearch service to fix it.

I want to understand: if I set a cron job to clean these files every couple of days, would that have any ill impact on the cluster?
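
Something like this is what I have in mind for the crontab (the schedule, retention and path are just an example, untested):

# every night at 03:00, remove hsperfdata entries under /tmp untouched for 2+ days
0 3 * * * find /tmp -maxdepth 1 -name 'hsperfdata_*' -mtime +2 -exec rm -rf {} +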

Hey,

I am confused: how can df show /tmp if it is not a separate partition? Are you sure you did not add /tmp as a ramdisk? If so, 1 megabyte is really small. I just checked on an old Ubuntu LTS installation and that one looks fine.
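
You can check what is configured versus what is actually mounted with something like:

grep /tmp /etc/fstab     # is /tmp configured as its own mount?
mount | grep /tmp        # what is actually mounted at /tmp right now?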

Looking at the /tmp contents, a file size of 32 KB should not really be a problem compared to the initramfs size of the other file, but let's leave that aside.

The hsperfdata file is regularly updated by the JVM and is needed for tools like jstat to do some analysis. A couple of workarounds:

  • Change the java.io.tmpdir path via a system property and point it at another directory that does not have space issues (see the sketch after this list)
  • Disable gathering those stats via the JVM option -XX:-UsePerfData
  • There is another option mentioned in this lengthy but awesome blog post about how this feature turned out to be a performance issue; take the time to read it, it is really interesting
  • Use a bigger /tmp dir, as there might be a fair share of other applications trying to write into it and failing, which might lead to processes exiting; check your logfiles
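
A minimal sketch of the first two workarounds for Elasticsearch, assuming a version that honors the ES_JAVA_OPTS environment variable (/var/tmp/elasticsearch is just an example path; Logstash has an equivalent LS_JAVA_OPTS):

# create a tmp dir on a filesystem with enough space, owned by the service user
mkdir -p /var/tmp/elasticsearch
chown elasticsearch:elasticsearch /var/tmp/elasticsearch

# either point the JVM at it...
export ES_JAVA_OPTS="-Djava.io.tmpdir=/var/tmp/elasticsearch"
# ...or disable the perf data feature entirely
export ES_JAVA_OPTS="-XX:-UsePerfData"

Keep in mind that with -XX:-UsePerfData, tools like jps and jstat will no longer see the process.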

--Alex

As I mentioned, I did a graceful restart of the system and the issue went away. Also, df was no longer showing /tmp as a partition.