Elasticsearch creating too many directories in /tmp

I am running into an issue since upgrading Elasticsearch from 6.7.2 to 7.5.2: Elasticsearch is now creating thousands of files in /tmp on the host machines.
The filenames match elasticsearch-* and controller_log_*.
The Elasticsearch instances are not being restarted thousands of times, and I have been unable to determine what is creating these files.

    prw-------  1 elasticsearch    elasticsearch          0 May 26 19:03 controller_log_99392
    prw-------  1 elasticsearch    elasticsearch          0 May 26 17:18 controller_log_99520
    prw-------  1 elasticsearch    elasticsearch          0 May 26 19:03 controller_log_99621
    prw-------  1 elasticsearch    elasticsearch          0 May 26 18:14 controller_log_9965
    prw-------  1 elasticsearch    elasticsearch          0 May 26 17:18 controller_log_99761
    prw-------  1 elasticsearch    elasticsearch          0 May 27 09:20 controller_log_9987
    prw-------  1 elasticsearch    elasticsearch          0 May 26 17:18 controller_log_99958

and

    drwx------  2 elasticsearch    elasticsearch       4096 May 27 04:13 elasticsearch-9900697386538499675
    drwx------  2 elasticsearch    elasticsearch       4096 May 26 15:59 elasticsearch-9906081905532042965
    drwx------  2 elasticsearch    elasticsearch       4096 May 26 20:32 elasticsearch-9934128230626817962
    drwx------  2 elasticsearch    elasticsearch       4096 May 26 19:38 elasticsearch-9959566056361653972
    drwx------  2 elasticsearch    elasticsearch       4096 May 27 04:42 elasticsearch-9967643642543549351
    drwx------  2 elasticsearch    elasticsearch       4096 May 26 23:50 elasticsearch-9987506565151338674
    drwx------  2 elasticsearch    elasticsearch       4096 May 26 16:18 elasticsearch-999953937332847863
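
To figure out what is actually creating these, I am considering putting an audit watch on /tmp. This is only a sketch (it assumes auditd is installed on the hosts, and the key name es-tmp-watch is just a label I picked):

    # Watch /tmp for writes and attribute changes, tagged with a key for easy searching
    auditctl -w /tmp -p wa -k es-tmp-watch

    # Once new entries appear, report which executable created them
    ausearch -k es-tmp-watch --interpret | grep -E 'controller_log_|elasticsearch-'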

Here are the file counts from one node whose /tmp was cleared out completely yesterday:

    ls -lah | grep controller_log | wc -l
    1103

    ls -lah | grep elasticsearch- | wc -l
    1104
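
For completeness, the same counts can be taken with find, and a stopgap cleanup of stale entries would look roughly like the sketch below (the one-day age cutoff is my own assumption, and the temp directory of a running node should not be touched, since the JVM may still be using it):

    # Count the leftovers without parsing ls output
    find /tmp -maxdepth 1 -name 'controller_log_*' -type p | wc -l
    find /tmp -maxdepth 1 -name 'elasticsearch-*'  -type d | wc -l

    # Stopgap: remove entries owned by elasticsearch that are more than a day old
    find /tmp -maxdepth 1 -user elasticsearch -name 'controller_log_*' -type p -mmin +1440 -delete
    find /tmp -maxdepth 1 -user elasticsearch -name 'elasticsearch-*'  -type d -mmin +1440 -exec rm -rf {} +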

Output from ps -ef

    ps -ef | grep elastic
    496      135301      1 80 Mar17 ?        56-21:38:51 /usr/java/openjdk-11_28/jdk-11/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms30500m -Xmx30500m -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/es-data0/es_gc_%t_%p.log:utctime,pid,tags:filecount=5,filesize=64m -Djna.tmpdir=/home/elasticsearch/jnatmp -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1HeapRegionSize=16m -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=5 -XX:ConcGCThreads=5 -server -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.type=unpooled -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -verbose:gc -Djdk.tls.ephemeralDHKeySize=2048 -XX:MaxDirectMemorySize=15997075456 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch/es-data0 -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch-es-data0/elasticsearch-es-data0.pid -d
    496      135733      1 92 Mar17 ?        65-16:46:49 /usr/java/openjdk-11_28/jdk-11/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.numDirectArenas=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.locale.providers=COMPAT -Xms30500m -Xmx30500m -Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/es-data1/es_gc_%t_%p.log:utctime,pid,tags:filecount=5,filesize=64m -Djna.tmpdir=/home/elasticsearch/jnatmp -XX:+UseG1GC -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1HeapRegionSize=16m -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=5 -XX:ConcGCThreads=5 -server -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dio.netty.allocator.type=unpooled -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -XX:+HeapDumpOnOutOfMemoryError -verbose:gc -Djdk.tls.ephemeralDHKeySize=2048 -XX:MaxDirectMemorySize=15997075456 -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch/es-data1 -Des.distribution.flavor=default -Des.distribution.type=rpm -Des.bundled_jdk=true -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch-es-data1/elasticsearch-es-data1.pid -d
    496      136037 135301  0 Mar17 ?        00:00:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
    496      136701 135733  0 Mar17 ?        00:00:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
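
The two controller processes are children of the two Elasticsearch JVMs (PPIDs 135301 and 135733 above). If it helps, I can check whether anything still holds the leftover pipes open; a quick check would be something like:

    # Show any processes that still have the leftover pipes open (may return nothing)
    lsof /tmp/controller_log_* 2>/dev/null

    # Map the controller processes back to their parent Elasticsearch JVMs
    ps -o pid,ppid,cmd -p 136037,136701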

X-Pack machine learning is supposed to be disabled on these nodes, yet the controller processes above are still running:

    xpack.ml.enabled: false
    xpack.monitoring.enabled: false
    xpack.watcher.enabled: false
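
If the eventual answer is simply to point the temp files somewhere other than /tmp, my understanding is that the startup scripts honor ES_TMPDIR. This is only a sketch of what I would try on these RPM installs (the /var/lib/elasticsearch/tmp path is my own choice, and I have not confirmed it stops the files from piling up):

    # /etc/sysconfig/elasticsearch -- move the private temp directory off /tmp
    ES_TMPDIR=/var/lib/elasticsearch/tmp

    # Create the directory with the right ownership before restarting the nodes
    mkdir -p /var/lib/elasticsearch/tmp
    chown elasticsearch:elasticsearch /var/lib/elasticsearch/tmp
    chmod 700 /var/lib/elasticsearch/tmp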

Is it causing an issue?

Yes, it is causing an issue: it has used up the inodes on the host filesystem.

This is absolutely causing an issue, and I need to know how to stop Elasticsearch from creating thousands of directories and files per day in /tmp. It makes no sense why Elasticsearch would need to recreate the same controller_log_* and elasticsearch-* entries over and over.

This should never be considered appropriate behavior for a service.

Is anyone else able to chime in on this?
The only response so far has been to ask whether this is causing a problem, which it absolutely is.
