Filebeat logs stored in /tmp are causing pod eviction

Hi all,
I am seeing an issue in my environment, which uses Filebeat for logging and monitoring, where files consuming large amounts of storage are being created. I presume these are log files created by Filebeat, and they are being written to my pod's /tmp directory. I am running OpenShift 4.11.9 with Kubernetes 1.24.
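
For context on the eviction itself: my assumption is that the kubelet evicts the pod because files written to /tmp count against the container's ephemeral-storage usage. The snippet below is only an illustrative sketch (the pod name, image, and limit value are made up, not taken from my actual spec) of the kind of limit that would trigger eviction once these logs grow large enough:

# Illustrative only: a container with an ephemeral-storage limit.
# Files written to the container filesystem (including /tmp, unless it is
# a mounted volume) count toward this limit, and exceeding it gets the
# pod evicted by the kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: monitoring-example                  # hypothetical name
spec:
  containers:
    - name: monitoring
      image: example.com/monitoring:latest  # placeholder image
      resources:
        limits:
          ephemeral-storage: "1Gi"           # made-up value for illustration

Below is the listing of /tmp from inside one of the affected pods: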

total 680892
-rw-r--r--. 1 1000680000 1000680000 45518518 Mar  7 01:35 monitoring-stdout---supervisor-jti9vqc6.log
-rw-r-----. 1 1000680000 1000680000  5803121 Mar  7 01:35 pch4543779432628550477.history.gz
-rw-r-----. 1 1000680000 1000680000 54381482 Mar  7 01:35 pch4543779432628550477.history
-rw-r-----. 1 1000680000 1000680000  2185531 Mar  7 01:35 pch6805951925590054337.history
drwxrwsrwx. 4 root       1000680000     4096 Mar  7 01:28 .
-rw-r--r--. 1 1000680000 1000680000 52428996 Mar  7 01:28 monitoring-stdout---supervisor-jti9vqc6.log.1
-rw-r-----. 1 1000680000 1000680000    72101 Mar  7 01:26 pch6805951925590054337.history.gz
-rw-r--r--. 1 1000680000 1000680000 52428802 Mar  7 01:20 monitoring-stdout---supervisor-jti9vqc6.log.2
-rw-r--r--. 1 1000680000 1000680000 52428984 Mar  7 01:12 monitoring-stdout---supervisor-jti9vqc6.log.3
-rw-r--r--. 1 1000680000 1000680000 52428907 Mar  7 01:04 monitoring-stdout---supervisor-jti9vqc6.log.4
-rw-r--r--. 1 1000680000 1000680000 52429189 Mar  7 00:55 monitoring-stdout---supervisor-jti9vqc6.log.5
-rw-r--r--. 1 1000680000 1000680000 52428807 Mar  7 00:47 monitoring-stdout---supervisor-jti9vqc6.log.6
-rw-r--r--. 1 1000680000 1000680000 52428867 Mar  7 00:39 monitoring-stdout---supervisor-jti9vqc6.log.7
-rw-r--r--. 1 1000680000 1000680000 52429055 Mar  7 00:31 monitoring-stdout---supervisor-jti9vqc6.log.8
-rw-r--r--. 1 1000680000 1000680000 52428920 Mar  7 00:23 monitoring-stdout---supervisor-jti9vqc6.log.9
-rw-r--r--. 1 1000680000 1000680000 52428974 Mar  7 00:15 monitoring-stdout---supervisor-jti9vqc6.log.10
-rw-r-----. 1 1000680000 1000680000   173586 Mar  6 23:41 pch783860385935287176.history.gz
-rw-r-----. 1 1000680000 1000680000 11674655 Mar  6 23:41 pch7390231010312236403.history.gz
drwxr-s---. 2 1000680000 1000680000     4096 Mar  6 19:41 axis2-tmp-597419792942874660.tmp
-rw-r-----. 1 1000680000 1000680000        0 Mar  6 19:41 axis2-tmp-597419792942874660.tmp.lck
drwxrwxrwt. 4 1000680000 root             64 Mar  6 19:40 .com_ibm_tools_attach
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-syslog-stderr---supervisor-j80_3v5j.log
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-syslog-stdout---supervisor-hq39mt_v.log
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-was-stderr---supervisor-en73_cgc.log
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-was-stdout---supervisor-v0comu9p.log
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-cpe-stderr---supervisor-v8837rxx.log
-rw-------. 1 1000680000 1000680000        0 Mar  6 19:39 filebeat-cpe-stdout---supervisor-_0l469zf.log
-rw-r--r--. 1 1000680000 1000680000        3 Mar  6 19:39 supervisord.pid
srwx------. 1 1000680000 1000680000        0 Mar  6 19:39 supervisor.sock
dr-xr-xr-x. 1 root       root             28 Mar  6 19:39 ..

As you can see, these are the files being created, and it is the files matching monitoring-*.log that I am concerned about.
I was wondering whether there are any solutions for the following points (a rough sketch of the settings I have been looking at follows the list):

  1. Reduce the size of each log file
  2. Change the directory from /tmp to some other directory
  3. Reduce how long it takes for these files to rotate and overwrite one another
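
If these monitoring-*.log files are indeed Filebeat's own log output, I assume points 1 to 3 would map onto the logging.files section of the Filebeat configuration. This is only a sketch of the settings I have been reading about, with illustrative values rather than anything from my current config:

# Sketch only: Filebeat's own log-file settings (values are illustrative).
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat     # point 2: write the logs somewhere other than /tmp
  name: filebeat              # base name of the log file
  rotateeverybytes: 10485760  # point 1: rotate at ~10 MB instead of the ~50 MB seen above
  keepfiles: 3                # point 3: keep fewer rotated files, so old content is dropped sooner
  permissions: 0640

I am not certain these settings apply to the monitoring-* files above (their names mention supervisor, so Filebeat may not be writing them directly), so corrections on that point are welcome too.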

Any thoughts on this would be greatly appreciated.