I have a Docker container with volume mounts for /var/log/elasticsearch/ and /var/lib/elasticsearch/ so that Elasticsearch writes to disk, because the container seems to run out of space after several hours of uptime.
When I check the volume-mounted directories on the host, no data has been written to them. I've tried changing their ownership to 1000:1000. Should I be mounting a different directory? Do I need to enable anything in the config file?
Right, I mount my directories to /var/log/elasticsearch/ and /var/lib/elasticsearch/, which are the defaults, so those are the paths Elasticsearch should be writing to inside the container.
I don't think those are the defaults, no, but it does depend on exactly how you're starting Elasticsearch (what image you're using, etc.). It's going to be simplest to set path.data explicitly.
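For example, assuming the official image and the container-side paths from the question (both are assumptions; adjust to your setup), an elasticsearch.yml along these lines pins the locations explicitly:

```yaml
# elasticsearch.yml — container-side paths are assumptions;
# they must match the container side of your -v mounts
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
```

The host directories then need to be mounted at exactly those paths (e.g. `-v /host/esdata:/var/lib/elasticsearch`) and writable by the container's elasticsearch user (uid 1000 in the official image, which matches the 1000:1000 ownership change you tried).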
So this worked for path.data, but setting path.logs doesn't work: inside the container gc.log is being written to, yet the writes aren't reflected in the volume-mounted directory on my host. Is it possible to write logs to disk from Docker?
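One note on gc.log specifically: the GC log location is set by JVM flags in jvm.options, not by path.logs, so redirecting it to the mounted directory takes a jvm.options override. A hypothetical line (the exact default flags vary by Elasticsearch/JDK version, and the target path here is an assumption):

```
## jvm.options — hypothetical override using Java 9+ unified logging;
## /var/log/elasticsearch/ is assumed to be the mounted directory
-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utf8,level,pid,tags:filecount=32,filesize=64m
```

The regular server logs, by contrast, do follow path.logs, so if those also aren't appearing on the host, the mount itself is likely pointing at a different container path than the one Elasticsearch is writing to.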