Hi,
I have a question regarding memory sizing for an Elasticsearch Docker container. If a node requires 8GB of heap, it is normal to provision 16GB of RAM so the remainder is available for filesystem caching. When running Elasticsearch in Docker and mounting index volumes from the host, is the filesystem cache maintained by the container or by the host? The answer will dictate how much memory I should allocate to the container...
If you set a memory limit on the Docker container, this also limits its access to the host's filesystem cache (see also a related comment on GitHub for details). So we'd advise against setting a memory limit on the container.
The container does not have its own kernel; it shares the host's. Thus, the cache used here is the host's filesystem cache (and it is the only filesystem cache in play).
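For illustration only, here is a minimal sketch of what that setup might look like. The heap size, paths, ports, and image tag are assumptions chosen for the example, not something prescribed in this thread; the point is that heap is capped via the JVM options while no `--memory` limit is placed on the container, so reads against the bind-mounted index data are served from the host's page cache.

```
# Hypothetical example: 8 GB heap, no container memory limit.
# The host's filesystem cache serves reads on the mounted index data.
docker run -d --name elasticsearch \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  -e discovery.type=single-node \
  -v /data/elasticsearch:/usr/share/elasticsearch/data \
  -p 9200:9200 \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0
```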
Ok, I'm not setting any memory limits, so that's fine. What happens if the index volume lives directly in the container rather than being mounted from the host?
That is not an architecture we recommend, because it means losing the data when the container is destroyed (as would be the case when upgrading to a new version). For non-ephemeral data, the data directory should be bind-mounted into the container.
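A minimal sketch of the bind-mount approach, assuming hypothetical host paths and image tags: the data directory stays on the host, so the container can be removed and replaced (e.g. for an upgrade) without losing the indices.

```
# Bind-mount a host directory for index data so it survives container removal.
# The host directory must be writable by the elasticsearch user inside the
# official image (uid 1000).
docker run -d --name elasticsearch \
  -v /srv/es-data:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0

# Upgrading: remove the container, keep the data on the host, start a new one.
docker rm -f elasticsearch
docker run -d --name elasticsearch \
  -v /srv/es-data:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.9
```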