Why does my Elasticsearch request so much cache space at startup?

Hi, there.

I'm configuring the Elastic Stack with 3 nodes using VirtualBox.
Each VM has 5GB of memory, 4 CPUs, and 30GB of disk space.
After I finished creating the cluster, at some point I started hitting a fatal exception every time I tried to restart Elasticsearch, shown below.
I'm using the default jvm.options file, so the heap should be 4GB, and I don't have any manual cache-size configuration in elasticsearch.yml. Could someone suggest why Elasticsearch won't start?

  1. Version of Elasticsearch: 8.14.1

  2. Disk usage
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             2.2G     0  2.2G   0% /dev
    tmpfs                2.2G     0  2.2G   0% /dev/shm
    tmpfs                2.2G  8.5M  2.2G   1% /run
    tmpfs                2.2G     0  2.2G   0% /sys/fs/cgroup
    /dev/mapper/rl-root   27G  4.8G   23G  18% /
    /dev/sda1           1014M  199M  816M  20% /boot
    tmpfs                446M     0  446M   0% /run/user/1000

  3. Disk usage under elasticsearch directory
    51M ./lib
    76K ./config
    3.3M ./bin
    283M ./jdk
    759M ./modules
    0 ./plugins
    220K ./logs
    1.9M ./data
    1.1G .

  4. Error message

[2024-07-01T08:46:56,569][ERROR][o.e.b.Elasticsearch      ] [node1] fatal exception while booting Elasticsearch
java.io.UncheckedIOException: java.io.IOException: Not enough free space [23789723648] for cache file of size [26021462016] in path [/home/elastic/elastic/data]
	at org.elasticsearch.blobcache.shared.SharedBlobCacheService.<init>(SharedBlobCacheService.java:377) ~[?:?]
	at org.elasticsearch.blobcache.shared.SharedBlobCacheService.<init>(SharedBlobCacheService.java:336) ~[?:?]
	at org.elasticsearch.xpack.searchablesnapshots.SearchableSnapshots.createComponents(SearchableSnapshots.java:335) ~[?:?]
	at org.elasticsearch.node.NodeConstruction.lambda$construct$13(NodeConstruction.java:816) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.plugins.PluginsService.lambda$flatMap$1(PluginsService.java:253) ~[elasticsearch-8.14.1.jar:?]
	at java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:288) ~[?:?]
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:212) ~[?:?]
	at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) ~[?:?]
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:556) ~[?:?]
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:546) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:622) ~[?:?]
	at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:291) ~[?:?]
	at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:631) ~[?:?]
	at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:637) ~[?:?]
	at java.util.stream.ReferencePipeline.toList(ReferencePipeline.java:642) ~[?:?]
	at org.elasticsearch.node.NodeConstruction.construct(NodeConstruction.java:816) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.node.NodeConstruction.prepareConstruction(NodeConstruction.java:266) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.node.Node.<init>(Node.java:192) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:240) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:240) ~[elasticsearch-8.14.1.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:75) ~[elasticsearch-8.14.1.jar:?]
Caused by: java.io.IOException: Not enough free space [23789723648] for cache file of size [26021462016] in path [/home/elastic/elastic/data]
	at org.elasticsearch.blobcache.shared.SharedBytes.findCacheSnapshotCacheFilePath(SharedBytes.java:138) ~[?:?]
	at org.elasticsearch.blobcache.shared.SharedBytes.<init>(SharedBytes.java:80) ~[?:?]
	at org.elasticsearch.blobcache.shared.SharedBlobCacheService.<init>(SharedBlobCacheService.java:374) ~[?:?]
	... 20 more
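The two byte counts in the error line up with the df output above: the node tried to reserve a cache file larger than the free space left on the 27G root filesystem. A quick sanity check of the arithmetic (pure shell, using the values from the error message):

```shell
# Convert the byte counts from the error message to GiB
# to see the shortfall that aborts the boot.
free_bytes=23789723648      # "Not enough free space [...]"
cache_bytes=26021462016     # "for cache file of size [...]"
gib=1073741824
echo "free:  $((free_bytes / gib)) GiB"    # free:  22 GiB
echo "cache: $((cache_bytes / gib)) GiB"   # cache: 24 GiB
```

So Elasticsearch wanted roughly 24 GiB of cache with only about 22 GiB free, and refused to start.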
  5. elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ryu-elastic
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node1
node.roles: [ master, data_frozen ]
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: elastic1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["elastic1", "elastic2","elastic3"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 30-06-2024 13:28:07
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elastic1"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: elastic1

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
transport.host: elastic1

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

I just solved this problem on my own by adding one setting to elasticsearch.yml.
The cause is the data_frozen role on the node: on dedicated frozen-tier nodes, the searchable-snapshot shared cache defaults to 90% of the data path's total disk, which was more than the free space left on my 27G root filesystem. There is probably more to configure for a real deployment, but capping the cache explicitly was enough to test a cluster for me:

xpack.searchable.snapshot.shared_cache.size: 30%
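
For reference, a sketch of how the relevant settings fit together on a node like this (the values are illustrative for a small test VM, not recommendations; the max_headroom setting is the 8.x companion to the size setting):

```yaml
# Sketch for a small frozen-tier test node, not the exact config above.
node.roles: [ master, data_frozen ]

# Size of the searchable-snapshot shared cache, as a percentage of the
# data path's total disk; an absolute value such as 10GB also works:
xpack.searchable.snapshot.shared_cache.size: 30%

# When the size is a percentage, this caps the absolute headroom kept free:
xpack.searchable.snapshot.shared_cache.size.max_headroom: 5GB
```

Without either setting, a dedicated frozen node falls back to the 90% default, which is what overran the disk here.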

That should do it for a test setup. Note that searchable snapshots are a licensed Enterprise feature, so if you rely on them in production it's probably best to open a support case as well.