Data loss at random indices in a Docker-based single-node Elasticsearch

I am running Elasticsearch v8.13.2 in Docker as a single node and have been losing documents from random indices. Earlier this would resolve itself after I increased the RAM available to the container, but at this point that no longer helps.
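For reference, a quick way to spot the loss is to compare per-index document counts over time (assuming security stays disabled so plain HTTP on port 9200 works; my-index below is a placeholder name):

# List indices with their current document counts and on-disk size;
# rerun periodically and diff the output to see which indices shrank.
curl -s 'http://localhost:9200/_cat/indices?v&h=index,docs.count,docs.deleted,store.size&s=index'

# Count documents in one specific index.
curl -s 'http://localhost:9200/my-index/_count?pretty'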

This is the docker run command I am using:

docker run -d --restart unless-stopped \
  --name elasticsearch \
  --net elastic \
  -p 9200:9200 \
  -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -e "xpack.security.audit.enabled=true" \
  -e "xpack.security.audit.logfile.events.emit_request_body=true" \
  -e "logger.org.elasticsearch.transport=TRACE" \
  -m 2GB \
  -v elasticsearch-data:/usr/share/elasticsearch/data \
  -v /home/elasticsearch/backup:/home/elasticsearch/backup \
  -v /var/log/elasticsearch:/usr/share/elasticsearch/logs \
  --log-opt max-size=200m \
  --log-opt max-file=5 \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.2
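The container is capped at 2 GB (-m 2GB), while the log below reports ml.max_jvm_size=11811160064 (about 11 GB), so the JVM heap does not seem to be constrained by the container limit. A rough way to compare the heap the node actually picked with what Docker enforces (the heap is normally pinned via the ES_JAVA_OPTS environment variable, which I am not setting):

# Heap and RAM as reported by the node (heap.max should fit inside the container limit).
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent,ram.max,ram.percent'

# Memory limit and current usage enforced by Docker on the container.
docker stats --no-stream elasticsearch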

docker logs shows this type of warning:

{"@timestamp":"2024-10-23T13:10:45.030Z", "log.level": "WARN", "message":"Received response for a request that has timed out, sent [1.1m/69846ms] ago, timed out [54.8s/54836ms] ago, action [indices:monitor/stats[n]], node [{*****}{kn5WbDj1SNmnnzaEdYtAQA}{*****}{***}{172.18.0.2}{172.18.0.2:9300}{***}{8.13.2}{7000099-8503000}{ml.machine_memory=23622320128, ml.allocated_processors=32, ml.allocated_processors_double=32.0, ml.max_jvm_size=11811160064, ml.config_version=12.0.0, xpack.installed=true, transform.config_version=10.0.0}], id [105027]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[e6f71666ef1c][management][T#4]","log.logger":"org.elasticsearch.transport.TransportService","elasticsearch.cluster.uuid":"rFFNxXCdSuWNql4Ue33KCQ","elasticsearch.node.id":"kn5WbDj1SNmnnzaEdYtAQA","elasticsearch.node.name":"e6f71666ef1c","elasticsearch.cluster.name":"docker-cluster"}