High load at midnight caused by ES docker containers while no usage occurs

We are running a somewhat atypical setup (not your usual production Elasticsearch usage) of around 40 very small (single-node) Elasticsearch Docker instances (version 7.17.3) per server.
Most instances contain very little data; even the biggest ones are only a few hundred MB on disk.

Every night at midnight (local time) we experience extremely high load caused by the ES Docker containers. For about 20 seconds the CPUs are saturated and RAM is maxed out, then for about 10 seconds there is heavy disk-write activity.
The load has occasionally been high enough to freeze our (reasonably sized) server for a few minutes. The only workaround we have found is to spread our applications across more servers, but those servers are otherwise more or less idle.

Are there any daily cleanup jobs in Elasticsearch (similar to Postgres autovacuum etc.) which are triggered by default at midnight?
If yes: is it possible to configure a different default start time (which would solve our issue)?
If no: how can we find the culprit? Is there some kind of monitoring log which can be activated to catch whatever is causing such high load inside the ES containers?
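For reference, this is roughly how we would try to capture a snapshot of what the containers are doing during the spike. It is only a sketch: the container-name filter, output paths, and cron schedule are placeholders, and it assumes curl is available inside the containers and port 9200 is reachable there (security is disabled in our setup).

    #!/usr/bin/env bash
    # Sketch: sample hot threads and running tasks from every ES container
    # around midnight. Schedule via cron, e.g.:
    #   59 23 * * * /opt/scripts/es-midnight-capture.sh
    OUT=/var/log/es-midnight-capture
    mkdir -p "$OUT"

    for i in $(seq 1 30); do   # ~5 minutes of samples, one every 10 seconds
      ts=$(date +%H%M%S)
      # Placeholder filter: adjust to however your containers are named
      for c in $(docker ps --format '{{.Names}}' | grep elasticsearch); do
        # Hot threads: which JVM threads are burning CPU right now
        docker exec "$c" curl -s 'localhost:9200/_nodes/hot_threads?threads=5' \
          > "$OUT/${c}_${ts}_hot_threads.txt"
        # Task list: merges, refreshes, searches etc. currently in flight
        docker exec "$c" curl -s 'localhost:9200/_tasks?detailed=true&pretty' \
          > "$OUT/${c}_${ts}_tasks.json"
      done
      sleep 10
    done

Correlating these snapshots with the monitoring graph should at least show whether the load comes from merges, searches, or something else.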

An example monitoring graph from one of our servers:

This is the exact docker configuration we are using:

  elasticsearch:
    # https://www.docker.elastic.co/r/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.3
    restart: unless-stopped
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
      - http.cors.enabled=true
      - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
      - http.cors.allow-credentials=true
      - cluster.routing.allocation.disk.threshold_enabled=false # Disable watermark check (safeguard which makes indices read-only on low disk space)
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/elastic01:/usr/share/elasticsearch/data

Welcome to our community! :smiley:

Not generically.

This is super small, and it doesn't make much sense to have 40 nodes of this size. What's the reason for it?

If you are doing ILM with force merge etc., it could be running each day at around the same time, and that can consume resources.
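To rule ILM in or out, you can ask each instance whether any policies are defined and whether ILM is running. A rough sketch (assuming the HTTP port is reachable from wherever you run these):

    # List all ILM policies defined on this instance (an empty {} means none)
    curl -s 'localhost:9200/_ilm/policy?pretty'

    # Check whether the ILM service is running at all
    curl -s 'localhost:9200/_ilm/status?pretty'

    # Show, per index, which policy/phase/action it is currently in (if any)
    curl -s 'localhost:9200/*/_ilm/explain?pretty'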

Ahh my good friend Postgres's vacuuming.... :slight_smile:

Thanks for your answers. It was a long shot, but good to know it should not be something built into Elasticsearch by default (I don't even know what ILM is, so I assume we aren't using it).

Regarding the high number of nodes and memory limitations: we are basically running one Docker container (i.e. one Elasticsearch service container) per customer, due to differing security and privacy requirements (and because multi-tenancy is not yet implemented). As this is a customer onboarding/trial environment, we have not had any problems with the memory settings (before, the Elastic containers occupied quite a lot of memory).
