Error: failed to perform any bulk index operations: 429 Too Many Requests

Hello,

I know there are already a lot of posts (on GitHub, Stack Overflow, and here) about this error and how to fix it, but despite reading all of them I was not able to get rid of it:

Here is a snippet of the error log received from the Elastic Agent (file: /opt/Elastic/Agent/elastic-agent-20230320-2.ndjson):

"message":"failed to perform any bulk index operations: 429 Too Many Requests: {\"error\":{\"root_cause\":[{\"type\":\"es_rejected_execution_exception\",\"reason\":\"rejected execution of coordinating operation [coordinating_and_primary_bytes=0, replica_bytes=0, all_bytes=0, coordinating_operation_bytes=68295401, max_coordinating_and_primary_bytes=53687091]\"}]

First, I think there is a bug with the coordinating_operation_bytes value displayed in the error log:

  1. The coordinating_operation_bytes should change (decrease) over time (every index.refresh_interval, as defined via the index settings API). Here the value is stuck at 68295401.
  2. When I go to the indexing pressure stats at http://localhost:9200/_nodes/h6tFiMKbT2mGpu7UGtdDlQ/stats/indexing_pressure/ I see that coordinating_in_bytes is at 1029616448 bytes (~1 GB), while it should be less than the limit_in_bytes value, which is 53687091 bytes (see the curl sketch after this list).
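For reference, here is a minimal curl sketch of the two checks above, assuming the same local cluster and the elastic:changeme credentials I use later in this thread:

# Per-index settings (including index.refresh_interval)
curl -u elastic:changeme "localhost:9200/.ds-logs-system.syslog-default-2023.03.18-000001/_settings?pretty"

# Indexing pressure stats for the node mentioned in point 2
curl -u elastic:changeme "localhost:9200/_nodes/h6tFiMKbT2mGpu7UGtdDlQ/stats/indexing_pressure?pretty"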

This article explains the problem well.

Does somebody know how I can clear the memory load reported in coordinating_in_bytes and reduce it to an acceptable value?

Do you think the Bulk API can help me?

I've tried this:

curl -u elastic:changeme -X POST "localhost:9200/_bulk?pretty" -H 'Content-Type: application/json' -d'
{ "delete" : { "_index" : ".ds-logs-system.syslog-default-2023.03.18-000001", "_id": "SAqYrfVtSzmMlkDZ0qtXXQ" } }
'

I managed to delete specific documents by _id one at a time, but I don't know how to delete them all. And do you think that would resolve my issue above?
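For example, I guess something like the delete-by-query API with a match_all query could delete them all in one request (just a sketch, reusing the same index and credentials as above), though I'm not sure it would actually release the indexing-pressure memory:

curl -u elastic:changeme -X POST "localhost:9200/.ds-logs-system.syslog-default-2023.03.18-000001/_delete_by_query?pretty" -H 'Content-Type: application/json' -d'
{
  "query": { "match_all": {} }
}
'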

Here is a view of /_cat/indices (the system indices hold 169 MB of documents):
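(For reference, a text version of that listing could be produced with something like the following, assuming the same credentials:)

curl -u elastic:changeme "localhost:9200/_cat/indices?v&s=store.size:desc"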

Please don't post pictures of text, logs or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them :slight_smile:

Basically a 429 means your Elasticsearch cluster is overloaded. What is the output from the _cluster/stats?pretty&human API?
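For example, something along these lines against your local node (assuming the same credentials you used above):

curl -u elastic:changeme "localhost:9200/_cluster/stats?pretty&human"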

Hello,

Thanks for your response,

Sorry about posting a picture, I won't do that anymore.

So the output I got from this request is the following:

{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "u6F_VMWXQVujhpDPFdx3Zw",
  "timestamp" : 1679389138708,
  "status" : "yellow",
  "indices" : {
    "count" : 44,
    "shards" : {
      "total" : 44,
      "primaries" : 44,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "primaries" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 97096,
      "deleted" : 2417
    },
    "store" : {
      "size" : "82.7mb",
      "size_in_bytes" : 86802709,
      "total_data_set_size" : "82.7mb",
      "total_data_set_size_in_bytes" : 86802709,
      "reserved" : "0b",
      "reserved_in_bytes" : 0
    },
    "fielddata" : {
      "memory_size" : "432b",
      "memory_size_in_bytes" : 432,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "5.8kb",
      "memory_size_in_bytes" : 5960,
      "total_count" : 148,
      "hit_count" : 5,
      "miss_count" : 143,
      "cache_size" : 5,
      "cache_count" : 5,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 190,
      "memory" : "0b",
      "memory_in_bytes" : 0,
      "terms_memory" : "0b",
      "terms_memory_in_bytes" : 0,
      "stored_fields_memory" : "0b",
      "stored_fields_memory_in_bytes" : 0,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "0b",
      "norms_memory_in_bytes" : 0,
      "points_memory" : "0b",
      "points_memory_in_bytes" : 0,
      "doc_values_memory" : "0b",
      "doc_values_memory_in_bytes" : 0,
      "index_writer_memory" : "0b",
      "index_writer_memory_in_bytes" : 0,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "3.1kb",
      "fixed_bit_set_memory_in_bytes" : 3184,
      "max_unsafe_auto_id_timestamp" : 1679387337857,
      "file_sizes" : { }
    },
    "mappings" : {
      "total_field_count" : 11237,
      "total_deduplicated_field_count" : 11237,
      "total_deduplicated_mapping_size" : "83.9kb",
      "total_deduplicated_mapping_size_in_bytes" : 85944,
      "field_types" : [
        {
          "name" : "alias",
          "count" : 14,
          "index_count" : 2,
          "script_count" : 0
        },
        {
          "name" : "boolean",
          "count" : 215,
          "index_count" : 25,
          "script_count" : 0
        },
        {
          "name" : "constant_keyword",
          "count" : 103,
          "index_count" : 24,
          "script_count" : 0
        },
        {
          "name" : "date",
          "count" : 285,
          "index_count" : 27,
          "script_count" : 0
        },
        {
          "name" : "double",
          "count" : 49,
          "index_count" : 2,
          "script_count" : 0
        },
        {
          "name" : "flattened",
          "count" : 47,
          "index_count" : 1,
          "script_count" : 0
        },
        {
          "name" : "float",
          "count" : 76,
          "index_count" : 5,
          "script_count" : 0
        },
        {
          "name" : "geo_point",
          "count" : 14,
          "index_count" : 5,
          "script_count" : 0
        },
        {
          "name" : "integer",
          "count" : 3,
          "index_count" : 1,
          "script_count" : 0
        },
        {
          "name" : "ip",
          "count" : 200,
          "index_count" : 24,
          "script_count" : 0
        },
        {
          "name" : "keyword",
          "count" : 5874,
          "index_count" : 27,
          "script_count" : 0
        },
        {
          "name" : "long",
          "count" : 2111,
          "index_count" : 22,
          "script_count" : 0
        },
        {
          "name" : "match_only_text",
          "count" : 97,
          "index_count" : 13,
          "script_count" : 0
        },
        {
          "name" : "nested",
          "count" : 18,
          "index_count" : 2,
          "script_count" : 0
        },
        {
          "name" : "object",
          "count" : 1758,
          "index_count" : 26,
          "script_count" : 0
        },
        {
          "name" : "scaled_float",
          "count" : 45,
          "index_count" : 6,
          "script_count" : 0
        },
        {
          "name" : "short",
          "count" : 206,
          "index_count" : 2,
          "script_count" : 0
        },
        {
          "name" : "text",
          "count" : 98,
          "index_count" : 27,
          "script_count" : 0
        },
        {
          "name" : "version",
          "count" : 1,
          "index_count" : 1,
          "script_count" : 0
        },
        {
          "name" : "wildcard",
          "count" : 23,
          "index_count" : 4,
          "script_count" : 0
        }
      ],
      "runtime_field_types" : [ ]
    },
    "analysis" : {
      "char_filter_types" : [ ],
      "tokenizer_types" : [ ],
      "filter_types" : [ ],
      "analyzer_types" : [ ],
      "built_in_char_filters" : [ ],
      "built_in_tokenizers" : [ ],
      "built_in_filters" : [ ],
      "built_in_analyzers" : [ ]
    },
    "versions" : [
      {
        "version" : "8.6.2",
        "index_count" : 44,
        "primary_shard_count" : 44,
        "total_primary_size" : "82.7mb",
        "total_primary_bytes" : 86802709
      }
    ],
    "search" : {
      "total" : 1030,
      "queries" : {
        "match_phrase" : 40,
        "bool" : 996,
        "terms" : 358,
        "prefix" : 1,
        "match" : 89,
        "match_all" : 1,
        "range" : 427,
        "exists" : 525,
        "term" : 641,
        "nested" : 1,
        "simple_query_string" : 89
      },
      "sections" : {
        "search_after" : 1,
        "runtime_mappings" : 1,
        "query" : 1016,
        "terminate_after" : 1,
        "_source" : 46,
        "pit" : 61,
        "collapse" : 7,
        "aggs" : 130
      }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "coordinating_only" : 0,
      "data" : 1,
      "data_cold" : 1,
      "data_content" : 1,
      "data_frozen" : 1,
      "data_hot" : 1,
      "data_warm" : 1,
      "index" : 0,
      "ingest" : 1,
      "master" : 1,
      "ml" : 1,
      "remote_cluster_client" : 1,
      "search" : 0,
      "transform" : 1,
      "voting_only" : 0
    },
    "versions" : [
      "8.6.2"
    ],
    "os" : {
      "available_processors" : 16,
      "allocated_processors" : 16,
      "names" : [
        {
          "name" : "Linux",
          "count" : 1
        }
      ],
      "pretty_names" : [
        {
          "pretty_name" : "Ubuntu 20.04.5 LTS",
          "count" : 1
        }
      ],
      "architectures" : [
        {
          "arch" : "amd64",
          "count" : 1
        }
      ],
      "mem" : {
        "total" : "13.3gb",
        "total_in_bytes" : 14328877056,
        "adjusted_total" : "13.3gb",
        "adjusted_total_in_bytes" : 14328877056,
        "free" : "2.5gb",
        "free_in_bytes" : 2780233728,
        "used" : "10.7gb",
        "used_in_bytes" : 11548643328,
        "free_percent" : 19,
        "used_percent" : 81
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 1
      },
      "open_file_descriptors" : {
        "min" : 730,
        "max" : 730,
        "avg" : 730
      }
    },
    "jvm" : {
      "max_uptime" : "30.3m",
      "max_uptime_in_millis" : 1818692,
      "versions" : [
        {
          "version" : "19.0.2",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "19.0.2+7-44",
          "vm_vendor" : "Oracle Corporation",
          "bundled_jdk" : true,
          "using_bundled_jdk" : true,
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "312.9mb",
        "heap_used_in_bytes" : 328202752,
        "heap_max" : "512mb",
        "heap_max_in_bytes" : 536870912
      },
      "threads" : 144
    },
    "fs" : {
      "total" : "109.6gb",
      "total_in_bytes" : 117726900224,
      "free" : "44.8gb",
      "free_in_bytes" : 48107016192,
      "available" : "39.1gb",
      "available_in_bytes" : 42079539200
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 1
      },
      "http_types" : {
        "security4" : 1
      }
    },
    "discovery_types" : {
      "single-node" : 1
    },
    "packaging_types" : [
      {
        "flavor" : "default",
        "type" : "docker",
        "count" : 1
      }
    ],
    "ingest" : {
      "number_of_pipelines" : 320,
      "processor_stats" : {
        "append" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "community_id" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "conditional" : {
          "count" : 244082,
          "failed" : 0,
          "current" : 0,
          "time" : "1.3s",
          "time_in_millis" : 1342
        },
        "convert" : {
          "count" : 162156,
          "failed" : 0,
          "current" : 0,
          "time" : "136ms",
          "time_in_millis" : 136
        },
        "csv" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "date" : {
          "count" : 54052,
          "failed" : 0,
          "current" : 0,
          "time" : "349ms",
          "time_in_millis" : 349
        },
        "dissect" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "dot_expander" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "fingerprint" : {
          "count" : 27026,
          "failed" : 0,
          "current" : 0,
          "time" : "169ms",
          "time_in_millis" : 169
        },
        "foreach" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "geoip" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "grok" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "gsub" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "join" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "json" : {
          "count" : 27026,
          "failed" : 0,
          "current" : 0,
          "time" : "6.4s",
          "time_in_millis" : 6412
        },
        "kv" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "lowercase" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "network_direction" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "pipeline" : {
          "count" : 27026,
          "failed" : 0,
          "current" : 0,
          "time" : "21ms",
          "time_in_millis" : 21
        },
        "remove" : {
          "count" : 135130,
          "failed" : 0,
          "current" : 0,
          "time" : "233ms",
          "time_in_millis" : 233
        },
        "rename" : {
          "count" : 297286,
          "failed" : 0,
          "current" : 0,
          "time" : "324ms",
          "time_in_millis" : 324
        },
        "script" : {
          "count" : 27026,
          "failed" : 0,
          "current" : 0,
          "time" : "41ms",
          "time_in_millis" : 41
        },
        "set" : {
          "count" : 135130,
          "failed" : 0,
          "current" : 0,
          "time" : "61ms",
          "time_in_millis" : 61
        },
        "set_security_user" : {
          "count" : 27026,
          "failed" : 0,
          "current" : 0,
          "time" : "38ms",
          "time_in_millis" : 38
        },
        "split" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "trim" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "uppercase" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "uri_parts" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "urldecode" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        },
        "user_agent" : {
          "count" : 0,
          "failed" : 0,
          "current" : 0,
          "time" : "0s",
          "time_in_millis" : 0
        }
      }
    },
    "indexing_pressure" : {
      "memory" : {
        "current" : {
          "combined_coordinating_and_primary" : "0b",
          "combined_coordinating_and_primary_in_bytes" : 0,
          "coordinating" : "0b",
          "coordinating_in_bytes" : 0,
          "primary" : "0b",
          "primary_in_bytes" : 0,
          "replica" : "0b",
          "replica_in_bytes" : 0,
          "all" : "0b",
          "all_in_bytes" : 0
        },
        "total" : {
          "combined_coordinating_and_primary" : "0b",
          "combined_coordinating_and_primary_in_bytes" : 0,
          "coordinating" : "0b",
          "coordinating_in_bytes" : 0,
          "primary" : "0b",
          "primary_in_bytes" : 0,
          "replica" : "0b",
          "replica_in_bytes" : 0,
          "all" : "0b",
          "all_in_bytes" : 0,
          "coordinating_rejections" : 0,
          "primary_rejections" : 0,
          "replica_rejections" : 0
        },
        "limit" : "0b",
        "limit_in_bytes" : 0
      }
    }
  }
}

That would be why. You have a host with 13 GB of memory but only half a gigabyte of heap for Elasticsearch; you will want at least 2 GB for Elasticsearch to be effective.
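You can confirm the current heap limit with something like this (a sketch, same credentials as above):

curl -u elastic:changeme "localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent,ram.max"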

OK, thanks for your response, I will look into how to fix that.

I tried erasing all the Docker components (including volumes/networks/containers) and reinstalling them, thinking I would get a fresh start, but I was surprised to see the same error, and all the configuration settings/indices/document counts were set exactly as before...
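(As an aside, a sketch of the kind of full teardown I had in mind, where docker compose down --volumes also removes the named volumes declared in the compose file:

docker compose down --volumes
docker compose up --build -d)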

If you are using Docker we would need to see the config you are using.

The Docker config, sorry.

OK, I've searched for how to increase the heap size and came across this article from Elastic:


Then I found how Docker starts Elasticsearch by running the following command:
ps -aux | grep "elasticsearch"

nicop       4455 14.7 10.4 10854152 1459408 ?    Sl   09:28   8:05 /usr/share/elasticsearch/jdk/bin/java -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -Djava.security.manager=allow -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j2.formatMsgNoLookups=true -Djava.locale.providers=SPI,COMPAT --add-opens=java.base/java.io=ALL-UNNAMED -Des.cgroups.hierarchy.override=/ -XX:+UseG1GC -Djava.io.tmpdir=/tmp/elasticsearch-14895028605295200778 -XX:+HeapDumpOnOutOfMemoryError -XX:+ExitOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Xms512m -Xmx512m -XX:MaxDirectMemorySize=268435456 -XX:G1HeapRegionSize=4m -XX:InitiatingHeapOccupancyPercent=30 -XX:G1ReservePercent=15 -Des.distribution.type=docker --module-path /usr/share/elasticsearch/lib --add-modules=jdk.net -m org.elasticsearch.server/org.elasticsearch.bootstrap.Elasticsearch

Please share your Docker config, whether that is a Dockerfile or you are starting the container with a file and a bunch of config options being passed in.

OK, so to launch the different Docker containers I run a docker compose up command with a docker-compose.yml that looks like this:

version: '3.7'

services:

  # The 'setup' service runs a one-off script which initializes users inside
  # Elasticsearch — such as 'logstash_internal' and 'kibana_system' — with the
  # values of the passwords defined in the '.env' file.
  #
  # This task is only performed during the *initial* startup of the stack. On all
  # subsequent runs, the service simply returns immediately, without performing
  # any modification to existing users.
  setup:
    build:
      context: setup/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    init: true
    volumes:
      - ./setup/entrypoint.sh:/entrypoint.sh:ro,Z
      - ./setup/lib.sh:/lib.sh:ro,Z
      - ./setup/roles:/roles:ro,Z
      - setup:/state:Z
    environment:
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
      METRICBEAT_INTERNAL_PASSWORD: ${METRICBEAT_INTERNAL_PASSWORD:-}
      FILEBEAT_INTERNAL_PASSWORD: ${FILEBEAT_INTERNAL_PASSWORD:-}
      HEARTBEAT_INTERNAL_PASSWORD: ${HEARTBEAT_INTERNAL_PASSWORD:-}
      MONITORING_INTERNAL_PASSWORD: ${MONITORING_INTERNAL_PASSWORD:-}
      BEATS_SYSTEM_PASSWORD: ${BEATS_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,Z
      - elasticsearch:/usr/share/elasticsearch/data:Z
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      # Bootstrap password.
      # Used to initialize the keystore during the initial startup of
      # Elasticsearch. Ignored on subsequent runs.
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk
    restart: unless-stopped

  logstash:
    build:
      context: logstash/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
    ports:
      - 5044:5044
      - 50000:50000/tcp
      - 50000:50000/udp
      - 9600:9600
    environment:
      LS_JAVA_OPTS: -Xms256m -Xmx256m
      LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

  kibana:
    build:
      context: kibana/
      args:
        ELASTIC_VERSION: ${ELASTIC_VERSION}
    volumes:
      - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
    ports:
      - 5601:5601
    environment:
      KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
    networks:
      - elk
    depends_on:
      - elasticsearch
    restart: unless-stopped

networks:
  elk:
    driver: bridge

volumes:
  setup:
  elasticsearch:

I think a specific Docker container, docker-elk-setup, is used to set up the ELK stack.
Its Dockerfile is the following:

ARG ELASTIC_VERSION

# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}

USER root

RUN set -eux; \
	mkdir /state; \
	chmod 0775 /state; \
	chown elasticsearch:root /state

USER elasticsearch:root

ENTRYPOINT ["/entrypoint.sh"]

And the elasticsearch Docker container has a config file (in docker-elk/elasticsearch/config/elasticsearch.yml) which looks like this:

---
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/main/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: docker-cluster
network.host: 0.0.0.0

## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true

xpack.security.authc.api_key.enabled: true

And a Dockerfile which looks like this:

ARG ELASTIC_VERSION

# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}

# Add your elasticsearch plugins setup here
# Example: RUN elasticsearch-plugin install analysis-icu

OK, maybe I see which setting to change in order to increase the heap size.
In the Docker Compose file that I use to launch the containers, there is this setting:

environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms512m -Xmx512m

Maybe I need to change the -Xms512m -Xmx512m values to -Xms2g -Xmx2g, as suggested in this Elastic article: Advanced configuration | Elasticsearch Guide [8.6] | Elastic.
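Concretely, I think the change would look something like this in docker-compose.yml (a sketch based on the snippet above), followed by recreating the container:

environment:
      node.name: elasticsearch
      ES_JAVA_OPTS: -Xms2g -Xmx2g

docker compose up -d --force-recreate elasticsearch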
----- Post Update -----
Yes, indeed, it increased the heap size. See the settings extract from the http://localhost:9200/_cluster/stats?pretty&human request:

 "jvm" : {
      "max_uptime" : "23.1m",
      "max_uptime_in_millis" : 1387456,
      "versions" : [
        {
          "version" : "19.0.2",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "19.0.2+7-44",
          "vm_vendor" : "Oracle Corporation",
          "bundled_jdk" : true,
          "using_bundled_jdk" : true,
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "910.3mb",
        "heap_used_in_bytes" : 954597504,
        "heap_max" : "2gb",
        "heap_max_in_bytes" : 2147483648
      },
      "threads" : 137
    },

Thank you very much @warkolm !! :fire:
