Transport error 429

Hi Team,

We are getting an Elasticsearch exception - transport error 429 - when running an es.search call from Python (with pandas) for a fairly large result set (up to 13-15k records). The error message indicates that the request exceeds the configured threshold/limit. Can you suggest a way to fix this?

Thanks,
Susendiran
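
For context, a 429 from Elasticsearch typically means the node rejected the request because it was overloaded or the request would have needed too much memory. One common way to ease this from the client side is to page through the hits rather than fetching all 13-15k records in one es.search call. A minimal sketch, assuming the 7.x elasticsearch Python client, a local cluster at localhost:9200 and a hypothetical index called my-index (none of these details are from the original post):

from elasticsearch import Elasticsearch, helpers
import pandas as pd

es = Elasticsearch("http://localhost:9200")   # assumed connection details

# Stream the result set in pages of 1000 hits via the scroll helper instead of
# one large es.search request, then build the DataFrame from the collected hits.
hits = helpers.scan(
    es,
    index="my-index",                          # hypothetical index name
    query={"query": {"match_all": {}}},        # replace with the real query
    size=1000,
)
df = pd.DataFrame([hit["_source"] for hit in hits])

The same idea works with search_after pagination if scroll contexts are not an option.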

How many indices and shards are you actively indexing into?

What bulk size are you using?

How many concurrent indexing threads/processes do you have running?

Which version of Elasticsearch are you using?

What is the size and specification of your cluster?

Hi @Christian_Dahlqvist,

We are trying to get data from two active indices. Please find the screenshots of the index sizes below.

Version of Elasticsearch - 7.5.1

Please find the cluster stats below:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "z51G64fKROute-83Vaf3iw",
  "timestamp" : 1678699420661,
  "status" : "yellow",
  "indices" : {
    "count" : 665,
    "shards" : {
      "total" : 665,
      "primaries" : 665,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "primaries" : {
          "min" : 1,
          "max" : 1,
          "avg" : 1.0
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 22276210,
      "deleted" : 8860566
    },
    "store" : {
      "size_in_bytes" : 6115202289
    },
    "fielddata" : {
      "memory_size_in_bytes" : 4099360,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size_in_bytes" : 64667297,
      "total_count" : 34815686,
      "hit_count" : 11109303,
      "miss_count" : 23706383,
      "cache_size" : 20836,
      "cache_count" : 1231388,
      "evictions" : 1210552
    },
    "completion" : {
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 2192,
      "memory_in_bytes" : 27586819,
      "terms_memory_in_bytes" : 19330907,
      "stored_fields_memory_in_bytes" : 4451864,
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory_in_bytes" : 1468800,
      "points_memory_in_bytes" : 1098544,
      "doc_values_memory_in_bytes" : 1236704,
      "index_writer_memory_in_bytes" : 1560046,
      "version_map_memory_in_bytes" : 160,
      "fixed_bit_set_memory_in_bytes" : 1224,
      "max_unsafe_auto_id_timestamp" : -1,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "coordinating_only" : 0,
      "data" : 1,
      "ingest" : 1,
      "master" : 1,
      "ml" : 1,
      "voting_only" : 0
    },
    "versions" : [
      "7.5.1"
    ],
    "os" : {
      "available_processors" : 8,
      "allocated_processors" : 8,
      "names" : [
        {
          "name" : "Linux",
          "count" : 1
        }
      ],
      "pretty_names" : [
        {
          "pretty_name" : "RHEL",
          "count" : 1
        }
      ],
      "mem" : {
        "total_in_bytes" : 16656285696,
        "free_in_bytes" : 1185083392,
        "used_in_bytes" : 15471202304,
        "free_percent" : 7,
        "used_percent" : 93
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 9
      },
      "open_file_descriptors" : {
        "min" : 3369,
        "max" : 3369,
        "avg" : 3369
      }
    },
    "jvm" : {
      "max_uptime_in_millis" : 39383272420,
      "versions" : [
        {
          "version" : "13.0.1",
          "vm_name" : "OpenJDK 64-Bit Server VM",
          "vm_version" : "13.0.1+9",
          "vm_vendor" : "AdoptOpenJDK",
          "bundled_jdk" : true,
          "using_bundled_jdk" : true,
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used_in_bytes" : 785539904,
        "heap_max_in_bytes" : 1037959168
      },
      "threads" : 218
    },
    "fs" : {
      "total_in_bytes" : 56882102272,
      "free_in_bytes" : 15076413440,
      "available_in_bytes" : 15076413440
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 1
      },
      "http_types" : {
        "security4" : 1
      }
    },
    "discovery_types" : {
      "zen" : 1
    },
    "packaging_types" : [
      {
        "flavor" : "default",
        "type" : "tar",
        "count" : 1
      }
    ]
  }
}
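
(For reference, the stats above come from the cluster stats API and can also be pulled from Python; a minimal sketch, assuming the same client and a local cluster:)

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed connection details
stats = es.cluster.stats()                    # same data as GET _cluster/stats
print(stats["nodes"]["jvm"]["mem"]["heap_max_in_bytes"])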

OK. You have a single-node cluster with just 1GB of heap assigned. It seems you are hitting the limit there, so you probably need to assign more RAM and heap (the heap should be around 50% of available RAM).
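
As an illustration of that change, the heap is normally set in config/jvm.options (or via ES_JAVA_OPTS); a minimal sketch with hypothetical values for a 16GB machine, keeping min and max heap equal:

# config/jvm.options (illustrative values, not from the thread)
-Xms8g
-Xmx8g

The node needs a restart for the new heap size to take effect.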

Upgrading the RAM is not possible from a project perspective, @Christian_Dahlqvist. Is it possible to clear anything to free up memory, call the search in some other way, compress the data, etc.?

A 1GB heap is very small for Elasticsearch, so there is not a lot to play with. It does look like you have a lot of very small shards, which is inefficient and can result in increased overhead. Given the size of your data I would expect a single shard to be able to handle it, so 665 shards is clearly excessive. I would recommend dramatically reducing the number of shards and seeing what impact that has.
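
One hedged illustration of reducing the shard count is to reindex the many small single-shard indices into one consolidated index and delete the originals once the copy has been verified; the index names and pattern below are hypothetical, not taken from the thread:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed connection details

# Create a single-shard destination index, then copy documents from the many
# small source indices (matched by a hypothetical pattern) into it.
es.indices.create(index="data-consolidated", body={"settings": {"number_of_shards": 1}})
es.reindex(
    body={
        "source": {"index": "data-2022-*"},    # hypothetical source pattern
        "dest": {"index": "data-consolidated"},
    },
    wait_for_completion=True,
)

On a heap this small it may be gentler to reindex a few source indices at a time and to lower the reindex batch size ("source": {"size": ...}).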

