Visualize: rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable


(Sean) #1

I'm running a single-node deployment of Elasticsearch and Kibana. elasticsearch.yml is basically the default, with the exception of these two lines:

xpack.security.enabled: false
xpack.monitoring.collection.enabled: true

Elasticsearch only holds about 50 GB of data. When loading a dashboard that is time-filtered down to roughly 5-10 GB of data, it has been failing a lot recently. I have tried increasing the queue size, but I still get the following error:

Visualize: rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@230cbd7b on QueueResizingEsThreadPoolExecutor[name = Tkb3qYF/search, queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 70ms, adjustment amount = 50, org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@72046c3f[Running, pool size = 7, active threads = 7, queued tasks = 1056, completed tasks = 10452]]
rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@743b1765 on QueueResizingEsThreadPoolExecutor[name = Tkb3qYF/search, queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 70ms, adjustment amount = 50, org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@72046c3f[Running, pool size = 7, active threads = 7, queued tasks = 1057, completed tasks = 10452]]
rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@5ef9168e on QueueResizingEsThreadPoolExecutor[name = Tkb3qYF/search, queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 70ms, adjustment amount = 50, org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@72046c3f[Running, pool size = 7, active threads = 7, queued tasks = 1058, completed tasks = 10452]]
rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@6788f0f1 on QueueResizingEsThreadPoolExecutor[name = Tkb3qYF/search, queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 70ms, adjustment amount = 50, org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@72046c3f[Running, pool size = 7, active threads = 7, queued tasks = 1059, completed tasks = 10452]]
rejected execution of org.elasticsearch.common.util.concurrent.TimedRunnable@21a97e1e on QueueResizingEsThreadPoolExecutor[name = Tkb3qYF/search, queue capacity = 1000, min queue capacity = 1000, max queue capacity = 1000, frame size = 2000, targeted response rate = 1s, task execution EWMA = 70ms, adjustment amount = 50, org.elasticsearch.common.util.concurrent.QueueResizingEsThreadPoolExecutor@72046c3f[Running, pool size = 7, active threads = 7, queued tasks = 1060, completed tasks = 10452]]
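For reference, the queue-size increase mentioned above is a static node setting in elasticsearch.yml. In 6.x the search pool uses an auto-resizing queue (visible as `min queue capacity` / `max queue capacity` in the error), so both bounds have to be raised for the cap to move. The values below are illustrative only, not a recommendation; a later reply in this thread explains why raising the queues rarely helps:

```yaml
# elasticsearch.yml -- illustrative values, not a recommendation
thread_pool.search.queue_size: 2000       # initial queue length
thread_pool.search.min_queue_size: 2000   # lower bound for the auto-resizing queue
thread_pool.search.max_queue_size: 2000   # upper bound for the auto-resizing queue
```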


(Sean) #2

In addition, I am getting shard failures at the same time. Here are my cluster stats:

{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "elasticsearch",
  "timestamp": 1540838856206,
  "status": "green",
  "indices": {
    "count": 458,
    "shards": {
      "total": 546,
      "primaries": 546,
      "replication": 0,
      "index": {
        "shards": {
          "min": 1,
          "max": 5,
          "avg": 1.1921397379912664
        },
        "primaries": {
          "min": 1,
          "max": 5,
          "avg": 1.1921397379912664
        },
        "replication": {
          "min": 0,
          "max": 0,
          "avg": 0
        }
      }
    },
    "docs": {
      "count": 184981345,
      "deleted": 41677
    },
    "store": {
      "size_in_bytes": 49348746297
    },
    "fielddata": {
      "memory_size_in_bytes": 45640,
      "evictions": 0
    },
    "query_cache": {
      "memory_size_in_bytes": 2319186,
      "total_count": 293892,
      "hit_count": 2398,
      "miss_count": 291494,
      "cache_size": 255,
      "cache_count": 259,
      "evictions": 4
    },
    "completion": {
      "size_in_bytes": 0
    },
    "segments": {
      "count": 4152,
      "memory_in_bytes": 152090498,
      "terms_memory_in_bytes": 104071725,
      "stored_fields_memory_in_bytes": 28113680,
      "term_vectors_memory_in_bytes": 0,
      "norms_memory_in_bytes": 5968128,
      "points_memory_in_bytes": 9595669,
      "doc_values_memory_in_bytes": 4341296,
      "index_writer_memory_in_bytes": 68310418,
      "version_map_memory_in_bytes": 164591,
      "fixed_bit_set_memory_in_bytes": 377832,
      "max_unsafe_auto_id_timestamp": 1540837861868,
      "file_sizes": {}
    }
  },
  "nodes": {
    "count": {
      "total": 1,
      "data": 1,
      "coordinating_only": 0,
      "master": 1,
      "ingest": 1
    },
    "versions": [
      "6.3.0"
    ],
    "os": {
      "available_processors": 4,
      "allocated_processors": 4,
      "names": [
        {
          "name": "Windows Server 2012 R2",
          "count": 1
        }
      ],
      "mem": {
        "total_in_bytes": 17179398144,
        "free_in_bytes": 3180969984,
        "used_in_bytes": 13998428160,
        "free_percent": 19,
        "used_percent": 81
      }
    },
    "process": {
      "cpu": {
        "percent": 20
      },
      "open_file_descriptors": {
        "min": -1,
        "max": -1,
        "avg": 0
      }
    },
    "jvm": {
      "max_uptime_in_millis": 1053510,
      "versions": [
        {
          "version": "1.8.0_162",
          "vm_name": "Java HotSpot(TM) 64-Bit Server VM",
          "vm_version": "25.162-b12",
          "vm_vendor": "Oracle Corporation",
          "count": 1
        }
      ],
      "mem": {
        "heap_used_in_bytes": 1960242752,
        "heap_max_in_bytes": 4260102144
      },
      "threads": 74
    },
    "fs": {
      "total_in_bytes": 214745214976,
      "free_in_bytes": 24940769280,
      "available_in_bytes": 24940769280
    },
    "plugins": [],
    "network_types": {
      "transport_types": {
        "netty4": 1
      },
      "http_types": {
        "netty4": 1
      }
    }
  }
}

Any help is appreciated.
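As a quick sanity check on the stats above, the store size and shard count alone tell the story; the numbers below are copied straight from the cluster stats in this post:

```shell
# Average shard size from the cluster stats above:
#   store.size_in_bytes = 49348746297 (~49 GB)
#   shards.total        = 546
total_bytes=49348746297
shard_count=546

# Integer MiB per shard -- far below the multi-GB shard sizes
# usually recommended for time-based indices
echo "$(( total_bytes / shard_count / 1024 / 1024 )) MiB per shard on average"
```

This works out to roughly 86 MiB per shard on average, which is why the reply below focuses on shard count rather than queue sizes.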


(Christian Dahlqvist) #3

That is a lot of shards given the data volume and node size. Having lots of small indices and shards can be very inefficient and lead to both performance problems and instability, as described in this blog post on shards and sharding practices. Dramatically increasing the queues will not solve the problem, and may well make it worse. I would recommend reducing the number of shards dramatically, which should give you fewer problems as well as better performance.
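A sketch of how that reduction can be approached, assuming a local node on the default port; `my-index` and `my-index-shrunk` are placeholder names. The shrink API requires the source index to be read-only first (on a single-node cluster the "all shards on one node" prerequisite is already met):

```shell
# List indices with primary shard counts and sizes, largest first
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,store.size&s=store.size:desc'

# Make the source index read-only, as shrink requires
curl -s -X PUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' -d '
{ "settings": { "index.blocks.write": true } }'

# Shrink a multi-shard index down to a single primary shard
# (the target shard count must be a factor of the source count, e.g. 5 -> 1)
curl -s -X POST 'localhost:9200/my-index/_shrink/my-index-shrunk' \
  -H 'Content-Type: application/json' -d '
{ "settings": { "index.number_of_shards": 1, "index.number_of_replicas": 0 } }'
```

Note that with 458 indices averaging ~1.2 shards each, the bigger win here is likely fewer indices rather than fewer shards per index, e.g. switching time-based indices from daily to monthly, or reindexing many small indices into one.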


(Sean) #4

Thank you. I will give that a try.


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.