The problem with the recurring exception:
temporarily failed to flush the buffer. next_retry=2015-12-12 23:56:36 +0000 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="output_to_elasticsearch"
can also be triggered by the plugin setting "reload_connections" (true by default) and occurs only with Amazon Elasticsearch Service.
By default, the Elasticsearch Ruby client reloads its connections after 10,000 requests. To discover all available nodes, the client calls the _nodes endpoint of the Elasticsearch REST API.
Unfortunately, Amazon Elasticsearch Service doesn't implement this endpoint the same way as stock Elasticsearch: Amazon omits all address information from the response. After a reload, the connection pool for Amazon ES is therefore empty, and it is impossible to ever repopulate it.
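A practical workaround is to turn connection reloading off in the fluent-plugin-elasticsearch output. A minimal sketch of such a configuration (the endpoint hostname below is a placeholder, not from the original report):

```
<match **>
  @type elasticsearch
  # Amazon ES endpoint (placeholder)
  host search-mydomain.eu-west-1.es.amazonaws.com
  port 443
  scheme https
  # skip the _nodes sniffing that empties the pool on Amazon ES
  reload_connections false
</match>
```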
Example Elasticsearch output (contains address data):
```json
{
  "cluster_name": "elasticsearch",
  "nodes": {
    "daaweaw-DASDAD123131": {
      "name": "Destiny",
      "transport_address": "inet[/172.17.0.3:9300]",
      "host": "249885187c52",
      "ip": "172.17.0.3",
      "version": "1.7.4",
      "build": "0d3159b",
      "http_address": "inet[/172.17.0.3:9200]",
      "http": {
        "bound_address": "inet[/0:0:0:0:0:0:0:0:9200]",
        "publish_address": "inet[/172.17.0.3:9200]",
        "max_content_length_in_bytes": 104857600
      }
    }
  }
}
```
Amazon Elasticsearch Service output (no address data):
```json
{
  "cluster_name": "123123123123:schema",
  "nodes": {
    "ABC": {
      "build": "62ff986",
      "version": "1.5.2",
      "name": "Marduks"
    },
    "BCD": {
      "build": "62ff986",
      "version": "1.5.2",
      "name": "Fern"
    },
    "EDF": {
      "build": "62ff986",
      "version": "1.5.2",
      "name": "Emilly"
    }
  }
}
```
The code in the Elasticsearch Ruby client that causes this issue:
```ruby
def reload_connections!
  hosts = sniffer.hosts
  __rebuild_connections :hosts => hosts, :options => options
  self
rescue SnifferTimeoutError
  logger.error "[SnifferTimeoutError] Timeout when reloading connections." if logger
  self
end
```
`sniffer.hosts` tries to obtain node data from the `_nodes` endpoint.
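To see why the Amazon response leaves the pool empty, here is a minimal sketch (not the actual client source) of how a sniffer derives hosts from the `_nodes` response; the regex mimics the 1.x client's parsing of addresses like `inet[/172.17.0.3:9200]`:

```ruby
require 'json'

# Extract { host:, port: } pairs from a _nodes API response body.
# Nodes without any address field (as returned by Amazon ES) are skipped,
# so the resulting host list -- and thus the rebuilt pool -- is empty.
def hosts_from_nodes_info(json)
  nodes = JSON.parse(json).fetch('nodes', {})
  nodes.values.map do |info|
    address = info['http_address'] || info.dig('http', 'publish_address')
    next unless address
    # Pull "172.17.0.3" and "9200" out of "inet[/172.17.0.3:9200]"
    if address =~ %r{/?([^:/\[\]]+):(\d+)}
      { host: $1, port: $2.to_i }
    end
  end.compact
end

# Genuine Elasticsearch node: http_address present -> one usable host.
es_response  = '{"nodes":{"abc":{"http_address":"inet[/172.17.0.3:9200]"}}}'
# Amazon ES node: only build/version/name -> no hosts at all.
aws_response = '{"nodes":{"ABC":{"name":"Marduks","version":"1.5.2"}}}'
```

Calling `hosts_from_nodes_info(aws_response)` returns an empty array, which is exactly the empty connection pool behind the "Cannot get new connection from pool." error.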