Elasticsearch::Transport Cannot get new connection from pool

Hello everyone
I recently deployed td-agent on AWS EC2 to collect logs and send them to Elasticsearch. Roughly every 6 hours, log collection stops working and the following errors are reported:
#################################################
2015-11-11 18:06:37 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:06:38 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:179:in `perform_request'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/etc/plugin/out_elasticsearch.rb:168:in `send'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/etc/plugin/out_elasticsearch.rb:161:in `write'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:325:in `write_chunk'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:304:in `pop'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/output.rb:321:in `try_flush'
2015-11-11 18:06:37 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/output.rb:140:in `run'
2015-11-11 18:06:38 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:06:40 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
2015-11-11 18:06:38 +0800 [warn]: suppressed same stacktrace
2015-11-11 18:06:40 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:06:44 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
2015-11-11 18:06:40 +0800 [warn]: suppressed same stacktrace
2015-11-11 18:06:44 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:06:52 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
2015-11-11 18:06:44 +0800 [warn]: suppressed same stacktrace
2015-11-11 18:06:52 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:07:06 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
2015-11-11 18:06:52 +0800 [warn]: suppressed same stacktrace
2015-11-11 18:07:06 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-11 18:07:37 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:124678c"
#################################################

My configuration is as follows:
#################################################
type elasticsearch
host 10.0.0.52
port 9200
path /
logstash_format true
utc_index false
flush_interval 3s
include_tag_key true
tag_key tag
buffer_type memory
buffer_chunk_limit 256m
buffer_queue_limit 128
disable_retry_limit false
retry_limit 17
retry_wait 1s
num_threads 5
#################################################

Can anyone help me figure out the cause?

What do the logs for one of the nodes show?

This is the error that appears in the daily logs; there is no other special error logged when the fault occurs.

[2015-11-12 03:48:45,399][DEBUG][action.bulk ] [10.64.1.218] [ngx_log_i18n_order-2015.11.12][1] failed to execute bulk item (index) index {[ngx_log_i18n_order-2015.11.12][fluentd][AVD4F97T9BcHxEWJ1zx5], source[{"client_ip":"2a02:120b:2c28:9b00:b901:9bc1:23c0:e5b1","domain":"m.buy.xx.com","method":"GET","url":"/in/misc/getstarstock/?hd_id=miband&_=1447271320698&jsonpcallback=jsonp1","http_code":"200","http_length":"972","referer":"http://mobile.xx.com/in/miband/","ua":"Mozilla/5.0 (Linux; Android 5.0.2; LG-D802 Build/LRX22G) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.76 Mobile Safari/537.36","proxy_ip":"10.64.10.244","request_time":"0.036","response_time":"0.036","custom_status":"0","userid":"","logid":"682324803920","ua_name":"Chrome","ua_category":"smartphone","ua_os":"Android","ua_version":"46.0.2490.76","ua_vendor":"Google","hostname":"sgpaws-b2corder-web12.mias","hostip":"10.64.11.167","tag":"ngx_log_kafka","@timestamp":"2015-11-12T03:48:44+08:00"}]}
MapperParsingException[failed to parse [client_ip]]; nested: IllegalArgumentException[failed to parse ip [2a02:120b:2c28:9b00:b901:9bc1:23c0:e5b1], not a valid ipv4 address (4 dots)];
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:339)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:314)
at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:441)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:267)
at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:128)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:318)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:503)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:494)
at org.elasticsearch.action.support.replication.TransportReplicationAction.prepareIndexOperationOnPrimary(TransportReplicationAction.java:1052)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1060)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:338)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: failed to parse ip [2a02:120b:2c28:9b00:b901:9bc1:23c0:e5b1], not a valid ipv4 address (4 dots)
at org.elasticsearch.index.mapper.ip.IpFieldMapper.ipToLong(IpFieldMapper.java:82)
at org.elasticsearch.index.mapper.ip.IpFieldMapper.innerParseCreateField(IpFieldMapper.java:271)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:213)
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:331)
... 18 more

[2015-11-12 11:14:25,544][DEBUG][action.bulk ] [10.64.1.218] [ngx_log_i18n_order-2015.11.12][1] failed to execute bulk item (index) index {[ngx_log_i18n_order-2015.11.12][fluentd][AVD5r-SY9BcHxEWJ3MOs], source[{"client_ip":"unknown","domain":"buy.xx.com","method":"GET","url":"/hk/event/selectPacket/goodsId/4154500008","http_code":"200","http_length":"44370","referer":"-","ua":"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36","proxy_ip":"10.64.10.230","request_time":"0.008","response_time":"0.008","custom_status":"0","userid":"165830530","logid":"953064857145","ua_name":"Chrome","ua_category":"pc","ua_os":"Windows XP","ua_version":"46.0.2490.80","ua_vendor":"Google","hostname":"sgpaws-b2c-order-web04.mias","hostip":"10.64.11.163","tag":"ngx_log_kafka","@timestamp":"2015-11-12T11:14:24+08:00"}]}
MapperParsingException[failed to parse [client_ip]]; nested: IllegalArgumentException[failed to parse ip [unknown], not a valid ip address];
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:339)
at org.elasticsearch.index.mapper.DocumentParser.parseObjectOrField(DocumentParser.java:314)
at org.elasticsearch.index.mapper.DocumentParser.parseValue(DocumentParser.java:441)
at org.elasticsearch.index.mapper.DocumentParser.parseObject(DocumentParser.java:267)
at org.elasticsearch.index.mapper.DocumentParser.innerParseDocument(DocumentParser.java:128)
at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:79)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:318)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:503)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:494)
at org.elasticsearch.action.support.replication.TransportReplicationAction.prepareIndexOperationOnPrimary(TransportReplicationAction.java:1052)
at org.elasticsearch.action.support.replication.TransportReplicationAction.executeIndexRequestOnPrimary(TransportReplicationAction.java:1060)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:338)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:131)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.performOnPrimary(TransportReplicationAction.java:579)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase$1.doRun(TransportReplicationAction.java:452)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: failed to parse ip [unknown], not a valid ip address
at org.elasticsearch.index.mapper.ip.IpFieldMapper.ipToLong(IpFieldMapper.java:78)
at org.elasticsearch.index.mapper.ip.IpFieldMapper.innerParseCreateField(IpFieldMapper.java:271)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:213)
at org.elasticsearch.index.mapper.FieldMapper.parse(FieldMapper.java:331)
... 18 more

The ip field type does not support IPv6.
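Since the ip type only accepts IPv4 here, one workaround is to sanitize client_ip on the fluentd side before the record reaches Elasticsearch. A minimal Ruby sketch of the idea (the sanitize_client_ip helper and the client_ip_str fallback field are hypothetical, not part of the plugin):

```ruby
require "ipaddr"

# Hypothetical helper: keep only IPv4 values in the ip-mapped client_ip
# field. IPv6 addresses are moved to a plain string field, and values
# like "unknown" that are not IPs at all are dropped.
def sanitize_client_ip(record)
  ip = record["client_ip"]
  return record if ip.nil?
  addr = IPAddr.new(ip)
  unless addr.ipv4?
    # IPv6: move to a separate string-typed field
    record["client_ip_str"] = record.delete("client_ip")
  end
  record
rescue ArgumentError
  # e.g. "unknown" cannot be parsed as an address at all
  record.delete("client_ip")
  record
end
```

This could be wired into a fluentd filter (e.g. record_transformer with enable_ruby) so that bulk items stop failing on the mapper.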

Thank you, I understand what that means: the parse error is reported because IPv6 is not supported. But could you help me with the original error I described? Data cannot be written to ES.

Is it possible that the missing IPv6 support is what prevents the client from writing data to the ES cluster?

Sorry, I was only explaining what the DEBUG log means: in your mapping you defined a field as ip, but the ip type supports only IPv4: https://www.elastic.co/guide/en/elasticsearch/reference/current/ip.html

I have no idea about the error messages you see on the client. (I don't even know what kind of client you are using.)
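If changing the records is not an option, the mapping side can be changed instead so that client_ip is no longer an ip field. A sketch of an ES 2.0-style index template (the template name, index pattern, and field choice are assumptions based on the index and type names in the logs above):

```ruby
require "json"

# Hypothetical index template mapping client_ip as a non-analyzed string
# instead of the IPv4-only `ip` type. Pattern matches the daily indices
# seen in the logs (ngx_log_i18n_order-YYYY.MM.DD).
template = {
  "template" => "ngx_log_i18n_order-*",
  "mappings" => {
    "fluentd" => {
      "properties" => {
        "client_ip" => { "type" => "string", "index" => "not_analyzed" }
      }
    }
  }
}

puts JSON.pretty_generate(template)
# Could then be applied with something like:
#   curl -XPUT 'http://10.0.0.52:9200/_template/ngx_log_template' -d "$json"
```

Note the template only affects indices created after it is installed; existing daily indices keep their old mapping.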

My Elasticsearch version is 2.0.0.

My client is fluentd (td-agent).

error log:
2015-11-09 01:59:02 +0800 [info]: following tail of /home/work/logs/nginx/sgp.shopapi.b2c.srv.log
2015-11-09 01:59:02 +0800 [info]: detected rotation of /home/work/logs/nginx/sgp.shopapi.notify.b2c.srv.log; waiting 5 seconds
2015-11-09 01:59:02 +0800 [info]: following tail of /home/work/logs/nginx/sgp.shopapi.notify.b2c.srv.log
2015-11-09 02:00:04 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:00:05 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:179:in `perform_request'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/faraday.rb:20:in `perform_request'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/etc/plugin/out_elasticsearch.rb:168:in `send'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/etc/plugin/out_elasticsearch.rb:161:in `write'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:325:in `write_chunk'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:304:in `pop'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/output.rb:321:in `try_flush'
2015-11-09 02:00:04 +0800 [warn]: /home/work/app/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/output.rb:140:in `run'
2015-11-09 02:00:05 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:00:07 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:05 +0800 [warn]: suppressed same stacktrace
2015-11-09 02:00:07 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:00:11 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:07 +0800 [warn]: suppressed same stacktrace
2015-11-09 02:00:11 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:00:19 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:11 +0800 [warn]: suppressed same stacktrace
2015-11-09 02:00:19 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:00:36 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:19 +0800 [warn]: suppressed same stacktrace
2015-11-09 02:00:36 +0800 [warn]: temporarily failed to flush the buffer. next_retry=2015-11-09 02:01:11 +0800 error_class="Elasticsearch::Transport::Transport::Error" error="Cannot get new connection from pool." plugin_id="object:150df38"
2015-11-09 02:00:36 +0800 [warn]: suppressed same stacktrace

When does the "Cannot get new connection from pool" error happen with elasticsearch-ruby?
fluent-plugin-elasticsearch code is here: https://github.com/uken/fluent-plugin-elasticsearch/blob/e44f323c99f3a5aca29b569023b846c1111f87da/lib/fluent/plugin/out_elasticsearch.rb#L52

Is this code wrong?
I want to know whether this error is a fluent-plugin-elasticsearch problem or not.
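For context, elasticsearch-transport raises that message when every node in its connection pool has been marked dead after failed requests, so there is nothing left to hand out. A simplified sketch of the idea (this is illustrative, not the actual library code):

```ruby
# Toy model of a connection pool that raises once all hosts have been
# marked dead, mirroring the "Cannot get new connection from pool."
# error seen in the td-agent logs above.
class TinyPool
  Error = Class.new(StandardError)

  def initialize(hosts)
    @alive = hosts.dup
    @dead  = []
  end

  # Called after a request to a host fails.
  def mark_dead(host)
    @dead << host if @alive.delete(host)
  end

  def get_connection
    conn = @alive.first
    raise Error, "Cannot get new connection from pool." if conn.nil?
    conn
  end
end
```

In the real library, dead connections are periodically resurrected and retried; if the single configured host (10.0.0.52 here) stays unreachable long enough, the flush keeps failing with this error until the node responds again.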

Please start your own thread.

Okay, I just opened a new topic: Elasitcsearch-ruby raises "Cannot get new connection from pool" error