Received response for a request that has timed out, sent [213025ms] ago, timed out [140336ms] ago

I am trying to bulk-insert data from multiple remote relational databases into Elasticsearch. It works perfectly when the number of databases is around 10-15, but as soon as I pull data from 50 or more databases, it starts throwing the error above.
I have changed the "elasticsearch.requestTimeout" parameter to 300000 in the kibana.yml file, but it still doesn't help.
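For reference, this is the exact line in my kibana.yml (the value is in milliseconds):

elasticsearch.requestTimeout: 300000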
I am running the whole stack (Elasticsearch, Logstash and Kibana) in Docker.
My question is: can Elasticsearch handle this much data on a single node? If not, what is the workaround? Are there hardware requirements to consider? My system has 16 GB of RAM and around 800 GB of hard disk space.
I am running version 6.5.4 of Elasticsearch, Logstash and Kibana.

Can you share the whole error message that you're seeing?

How many shards do you have on this node? How many documents are you trying to index per second? How big are they?

elasticsearch | [2019-02-20T10:49:07,833][WARN ][o.e.t.TransportService ] [-3KalQ_] Received response for a request that has timed out, sent [213025ms] ago, timed out [140336ms] ago, action [cluster:monitor/nodes/stats[n]], node [{-3KalQ_}{-3KalQ_QQF6fwONJFNicmg}{C15hePOuTyuas2Yldek2uw}{172.18.0.2}{172.18.0.2:9300}{ml.machine_memory=2076532736, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}], id [48]
logstash | [2019-02-20T10:49:54,846][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
kibana | {"type":"log","@timestamp":"2019-02-20T10:50:22Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search => socket hang up"}
kibana | {"type":"log","@timestamp":"2019-02-20T10:50:26Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana/doc/kql-telemetry%3Akql-telemetry => socket hang up"}

These are the entire error logs. It also throws some errors about the DB queries, but those queries work fine with a smaller number of database connections.

This is normally an indication that the cluster is overloaded, which is why I also asked about your shard count, indexing rate and document size.

I have not customized the number of shards, so I think it is the default of 5 shards per index. I have 5 queries per database and one index per query, so for, say, 60 databases that is 300 indices in total. I do not know exactly how to count the number of shards from the error logs.
The documents are not that big; each query affects somewhere around 25-30 rows of a table, or maybe fewer.
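If the defaults apply (5 primary shards plus 1 replica of each per index), my rough count would be:

300 indices x 5 primary shards = 1500 primary shards
300 indices x 5 replica shards = 1500 replica shards (which I believe stay unassigned on a single node)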

GET _cluster/health will include this information.
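For example, the response reports the shard totals directly (the numbers below are only illustrative):

GET _cluster/health

{
  "cluster_name": "docker-cluster",
  "status": "yellow",
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 1500,
  "active_shards": 1500,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 1500,
  "active_shards_percent_as_number": 50.0
}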

Thank you! 🙂
But could you please suggest a workaround - for example, do I need to use more ES nodes, split the Logstash config files, or increase some timeout values?
I am kind of new to this and don't know much about ES scaling.

Without knowing what the issue is, it's hard to know what the fix might be.

I have analysed my case and I think I need to reshard my indices. Is it possible to change the number of shards from the Logstash config file? I think I need to employ 1-2 shards per index instead of the default 5.

The number of shards in an index is fixed at the time the index is created, but if you're creating new indices daily/weekly/monthly then you can adjust the value in the corresponding index template and wait for the new index to be created.
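For example, something along these lines, where the template name and index pattern are placeholders for whatever your Logstash output actually writes to:

PUT _template/one_shard_per_index
{
  "index_patterns": ["db-*"],
  "settings": {
    "number_of_shards": 1
  }
}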

To reduce the shard count in existing indices you can use the shrink API once there are no more documents for that index and you can mark it as read-only.
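Roughly, that sequence is (index names here are placeholders):

PUT my_old_index/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}

POST my_old_index/_shrink/my_shrunk_index
{
  "settings": {
    "index.number_of_shards": 1
  }
}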


I created a new index template (template.json) as follows -
{
  "index_patterns": ["_contents", "_types"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
and made the Elasticsearch output in logstash.conf use this template by default (the relevant output section is sketched at the end of this post). I also allocated more memory to my Docker container (4 GB) and to the JVM heap (3 GB), but I am still getting the following errors -

kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","monitoring-ui"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search?size=1000&ignore_unavailable=true&filter_path=hits.hits._id => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-/_search?filter_path=hits.total%2Caggregations.jobTypes.buckets%2Caggregations.objectTypes.buckets%2Caggregations.layoutTypes.buckets%2Caggregations.statusTypes.buckets => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana/doc/kql-telemetry%3Akql-telemetry => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search?size=10000&ignore_unavailable=true&filter_path=hits.hits._source.canvas-workpad => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana/doc/config%3A6.5.4 => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","data"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search?ignore_unavailable=true&filter_path=aggregations.types.buckets => read ECONNRESET"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-
/esqueue/_search?version=true => connect ECONNREFUSED 172.19.0.2:9200"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.reporting-*/esqueue/_search?version=true => connect ECONNREFUSED 172.19.0.2:9200"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search => connect ECONNREFUSED 172.19.0.2:9200"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana/doc/kql-telemetry%3Akql-telemetry => connect ECONNREFUSED 172.19.0.2:9200"}
kibana | {"type":"log","@timestamp":"2019-02-28T16:05:11Z","tags":["error","elasticsearch","admin"],"pid":1,"message":"Request error, retrying\nPOST http://elasticsearch:9200/.kibana/_search?size=1000&ignore_unavailable=true&filter_path=hits.hits._id => connect ECONNREFUSED 172.19.0.2:9200"}

But these errors only appear with large data volumes (300+ indices), not with smaller ones. Can anyone help me here?
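For reference, the Elasticsearch output section of my logstash.conf looks roughly like this (the host and the template path stand in for my actual values):

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    template => "/usr/share/logstash/template.json"
    template_name => "template"
    template_overwrite => true
  }
}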

The issue is resolved. I had to push the RAM allocated to the Docker container up to 8 GB and the JVM heap for Elasticsearch to 6 GB. I had no idea it would be so memory-intensive. For running these kinds of huge queries, I guess you need a high-performance system with 32-64 GB of RAM!


Note that the reference manual recommends setting the heap size to no more than 50% of the physical RAM; in a container I think this means the physical RAM allocated to the container. If you want to allocate 6GB of heap to Elasticsearch you should allocate 12GB to the container.
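If you're running this with docker-compose, that could look something like the following sketch (the service name is an assumption, and mem_limit is the version 2 compose syntax; version 3 uses deploy.resources.limits instead):

elasticsearch:
  environment:
    - "ES_JAVA_OPTS=-Xms6g -Xmx6g"
  mem_limit: 12g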

Thanks for this info! 🙂


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.