Internal settings for data limits/transfer between data tiers

I'd like to share my case.

"What's going on? I'm shipping data to a node with a 16 GB memory limit, which gives an 8 GB heap, so why am I getting a circuit_breaking_exception at 3.9 GB?"

The answer: the data was too old. It was caught by another data tier (warm), where the nodes have less memory. It seems the data tier policy applies from the moment of ingestion: the data is redirected there immediately, and in Logstash I get output like this:

:error=>{"type"=>"circuit_breaking_exception", "reason"=>"[parent] Data too large, data for [indices:data/write/bulk[s]] would be [4240251744/3.9gb], which is larger than the limit of [4080218931/3.7gb], real usage: [4218319280/3.9gb], new bytes reserved: [21932464/20.9mb], usages [fielddata=27403199/26.1mb, request=376832/368kb, inflight_requests=109708934/104.6mb, model_inference=0/0b, eql_sequence=0/0b]", "bytes_wanted"=>4240251744, "bytes_limit"=>4080218931, "durability"=>"TRANSIENT"}}
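The numbers in the log actually confirm this. Assuming the warm node runs with the default parent circuit breaker (`indices.breaker.total.limit` of 95% of the JVM heap when real-memory tracking is enabled), the logged `bytes_limit` of 4080218931 corresponds to a 4 GiB heap, not the 8 GB heap of the hot node. A quick sanity check:

```python
# Sketch: work backwards from the breaker limit in the error log.
# Assumes the default parent breaker of 95% of heap (indices.breaker.total.limit).
heap_bytes = 4 * 1024**3            # 4 GiB heap on the warm-tier node
limit = int(heap_bytes * 0.95)      # default parent circuit breaker limit

print(limit)                        # 4080218931 -- matches bytes_limit in the log
```

So the bulk request never hit the 8 GB-heap node at all; it was rejected by a warm node whose heap is half the size.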
