I have a Ruby program that ingests data into Elasticsearch (using the bulk API).
I am seeing periodic failures on bulk uploads which raise the exception <Elastic::Transport::Transport::Error: Net::ReadTimeout with #<TCPSocket:(closed)>>.
These come in clusters, so I assume they are triggered by load on the ES clusters.
We recently upgraded to version 8, which does not have a separate elasticsearch-transport gem. I had code that handled these errors, but now I can't find exception constants under Elastic::Transport to rescue.
How can I trap these errors?
Current code:

index_params = { index: index_n, body: batch } # timeout: 10 was here previously
begin
  r = @conn.perform_api_request(:bulk, index_params, true)
# rescue Elasticsearch::Transport::Transport::Errors => e
#   $logger.warn "bulk error #{e.message}"
#   @transport_failure = true
rescue => e
  $logger.warn "bulk error #{e.inspect}"
  if e.message == 'execution expired' || e.message.match(/^Failed to open/) # timeout
    @transport_failure = true
  end
end
(The pre-version-8 rescue is left commented out.)
It now outputs:
[2025-11-26T10:50:32] WARN : bulk error #<Elastic::Transport::Transport::Error: Net::ReadTimeout with #<TCPSocket:(closed)>>
[2025-11-26T10:50:42] WARN : bulk error #<Elastic::Transport::Transport::Error: Net::ReadTimeout with #<TCPSocket:(closed)>>
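Since that log output shows the exception class is Elastic::Transport::Transport::Error, one option I'm considering is rescuing that base class directly, plus Net::ReadTimeout for any unwrapped timeouts. A minimal sketch of the pattern, with the gem's class hierarchy stubbed out so it runs standalone (the real elastic-transport gem defines the constant, as the log suggests):

```ruby
require 'net/protocol' # standard library; defines Net::ReadTimeout

# Stub standing in for the real elastic-transport gem, so this sketch
# runs without it. The real gem defines this same constant.
module Elastic
  module Transport
    module Transport
      class Error < StandardError; end
    end
  end
end

# Returns true when the yielded bulk call failed at the transport level,
# false when it succeeded.
def transport_failed?
  yield
  false
rescue Elastic::Transport::Transport::Error, Net::ReadTimeout => e
  $stderr.puts "bulk error #{e.inspect}"
  true
end

# Usage with my client would be something like:
#   @transport_failure = transport_failed? { @conn.perform_api_request(:bulk, index_params, true) }
```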
Also, any ideas about what is causing the problem at the ES end?
Grasping at straws, I have reduced the batch size from 5000 to 1000. Docs are small, on the order of 1 KB, with fewer than a dozen fields.
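Besides shrinking the batch, I'm considering wrapping each bulk call in a retry with exponential backoff to ride out the load spikes. A hypothetical sketch (with_retries and its parameters are my own names, not anything from the elasticsearch gem):

```ruby
# Retry a block up to max_attempts times, sleeping base_delay,
# 2*base_delay, 4*base_delay, ... between attempts; re-raise the
# error once attempts are exhausted.
def with_retries(max_attempts: 3, base_delay: 0.5)
  attempts = 0
  begin
    attempts += 1
    yield attempts
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * (2**(attempts - 1)))
    retry
  end
end

# Usage would be:
#   with_retries { @conn.perform_api_request(:bulk, index_params, true) }
```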