An existing connection was forcibly closed by the remote host

Exception in thread "main" java.io.IOException: 远程主机强迫关闭了一个现有的连接。
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:964)
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:928)
at com.wuzhou.utils.EsUtil4Scala$.addOneDoc(EsUtil4Scala.scala:77)
at com.wuzhou.calculateTest$.calculate(calculateTest.scala:132)
at com.wuzhou.calculateTest$.main(calculateTest.scala:24)
at com.wuzhou.calculateTest.main(calculateTest.scala)
Caused by: java.io.IOException: 远程主机强迫关闭了一个现有的连接。
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.http.impl.nio.codecs.AbstractContentEncoder.doWriteChunk(AbstractCo

For those of us that don't read the language in which the exception message is written, it apparently translates to "The remote host forcibly closed an existing connection" as per the title.

What's your question @279zlj?

Welcome to our community! :smiley:

To reiterate David's comments, posting a log entry with no extra information makes it very difficult to understand what you are looking for.

If English is not your native language, then we do have other language-specific categories here - https://discuss.elastic.co/c/in-your-native-tongue/11

An error occurred when writing large amounts of data to elasticsearch: the remote host forcibly closed an existing connection. The error will not appear if the number of writes is reduced. Why?

What version of Elasticsearch are you using?
How do you know there are large amounts of writes happening? What is large?
How are the writes being sent to Elasticsearch?
What is the output from the _cluster/stats?pretty&human API?

Version: 6.8.4
I use Spark to pull out 120,000 records, concatenate them into a single string, and write it with es-hadoop, which produces this error. I later tested writing 15 million records one by one, without concatenation, but the last one to two million could not be written.
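Writing millions of documents one request at a time means one HTTP round trip per document, which makes a dropped connection somewhere in the stream very likely. A minimal sketch of the usual mitigation, grouping documents into fixed-size batches before each send (names here are illustrative, not from the original EsUtil4Scala code):

```scala
// Hypothetical sketch: group documents into fixed-size batches so each
// HTTP request can carry many documents instead of one. The batch size
// of 1000 mirrors the es.batch.size.entries value mentioned later in
// this thread.
object BatchSketch {
  // Split the documents into batches of at most `batchSize` each.
  def toBatches[A](docs: Seq[A], batchSize: Int): Iterator[Seq[A]] =
    docs.grouped(batchSize)

  def main(args: Array[String]): Unit = {
    val docs = (1 to 2500).map(i => s"""{"id": $i}""")
    val sizes = toBatches(docs, 1000).map(_.size).toList
    println(sizes.mkString(","))  // prints 1000,1000,500
  }
}
```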
_cluster/stats?pretty&human output:
{
  "_nodes" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "QTfP-vDSSzeqrNzE-pOC5Q",
  "timestamp" : 1611123928272,
  "status" : "green",
  "indices" : {
    "count" : 14,
    "shards" : {
      "total" : 26,
      "primaries" : 26,
      "replication" : 0.0,
      "index" : {
        "shards" : {
          "min" : 1,
          "max" : 5,
          "avg" : 1.8571428571428572
        },
        "primaries" : {
          "min" : 1,
          "max" : 5,
          "avg" : 1.8571428571428572
        },
        "replication" : {
          "min" : 0.0,
          "max" : 0.0,
          "avg" : 0.0
        }
      }
    },
    "docs" : {
      "count" : 12784432,
      "deleted" : 23674
    },
    "store" : {
      "size" : "2.1gb",
      "size_in_bytes" : 2350123118
    },
    "fielddata" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0
    },
    "query_cache" : {
      "memory_size" : "516.9kb",
      "memory_size_in_bytes" : 529319,
      "total_count" : 386,
      "hit_count" : 0,
      "miss_count" : 386,
      "cache_size" : 3,
      "cache_count" : 3,
      "evictions" : 0
    },
    "completion" : {
      "size" : "0b",
      "size_in_bytes" : 0
    },
    "segments" : {
      "count" : 76,
      "memory" : "9.9mb",
      "memory_in_bytes" : 10423911,
      "terms_memory" : "7.8mb",
      "terms_memory_in_bytes" : 8190039,
      "stored_fields_memory" : "733.5kb",
      "stored_fields_memory_in_bytes" : 751152,
      "term_vectors_memory" : "0b",
      "term_vectors_memory_in_bytes" : 0,
      "norms_memory" : "3.6kb",
      "norms_memory_in_bytes" : 3712,
      "points_memory" : "757.4kb",
      "points_memory_in_bytes" : 775648,
      "doc_values_memory" : "686.8kb",
      "doc_values_memory_in_bytes" : 703360,
      "index_writer_memory" : "0b",
      "index_writer_memory_in_bytes" : 0,
      "version_map_memory" : "0b",
      "version_map_memory_in_bytes" : 0,
      "fixed_bit_set" : "136.2kb",
      "fixed_bit_set_memory_in_bytes" : 139512,
      "max_unsafe_auto_id_timestamp" : 1610603538037,
      "file_sizes" : { }
    }
  },
  "nodes" : {
    "count" : {
      "total" : 1,
      "data" : 1,
      "coordinating_only" : 0,
      "master" : 1,
      "ingest" : 1
    },
    "versions" : [
      "6.8.4"
    ],
    "os" : {
      "available_processors" : 16,
      "allocated_processors" : 16,
      "names" : [
        {
          "name" : "Linux",
          "count" : 1
        }
      ],
      "pretty_names" : [
        {
          "pretty_name" : "Ubuntu 18.04.5 LTS",
          "count" : 1
        }
      ],
      "mem" : {
        "total" : "47.1gb",
        "total_in_bytes" : 50639941632,
        "free" : "9.2gb",
        "free_in_bytes" : 9881591808,
        "used" : "37.9gb",
        "used_in_bytes" : 40758349824,
        "free_percent" : 20,
        "used_percent" : 80
      }
    },
    "process" : {
      "cpu" : {
        "percent" : 0
      },
      "open_file_descriptors" : {
        "min" : 495,
        "max" : 495,
        "avg" : 495
      }
    },
    "jvm" : {
      "max_uptime" : "5.2d",
      "max_uptime_in_millis" : 449839381,
      "versions" : [
        {
          "version" : "1.8.0_144",
          "vm_name" : "Java HotSpot(TM) 64-Bit Server VM",
          "vm_version" : "25.144-b01",
          "vm_vendor" : "Oracle Corporation",
          "count" : 1
        }
      ],
      "mem" : {
        "heap_used" : "3.6gb",
        "heap_used_in_bytes" : 3960643232,
        "heap_max" : "7.8gb",
        "heap_max_in_bytes" : 8476557312
      },
      "threads" : 178
    },
    "fs" : {
      "total" : "146.1gb",
      "total_in_bytes" : 156931440640,
      "free" : "92.2gb",
      "free_in_bytes" : 99064602624,
      "available" : "84.7gb",
      "available_in_bytes" : 91021606912
    },
    "plugins" : [ ],
    "network_types" : {
      "transport_types" : {
        "security4" : 1
      },
      "http_types" : {
        "security4" : 1
      }
    }
  }
}

Are you using the bulk API?
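For reference, a bulk request packs many index operations into a single `_bulk` call: the body is newline-delimited JSON, one action line followed by one document line per operation. A sketch of building that body by hand (the index name `my-index` is illustrative; on 6.x a type such as `_doc` is still required):

```scala
// Sketch of the _bulk request body format (NDJSON): for each document,
// one action line naming the target index, then the document source,
// each terminated by a newline. "my-index" is an illustrative name.
object BulkBodySketch {
  def bulkBody(index: String, docs: Seq[String]): String =
    docs.map { doc =>
      s"""{"index":{"_index":"$index","_type":"_doc"}}""" + "\n" + doc + "\n"
    }.mkString

  def main(args: Array[String]): Unit = {
    print(bulkBody("my-index", Seq("""{"id":1}""", """{"id":2}""")))
  }
}
```

With the RestHighLevelClient shown in the stack trace, the equivalent is accumulating operations in a `BulkRequest` and calling `bulk()` once per batch, instead of one `index()` call per document.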

I set two es-spark configs at context init:
es.batch.size.bytes: 15mb
es.batch.size.entries: 1000
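Those two settings cap the batch size per Spark task writer. If the single node is being overwhelmed, es-hadoop also exposes retry settings for failed bulk batches; the sketch below shows the two batch-size values from this thread alongside the retry options at what I believe are their default values (verify against the es-hadoop configuration docs for your version):

```
es.batch.size.bytes = 15mb       # flush once this much data is buffered per task
es.batch.size.entries = 1000     # ...or once this many documents are buffered
es.batch.write.retry.count = 3   # retries for a bulk batch that fails/is rejected
es.batch.write.retry.wait = 10s  # wait between bulk write retries
```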

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.