Just to give more context: we were trying to load test an Elasticsearch cluster (150 data nodes) deployed on AWS EC2 instances. We could not reach the desired ingestion throughput despite applying all the recommended settings for better indexing performance.
We then found that the problem was network bandwidth. The AWS i3.2xlarge instances come with up to 10 Gbit/s of network bandwidth, and that network was saturated.
So we could not ingest data at a higher throughput, and I was thinking we could improve it by compressing the bulk payload. I could not find any documentation on how to compress the payload using the Java high-level REST client. Any inputs would be really helpful.
I believe that the only effect of this line is to indicate to Elasticsearch that the client will accept a compressed response. If you want the client to compress its requests too, you should call RestClientBuilder#setCompressionEnabled instead.
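For example, a minimal sketch of building a client with request compression enabled. The host and port are hypothetical, and this assumes a version of elasticsearch-rest-client on the classpath in which setCompressionEnabled exists (see the version discussion below):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

// Sketch, not a tested configuration: setCompressionEnabled lives on the
// low-level RestClientBuilder, which the high-level client wraps.
public class CompressedClient {
    public static RestHighLevelClient build() {
        return new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")) // hypothetical endpoint
                // gzip request bodies (including _bulk) and accept gzipped responses
                .setCompressionEnabled(true));
    }
}
```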
Note that compression takes substantial extra CPU effort. An i3.2xlarge only has 8 vCPUs; I think you'd need several times that number to reach 10 Gbps of throughput.
RestClientBuilder#setCompressionEnabled is available only from elasticsearch-rest-client version 7.10.1.
But we are running Elasticsearch server version 7.8.0.
Is it possible to use REST client version 7.10.1 to compress bulk requests and ingest into an Elasticsearch 7.8.0 server?
Ah yes, it was introduced in 7.10.0 (see below). I don't know of another way to do it with the high-level REST client, although of course it's pretty simple to use a bare HTTP client to send bulk requests with compression.
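In case it helps, here is a minimal sketch of that bare-HTTP-client approach using only the JDK: gzip the NDJSON bulk body yourself and send it with a Content-Encoding: gzip header. The endpoint URL and index name are hypothetical; Elasticsearch accepts gzipped request bodies when http.compression is enabled on the server (it is by default).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public final class BulkGzip {

    // Compress arbitrary bytes with gzip.
    public static byte[] gzip(byte[] raw) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        }
        return bos.toByteArray();
    }

    // Decompress; used here only to sanity-check the round trip.
    public static byte[] gunzip(byte[] compressed) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A two-action bulk body; note the trailing newline that NDJSON requires.
        String bulk =
            "{\"index\":{\"_index\":\"my-index\"}}\n" +
            "{\"field\":\"value-1\"}\n" +
            "{\"index\":{\"_index\":\"my-index\"}}\n" +
            "{\"field\":\"value-2\"}\n";
        byte[] raw = bulk.getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(raw);

        // To actually send it, something along these lines (hypothetical endpoint):
        // HttpURLConnection conn = (HttpURLConnection)
        //     new URL("http://localhost:9200/_bulk").openConnection();
        // conn.setRequestMethod("POST");
        // conn.setDoOutput(true);
        // conn.setRequestProperty("Content-Type", "application/x-ndjson");
        // conn.setRequestProperty("Content-Encoding", "gzip");
        // conn.getOutputStream().write(compressed);

        System.out.println("raw=" + raw.length + " bytes, gzipped=" + compressed.length + " bytes");
    }
}
```

The payoff grows with batch size: repetitive JSON field names compress well, so large bulk bodies often shrink severalfold, at the CPU cost mentioned above.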