We are using the Hive/Elasticsearch connector (elasticsearch-hadoop) to batch-index data into ES. Can someone please share the indexer-side configuration for enabling compression (gzip/deflate) of the data in transit (bulk writes), in order to reduce network consumption during indexing?
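For context, here is roughly how we pass connector settings today, via the Hive table definition; the table, index, and node names below are placeholders, and only settings we actually use are shown:

```sql
-- Minimal sketch of our Hive-side setup; names are placeholders.
CREATE EXTERNAL TABLE logs_es (id STRING, message STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource' = 'logs/doc',       -- target index/type
  'es.nodes'    = 'es-node1:9200'   -- ES cluster entry point
);
```

Presumably a compression setting, if one exists, would go in these TBLPROPERTIES as well.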
I was already able to enable and test decompression of the data on the ES side.
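For reference, the server-side part is just the HTTP compression settings in elasticsearch.yml (values below are illustrative):

```yaml
# elasticsearch.yml -- let the HTTP layer accept gzipped request
# bodies and compress responses for clients that ask for it.
http.compression: true
http.compression_level: 3   # 1-9; higher = more CPU, smaller payloads
```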
Thank you for the reply, james.baiera. Could you please suggest any alternative options, such as tweaking the Spark ES indexing code, or any other approach?
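For example, would something along these lines be a reasonable direction? A rough sketch of bypassing the connector for the write path and POSTing gzipped _bulk payloads directly from the Spark executors, assuming a hypothetical DataFrame `df` of documents and placeholder node/index names (not production code):

```scala
import java.io.ByteArrayOutputStream
import java.nio.charset.StandardCharsets
import java.util.zip.GZIPOutputStream

import org.apache.http.client.methods.HttpPost
import org.apache.http.entity.{ByteArrayEntity, ContentType}
import org.apache.http.impl.client.HttpClients

// df is a hypothetical DataFrame; node and index names are placeholders.
df.toJSON.rdd.foreachPartition { docs =>
  val client = HttpClients.createDefault()
  val ndjson = ContentType.create("application/x-ndjson")
  try {
    docs.grouped(1000).foreach { batch =>
      // Build the NDJSON _bulk body: one action line + one source line per doc.
      val body = batch.map(doc => "{\"index\":{}}\n" + doc).mkString("", "\n", "\n")

      // Gzip the payload on the executor before it goes over the wire.
      val buf = new ByteArrayOutputStream()
      val gz  = new GZIPOutputStream(buf)
      gz.write(body.getBytes(StandardCharsets.UTF_8))
      gz.close()

      val post = new HttpPost("http://es-node1:9200/logs/_bulk")
      post.setHeader("Content-Encoding", "gzip") // tells ES the body is gzipped
      post.setEntity(new ByteArrayEntity(buf.toByteArray, ndjson))
      client.execute(post).close() // real code should check the response status
    }
  } finally client.close()
}
```

I realize this gives up the connector's retry and error-handling logic, so it would only be worth it if the network savings are significant.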