Enable compression while indexing data using Hive/Elastic connector

Hi Team,

We are using the Hive/Elasticsearch connector (ES-Hadoop) for batch indexing data into ES. Can someone please share the indexer-side configuration to enable compression (gzip/deflate) for data in transit (bulk writes), in order to reduce network consumption during indexing?
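For context, our setup looks roughly like the standard ES-Hadoop Hive integration (a sketch; the index, host, and field names below are illustrative, not our actual values):

```sql
-- Illustrative example: an external Hive table backed by Elasticsearch
-- via the ES-Hadoop connector. Index/field names are placeholders.
ADD JAR elasticsearch-hadoop.jar;

CREATE EXTERNAL TABLE logs_es (
  event_time TIMESTAMP,
  message    STRING
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES (
  'es.resource' = 'logs/_doc',      -- target index
  'es.nodes'    = 'es-host:9200'    -- ES cluster endpoint
);

-- Bulk indexing into ES happens on INSERT:
INSERT OVERWRITE TABLE logs_es
SELECT event_time, message FROM logs_hive;
```

It is the bulk writes triggered by that INSERT that we would like to compress on the wire.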

I have already tested enabling decompression of incoming data on the ES side.
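For reference, a minimal sketch of the ES-side setting involved (assuming this is configured in `elasticsearch.yml`; the `http.compression` setting controls whether the node handles compressed HTTP request/response bodies):

```yaml
# elasticsearch.yml (ES side)
# Allow the node to accept and serve compressed HTTP bodies.
http.compression: true
```

This only covers the receiving end, though; the question is how to make the indexer actually send compressed bulk requests.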

Thanks,
Sachin

Unfortunately we do not support compressing data in ES-Hadoop at the moment due to restrictions with the client libraries that are used: https://github.com/elastic/elasticsearch-hadoop/issues/804.

Hopefully in the future though!


Thank you for the reply, james.baiera. Could you please suggest any alternate options, such as tweaking the Spark ES indexing code, or any other approach?