Issue with bulk indexing using the Elasticsearch Python client

Hi,

I am continuously getting an exception while doing a bulk index. There are only 800 documents in total. Even setting a timeout of 300 seconds does not help. After around 400 documents have been inserted, the error below is raised. This exception has been reported by other people too; kindly look into it. A simplified sketch of the indexing code follows the traceback.

Traceback (most recent call last):
  File "D:/projects/test/exp_mapping_changer.py", line 94, in <module>
    change_exp_structure()
  File "D:/projects/test/exp_mapping_changer.py", line 64, in change_exp_structure
    print helpers.bulk(prod_es, to_insert)
  File "C:\Users\acer\dev-env\lib\site-packages\elasticsearch\helpers\__init__.py", line 182, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "C:\Users\acer\dev-env\lib\site-packages\elasticsearch\helpers\__init__.py", line 124, in streaming_bulk
    raise e
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=host, port=port): Read timed out. (read timeout=10))
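
For context, here is a simplified sketch of the relevant part of exp_mapping_changer.py (the host, index and type names are placeholders, the real document construction is omitted, and the commented-out line shows where I tried the 300-second timeout):

    from elasticsearch import Elasticsearch, helpers

    prod_es = Elasticsearch(['localhost:9200'])                  # placeholder host
    # prod_es = Elasticsearch(['localhost:9200'], timeout=300)   # the 300s timeout I also tried

    # roughly 800 small documents; the real ones are built earlier in the script
    docs = [{'field': 'value %d' % i} for i in range(800)]

    to_insert = [
        {
            '_index': 'my_index',   # placeholder index name
            '_type': 'my_type',     # placeholder type name
            '_id': i,
            '_source': doc,
        }
        for i, doc in enumerate(docs)
    ]

    # the call that times out after roughly 400 documents
    print helpers.bulk(prod_es, to_insert)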

Regards
Bharvi

How many nodes do you have in your cluster? When you bulk index, did you check the cluster health status? Did you also check the ES nodes' log directories for exceptions/errors or slow indexing?
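
You can check the health directly from the same Python client, along these lines (the host is just an example):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(['localhost:9200'])   # point this at any node of your cluster

    # overall status (green/yellow/red), node count, unassigned shards, etc.
    print es.cluster.health()

    # the same information broken down per index
    print es.cluster.health(level='indices')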

Hi,
I have two data nodes, one master node and one client node in my cluster.
I found no issues in the logs. Cluster health is always green, though I
have not enabled the slow logs. I have 8 shards in my index.

Surprisingly, when I index the data in bulk using the Java client, it works
fine.

Another observation: when I create only one shard for the index, I am able
to index the data through the Python client without any error. I created the
single-shard test index as sketched below.
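
For what it's worth, the single-shard test index was created roughly like this (the index name and settings here are simplified placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(['localhost:9200'])   # placeholder host

    # same mapping and data, but only one primary shard instead of eight
    es.indices.create(index='my_index_single_shard', body={
        'settings': {
            'number_of_shards': 1,
            'number_of_replicas': 1,
        },
    })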

Regards

Bharvi Dixit
Software Engineer