> What's your heap size? What version are you on? What sort of data is it? How many shards+indices and how big are they? How are you sending it to Elasticsearch?
Heap size: 1g
ES version: 7.1.1
The data is basically an LDIF file (about 6 GB) that I convert to JSON; it has a large number of fields, probably around a hundred.
```
index                shard prirep state      docs store  ip            node
.kibana_1            0     r      STARTED       8 49kb   178.90.91.90  es-node-3
.kibana_1            0     p      STARTED       8 49kb   178.90.91.146 es-node-2
.kibana_task_manager 0     p      STARTED       2 12.8kb 178.90.91.146 es-node-2
.kibana_task_manager 0     r      STARTED       2 12.8kb 178.90.91.223 es-node-1
ldap-listener-stat   2     p      STARTED    5290 1.2mb  178.90.91.90  es-node-3
ldap-listener-stat   2     r      STARTED    5290 1.3mb  178.90.91.146 es-node-2
ldap-listener-stat   1     p      STARTED    5217 1.2mb  178.90.91.146 es-node-2
ldap-listener-stat   1     r      STARTED    5217 1.3mb  178.90.91.223 es-node-1
ldap-listener-stat   0     p      STARTED    5155 1.2mb  178.90.91.90  es-node-3
ldap-listener-stat   0     r      STARTED    5155 1.2mb  178.90.91.223 es-node-1
```
I have no other indices; the index I was trying to bulk index into has been deleted.
I have a Python script that reads the LDIF file, converts each entry to JSON, and indexes it with parallel_bulk.
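A minimal sketch of that kind of script, assuming the `ldif` PyPI package for parsing and the `elasticsearch` client's `parallel_bulk` helper; the host, file path, and index name `ldap-data` are hypothetical placeholders, not the real values:

```python
# Sketch only; host, path, and index name below are assumed placeholders.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import parallel_bulk
from ldif import LDIFParser  # assumes the "ldif" PyPI package

es = Elasticsearch(["http://localhost:9200"])  # assumed cluster address

def generate_actions(path):
    """Yield one bulk action per LDIF entry."""
    with open(path, "rb") as ldif_file:
        for dn, entry in LDIFParser(ldif_file).parse():
            # entry maps each attribute name to a list of values
            yield {
                "_index": "ldap-data",  # hypothetical index name
                "_source": {"dn": dn, **entry},
            }

# parallel_bulk returns a lazy generator, so it must be consumed
# for any indexing to actually happen.
for ok, item in parallel_bulk(es, generate_actions("export.ldif"),
                              thread_count=4, chunk_size=500):
    if not ok:
        print("failed:", item)
```

With only a 1g heap on each node and documents this wide (around a hundred fields), large bulk requests can put a lot of pressure on the data nodes, so it may be worth lowering chunk_size and watching heap usage while the load runs.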