Thank you for your links.
I figured out that our shard sizes are very small:
GET _cat/shards/logs-2018.05.29
logs-2018.05.29 1 r STARTED 196755 176.9mb 192.168.2.161 db1
logs-2018.05.29 1 p STARTED 196755 176.7mb 192.168.2.162 db2
logs-2018.05.29 3 r STARTED 197144 176.9mb 192.168.2.161 db1
logs-2018.05.29 3 p STARTED 197144 176.6mb 192.168.2.162 db2
logs-2018.05.29 4 p STARTED 196554 176.2mb 192.168.2.161 db1
logs-2018.05.29 4 r STARTED 196554 177.4mb 192.168.2.162 db2
logs-2018.05.29 2 p STARTED 196979 177.8mb 192.168.2.161 db1
logs-2018.05.29 2 r STARTED 196979 176.7mb 192.168.2.162 db2
logs-2018.05.29 0 p STARTED 196328 176.1mb 192.168.2.161 db1
logs-2018.05.29 0 r STARTED 196328 176.5mb 192.168.2.162 db2
As I understand it, a common target is a shard size of around 30 GB. Each of my daily indices holds only about 900 MB in total (5 primaries of ~177 MB each), so having number_of_shards set to 5 is probably my problem. I will set this parameter to 2.
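Since the shard count of an existing index cannot be changed, the new setting has to go into an index template so it applies to each newly created daily index. A minimal sketch, assuming the daily indices match `logs-*` and Elasticsearch 6.x (on 5.x, use `"template": "logs-*"` instead of `index_patterns`):

```
PUT _template/logs
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}
```

Already-created indices such as logs-2018.05.29 will keep their 5 shards; only indices created after the template is in place pick up the new value.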