How many shards? If you have too few shards, each shard gets too big. Shards larger than about 10 GB typically give poor performance for both indexing and searching because of segment operations.
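A quick way to check whether any shard has grown past that size is the cat shards API. Here is a minimal sketch using the official elasticsearch-py client; the host URL is only an assumption, so point it at one of your own nodes:

from elasticsearch import Elasticsearch

# Placeholder host; use any node of your cluster.
es = Elasticsearch(["http://localhost:9200"])

# _cat/shards lists every shard with its document count and on-disk size,
# which makes shards well past ~10 GB easy to spot.
print(es.cat.shards(v=True, h="index,shard,prirep,docs,store"))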
I have a 7-node cluster. Each node has 16 GB RAM, an 8-core CPU, and CentOS 6.
Heap memory is 9000m.
1 master (non-data)
1 standby master-eligible node (non-data)
5 data nodes
There are 10 indexes; one index is large, with 55 million documents and 254 GiB (508 GiB with replicas) on disk.
Every second, 5-10 new documents are indexed.
The problem is that search is a bit slow, averaging 2000 to 5000 ms; some queries take around 1 second.
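To see exactly which queries fall in that 2-5 second range, one option is to turn on the search slowlog for the big index. A rough sketch with elasticsearch-py, where the index name "big_index" and the thresholds are just assumptions for illustration:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Dynamically set slowlog thresholds on the large index; any query or fetch
# phase slower than these limits gets written to the search slowlog.
es.indices.put_settings(
    index="big_index",
    body={
        "index.search.slowlog.threshold.query.warn": "2s",
        "index.search.slowlog.threshold.query.info": "1s",
        "index.search.slowlog.threshold.fetch.warn": "1s",
    },
)

The slowlog entries then show which queries, and on which shards, are the slow ones.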
I have 5 nodes and 10 shards, and the shards being searched hold 45 GB of data.
We have 5 nodes, 10 shards, and 1 replica, and each shard is about 28 GB in size.
Thanks.
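With 10 shards at roughly 28 GB each, the primary shard count cannot be changed on the existing index; the usual fix is to create a new index with more primaries and copy the data across. A minimal sketch with elasticsearch-py and its scan/bulk reindex helper; the index names and the shard count of 25 are assumptions for illustration, not figures from this thread:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])

# Create a new index with more primary shards so each shard stays closer
# to the ~10 GB guideline. Names and counts here are placeholders.
es.indices.create(
    index="big_index_v2",
    body={"settings": {"number_of_shards": 25, "number_of_replicas": 1}},
)

# Copy documents from the old index into the new one via scan + bulk.
helpers.reindex(es, source_index="big_index", target_index="big_index_v2")

Once the copy finishes and is verified, searches can be pointed (or aliased) at the new index.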