After loading a large amount of data into a pre-production cluster (7 shards, replication 1, on 3 machines), I have a better understanding of how my index will grow.
What I'm still missing is how much memory versus disk space I should provision for acceptable performance.
On Wed, Jan 18, 2012 at 10:24 PM, Shay Banon kimchy@gmail.com wrote:
Use node stats to see where memory is spent. Look mainly at the field data cache (related to sorting and faceting) and the JVM memory used; together they will give you an indication of whether you are running low on memory. The bigdesk plugin can visualize it.
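As a rough sketch of how one might read those numbers out of a node stats response (fetched with something like `curl 'localhost:9200/_nodes/stats'`): the JSON below is an illustrative excerpt, and the exact field names are assumptions that vary across Elasticsearch versions, so check them against your cluster's actual output.

```python
import json

# Illustrative excerpt of a node stats response. The key names here
# ("indices.fielddata", "jvm.mem.heap_used_in_bytes", etc.) are assumptions
# based on newer releases; older versions expose the field data cache under
# different keys, so adapt to what your cluster returns.
SAMPLE_STATS = """
{
  "nodes": {
    "abc123": {
      "name": "node-1",
      "indices": {"fielddata": {"memory_size_in_bytes": 536870912}},
      "jvm": {"mem": {"heap_used_in_bytes": 3221225472,
                      "heap_max_in_bytes": 4294967296}}
    }
  }
}
"""

def memory_report(stats_json):
    """Summarize heap usage and field data cache size per node."""
    report = {}
    for node in json.loads(stats_json)["nodes"].values():
        heap_used = node["jvm"]["mem"]["heap_used_in_bytes"]
        heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
        fielddata = node["indices"]["fielddata"]["memory_size_in_bytes"]
        report[node["name"]] = {
            # High heap usage plus a large field data cache suggests
            # sorting/faceting is eating your memory headroom.
            "heap_used_pct": round(100.0 * heap_used / heap_max, 1),
            "fielddata_mb": fielddata // (1024 * 1024),
        }
    return report

print(memory_report(SAMPLE_STATS))
```

Watching these two numbers while you load representative data is one way to decide how much heap to provision relative to disk.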