Once again I have an issue with the capacity of my cluster.
I have a cluster of 3 servers, each with 30 GB of RAM, 8 CPUs, and a 1 TB disk.
There are 1,323,957,069 docs (1.64 TB) in it; the document distribution is […].
All 3 nodes are data nodes.
The indexing throughput is around 10-20k documents per minute (it's a
Logstash -> Elasticsearch setup; we store various logs in the cluster).
My concerns are the following:
- When I load the index page of Kibana, the document types panel takes
about a minute to load. Is that OK?
- For the document type user_account, when I try to build a terms panel
for the field "message.raw" (a string of 20-30 characters), my cluster
runs into memory trouble. In the logs I find the following:
[2014-09-11 08:03:34,507][ERROR][indices.fielddata.breaker] [morbius] New
used memory 6499531395 [6gb] from field [message.raw] would be larger than
configured breaker: 6414558822 [5.9gb], breaking
But despite the breaker, when the cluster tries to compute that terms pie,
it stops indexing incoming documents. The queue builds up. Then I start
seeing heap exceptions, and the only way I have found to recover is to
restart the cluster.
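For context, as far as I understand the 1.x docs: the limit in that log line comes from indices.fielddata.breaker.limit (60% of the heap by default), while the fielddata cache itself is unbounded unless indices.fielddata.cache.size is set. A sketch of what I mean in elasticsearch.yml; the values here are illustrative, not my actual config:

    # evict old fielddata before the cache fills the heap (unbounded by default)
    indices.fielddata.cache.size: 30%
    # keep the breaker above the cache size, so eviction happens before it trips
    indices.fielddata.breaker.limit: 40%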
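One thing I am considering, from my reading of the 1.x docs, is switching message.raw to the doc_values fielddata format, so a terms panel reads the field from disk instead of loading it all into heap. Since doc values are written at index time, this would only affect newly created indices, so it would go into the index template that the daily Logstash indices are created from. A rough sketch; the template name is made up and the mapping is simplified compared to what Logstash actually generates:

    curl -XPUT 'localhost:9200/_template/logstash_doc_values' -d '{
      "template": "logstash-*",
      "mappings": {
        "user_account": {
          "properties": {
            "message": {
              "type": "string",
              "fields": {
                "raw": {
                  "type": "string",
                  "index": "not_analyzed",
                  "fielddata": { "format": "doc_values" }
                }
              }
            }
          }
        }
      }
    }'

I have not tried this on the production cluster yet, so corrections are welcome.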
My question is this:
It looks like I have quite powerful servers and a correct configuration
(my ES_HEAP_SIZE is set to 15g), yet they are still unable to process
the 1.5 TB of data, or they do so very slowly.
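For scale: 15g is half of the 30 GB per node, which matches the usual advice of giving Elasticsearch half the RAM and leaving the rest to the filesystem cache, and 1.64 TB across 3 nodes is roughly 550 GB of index per node against 15 GB of heap each.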
Do you have any advice on how to overcome this and make my cluster
respond faster? How should I adjust the infrastructure? What hardware
would I need to handle 1.5 TB in a reasonable amount of time?
Any thoughts are welcome.