I have an Elasticsearch 5.1.2 cluster on Windows Server 2012 R2.
One of the servers in the cluster has an issue that I don't know how to solve (I'm new to Elasticsearch).
The log file suggests that a request is too large:
path:"Request Path" params: {index=Index name, type=ssoelasticlogger}
org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<http_request>] would be larger than limit of [23960249958/22.3gb]
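For reference, the per-node circuit-breaker limits and current estimated usage can be inspected with the node stats API. This is a sketch assuming a node is reachable on localhost:9200; adjust the host and port for your cluster:

```shell
# Show circuit-breaker limits and estimated usage for every node.
# The "parent" breaker is the one tripping in the error above.
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'
```

The parent breaker limit is derived from a percentage of the JVM heap (`indices.breaker.total.limit`), so the [22.3gb] figure in the error reflects the configured heap size.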
That is a very, very old version. I would recommend that you upgrade as a lot of improvements have been made in the last few years.
It is generally important that the heap is set to below 32GB so that compressed ordinary object pointers can be used. You should be able to see this in the logs at startup, at least in more recent versions.
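One way to check whether compressed ordinary object pointers are in use, without digging through the startup logs, is the nodes info API (host and port below are assumptions; point them at any node in your cluster):

```shell
# The JVM section of the nodes info API reports whether compressed oops are enabled.
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep using_compressed_ordinary_object_pointers
```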
To get a better understanding of the cluster, can you provide the full output of the cluster stats API?
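If it helps, the cluster stats output can be fetched like this (assuming HTTP access to any node on port 9200):

```shell
# Human-readable, pretty-printed cluster-wide statistics.
curl -s 'http://localhost:9200/_cluster/stats?human&pretty'
```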