My cluster went mad after I customised http.max_content_length (256mb) so that I could index bigger files (the default is 100mb) through the HTTP API.
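For reference, the change I made in elasticsearch.yml was roughly this (just the one setting, nothing else touched):

http.max_content_length: 256mb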
It didn't work, so I reverted to the default value, but the cluster is not healthy anymore:
{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 4
}
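In case it helps, this is how I have been checking the index and shard state (assuming the node is reachable on localhost:9200):

curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/_cat/shards?v'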
Here is an extract of the log. I would like to investigate/debug further on the following error:
IndexNotFoundException[no segments* file found in
store(least_used[rate_limited(mmapfs(/data/data/elasticsearch/nodes/0/indices/docman/0/index),
type=MERGE, rate=20.0)]): files: []
However, I don't know how or where to start. I tried to remove segments.gen, since I found a similar issue on the internet, but I don't have any file named segments.gen...
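If it is useful, the shard directory from the error message can be listed directly to see which files are actually there (the path is taken verbatim from the exception above):

ls -la /data/data/elasticsearch/nodes/0/indices/docman/0/index/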