1M docs per Lucene segment

This SegmentSpy screenshot shows a production ES cluster running version 0.90.7. Is 1M documents per Lucene segment the limit? Is there any cause for concern in these segment stats? Any comments/suggestions are welcome.
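If it helps to cross-check what SegmentSpy is showing, the indices segments API returns the raw per-segment numbers. A minimal sketch, assuming an index named `myindex` (a placeholder) and ES on the default port:

```
# Dump per-segment stats (num_docs, size on disk, whether the segment
# is committed/searchable) for a hypothetical index named "myindex".
curl -s 'http://localhost:9200/myindex/_segments?pretty'
```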

There's a hard limit of about 2 billion documents per Lucene index: doc IDs are 32-bit Java ints, so the ceiling is roughly 2^31. Are you actually having problems with this?

(Also: upgrade. That's a really old version.)

2 billion? Okay, that's clear.

Well, it seems like the segments stop at around 1M docs and don't grow any further; I'm not sure what the impact/problem is (if any).
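Segments plateauing at a fixed size is usually the merge policy at work rather than a hard limit: the tiered merge policy won't merge segments beyond `index.merge.policy.max_merged_segment` (5gb by default). A minimal sketch of checking and, if desired, raising it through the update-settings API; `myindex` is a placeholder, and I'm assuming the setting is dynamically updatable on 0.90:

```
# Inspect the current (non-default) settings on the index.
curl -s 'http://localhost:9200/myindex/_settings?pretty'

# Hypothetically allow larger merged segments; whether that is a good
# idea depends on your hardware and workload.
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '
{
  "index.merge.policy.max_merged_segment": "10gb"
}'
```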

As for problems: as the data grows every day, we run into issues from time to time, and stability is a concern. The cluster sometimes goes into a red state and needs human intervention (restarting a node) before it returns to green; self-healing does not happen under this cluster load, and we have seen data loss. Less critical: the query search and query fetch phases sometimes take a long time to complete. Nodes have shown StackOverflowError, high CPU usage, and OutOfMemoryError, and sometimes code loops 10 levels deep or more; I filed an issue on GitHub about that last time.
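For the red-state incidents, the cluster health API is the cheapest thing to watch; a sketch, assuming the default host/port:

```
# Overall status flips among green/yellow/red; the response also shows
# how many shards are initializing or unassigned.
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-index breakdown narrows down which index holds the bad shards.
curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'
```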

I mentioned the upgrade on the mailing list last time; unfortunately, upgrading did not work for us.

Sounds like you need to add more nodes!


Other than a Java/Elasticsearch upgrade and/or adding more nodes, do you think there is any other way to improve stability? Configuration changes, etc.?
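For reference, these were the settings most often tuned for stability on 0.90-era clusters; a sketch of the usual suspects, not a prescription for this particular cluster, and the names should be double-checked against the 0.90 docs:

```yaml
# elasticsearch.yml (0.90-era setting names; verify for your version)

# Require a majority of master-eligible nodes before electing a master,
# to avoid split-brain (e.g. 2 when there are 3 master-eligible nodes).
discovery.zen.minimum_master_nodes: 2

# Lock the JVM heap into RAM so it never gets swapped out.
bootstrap.mlockall: true

# Cap the fielddata cache so sorting/faceting cannot eat the whole heap.
indices.fielddata.cache.size: 40%
```

Beyond that, the usual advice was to set `ES_HEAP_SIZE` to about half the machine's RAM (and no more than roughly 30gb), leaving the rest for the OS filesystem cache.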

Why does an upgrade not work?

We tried to upgrade from Java 6 to Java 7u72 on the production cluster, but then the ES client could no longer talk to the cluster without issues. IIRC it was a serialization exception, something like that. I asked on the mailing list and no solution was given. So no Java upgrade, and no ES upgrade either...
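For what it's worth, that failure mode is consistent with mixed JVM versions: if I remember correctly, the 0.90 transport Java-serializes some objects (exceptions in particular), so a Java 7 node talking to a Java 6 client, or vice versa, can throw serialization errors until everything is moved in lockstep. A sketch for checking what each node actually runs; I believe the 0.90 endpoint was `_cluster/nodes`, but treat that as an assumption:

```
# List each node's JVM version; client nodes and data nodes should match.
curl -s 'http://localhost:9200/_cluster/nodes?jvm=true&pretty'
```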