I found that the index in 5.0 has 8 .doc segment files of the same size. There are about 100 .cfs files, and the largest .cfs is 5GB. That seems strange to me; shouldn't they be merged into bigger segments?
(I asked this question in a separate post: "How to choose/change the maximum size of a segment?")
In 1.3, by contrast, there is one huge segment (I optimized the index down to a single segment about two months ago) plus about 20 smaller ones.
Maybe that's the problem?
I started a forcemerge yesterday, but it still seems to be running.
I checked the documentation but did not find settings controlling the merge process, e.g. when a merge should start, or how many segments should remain afterwards.
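For reference, this is the kind of thing I was looking for. The sketch below uses the tiered merge policy settings as I understand them in 5.x (the index name "news" is a placeholder, and I may have the defaults wrong); if `max_merged_segment` really defaults to 5gb, that would explain the 5GB cap on my .cfs files:

```shell
# Hedged sketch, not verified on my cluster.
# Tiered merge policy knobs (dynamic index settings in 5.x, as far as I know):
curl -XPUT 'localhost:9200/news/_settings' -H 'Content-Type: application/json' -d '{
  "index.merge.policy.max_merged_segment": "5gb",
  "index.merge.policy.segments_per_tier": 10
}'

# The forcemerge I ran, asking for a single segment (runs in the background):
curl -XPOST 'localhost:9200/news/_forcemerge?max_num_segments=1'
```

If anyone can confirm whether these are the right settings to raise the segment size ceiling, that would help.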
In fact, we are using ES to index news crawled by our program. About 300 thousand records are added to the cluster each day. Complex search requests mostly target the recent data (last 60 days). At the same time, we need to count the records containing certain terms: on average, 2000 count requests per minute (I built a cache for the results, but there are too many distinct terms).
So I built two indexes: one for the recent data and one for the whole data set (the latter handling the count requests).
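To make the workload concrete, the count requests look roughly like this (the index name "news-all", the field name, and the term are placeholders, not our real schema):

```shell
# Hedged example of one of the ~2000/min count requests:
curl -XGET 'localhost:9200/news-all/_count' -H 'Content-Type: application/json' -d '{
  "query": { "term": { "content": "some-term" } }
}'
```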
Could you give me some advice on how to improve this structure?
btw: I ran a separate test these days. Search speed is almost the same in 1.3 and 5.0. The test indexes use the SmartCN analyzer, and I removed a field that is not indexed but extremely large. All other settings are the same as on the production machines. So now the only difference seems to be the number of segments.