I have ~100 GB of data and would like to build a search application on top of it.
Each document has a category id field in addition to ordinary text fields.
Users usually filter the documents by specifying category ids.
In this case, is there any benefit (such as improved performance) in dividing the data into multiple indexes by category id (e.g., one index per category)? When users specify category ids, the application would then search only the indexes for those ids.
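To make the comparison concrete, here is a minimal in-memory sketch of the two layouts I have in mind (plain Python, not a real search engine; the document fields and sample values are made up for illustration):

```python
from collections import defaultdict

docs = [
    {"id": 1, "category_id": "books", "text": "search engines"},
    {"id": 2, "category_id": "music", "text": "jazz records"},
    {"id": 3, "category_id": "books", "text": "index structures"},
]

# Layout A: one big index; category_id is just a filter field on each document.
single_index = docs

def search_single(category_ids, term):
    return [d for d in single_index
            if d["category_id"] in category_ids and term in d["text"]]

# Layout B: one index per category; the application searches only
# the indexes matching the requested category ids.
per_category = defaultdict(list)
for d in docs:
    per_category[d["category_id"]].append(d)

def search_partitioned(category_ids, term):
    hits = []
    for cid in category_ids:
        hits.extend(d for d in per_category.get(cid, []) if term in d["text"])
    return hits
```

Both layouts should return the same results for the same query; the question is whether layout B buys anything in practice (smaller indexes to scan per query) or just adds operational complexity.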
As a second question, does search performance degrade when the total data size of a single index grows too large?
(If so, we would have to consider dividing the data into multiple indexes anyway.)