We are running Elasticsearch in Docker. After creating roughly 300 indices, where each index holds about 20-50 documents (around 12,000 documents in total), search requests either fail to return data or become very, very slow. How can we get rid of this issue?
Do we need to change the way we install Elasticsearch? Please suggest.
Why create so many indices for such a small number of documents? That sounds very inefficient. I would recommend using a single index instead.
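For illustration, here is a minimal sketch of the single-index pattern using the Python client. The index name, field names, and the 8.x elasticsearch-py client are all assumptions on my part, not a description of your setup:

```python
# Minimal sketch: one shared index where a keyword field plays the role
# that separate indices play today. All names here are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Explicit mapping so the discriminator is a filterable keyword field.
es.indices.create(
    index="documents",
    mappings={"properties": {"group": {"type": "keyword"}}},
)

# Instead of creating a new index per logical group, tag each document.
es.index(
    index="documents",
    document={"group": "request-42", "body": "the actual document content"},
    refresh="wait_for",  # only so the search below sees the document
)

# Searches scoped to one group filter on the field rather than the index.
resp = es.search(index="documents", query={"term": {"group": "request-42"}})
print(resp["hits"]["total"])
```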
This is how we currently implemented it: we create the indices dynamically per request. We are also running Elasticsearch inside a Kubernetes cluster. Do you think we should install Elasticsearch separately and run multiple instances? Would that solve our problem?
Having a very large number of small indices and shards will not scale, so if you expect this to grow I would recommend reconsidering how you are using Elasticsearch. If you can explain what you are trying to achieve with this arrangement, someone might be able to help you find a better solution.
We have something like Git's concept of feature branches. When a user creates a business request, we create separate indices for every document type for that user and index the documents there. For the same user we also have three stages of documents: local, feature, and master. We create indices at each stage for all document types, which is why we end up with so many indices.
We thought horizontal scaling would be faster. We also faced the problem of how to delete all documents of all types from a branch, so we planned to have separate indices so that simply deleting the index would take care of it.
Could you suggest how we can achieve this requirement?
I suspect your solution may work during development, but it is likely to fail once you start adding lots of users and the number of indices/shards grows. I would recommend trying to minimize the number of indices. Have you tried keeping everything in one index and measuring how long it takes to delete documents instead of dropping an index? Assuming the documents will live in the cluster for some time, this may be the better solution.
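For example, here is a hedged sketch of the delete-documents approach with the Python client. The user and branch fields echo what you described, but the index name, the mapping, and the 8.x client are assumptions:

```python
# Sketch: deleting one user's branch from a single shared index with
# delete_by_query, instead of dropping a per-branch index. The index
# name and field names are hypothetical; both fields are assumed to be
# mapped as keyword.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.delete_by_query(
    index="documents",
    query={
        "bool": {
            "filter": [
                {"term": {"user": "alice"}},
                {"term": {"branch": "feature"}},
            ]
        }
    },
)
```

Note that delete_by_query is slower than dropping an index, which is exactly why it is worth measuring on realistic data volumes before ruling it out.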
I would recommend testing any solution you come up with at scale, under conditions as realistic as you can manage, before deciding which approach to take. Basing the decision on assumptions and going down the wrong path could cause you a lot of problems down the line.
OK, thank you so much. We will try to have fewer indices. Could you please tell us the maximum number of indices we can have at a time with good performance, and also how many shards per index? Please also comment on whether we need multiple Elasticsearch nodes for better load balancing.
That depends on the use case and how you access and update the data, so you will need to test and benchmark to find the ideal number.
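If it helps, a rough way to compare the two deletion strategies is to time both against realistically sized data. This is only a sketch, and the index and field names are hypothetical:

```python
# Rough benchmark sketch comparing delete_by_query on a shared index with
# dropping a dedicated per-branch index. Assumes the 8.x Python client.
import time
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

start = time.perf_counter()
es.delete_by_query(
    index="documents",
    query={"term": {"branch": "feature"}},
    refresh=True,  # make deletions visible before stopping the clock
)
print(f"delete_by_query took {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
es.indices.delete(index="documents-feature")  # hypothetical per-branch index
print(f"index delete took {time.perf_counter() - start:.2f}s")
```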
OK, thanks a lot. From the whole discussion, I understand we should try to have fewer indices.
Yes, a lot fewer indices.