If we follow your approach, it means we should divide our 500,000 users into 500 indices with 1,000 users in each one. Is this OK, or do you consider that number of indices an anti-pattern as well?
@Christian_Dahlqvist we previously tested the many-indices approach and it slowed down server startup significantly. If I remember correctly (I can't say for 100% sure), it was Elasticsearch's advice to use one index with multiple shards instead of multiple indices, and to use routing to direct each request to a single shard.
So we use one index with 100 shards. Since shards cannot be added later, we chose 100 to cover our future needs, and it seems to be working fine except for this alias issue.
Right now the solution seems to be sending routing and filters with each request (if you have a better solution please let us know).
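For example, per request we end up doing something roughly like this with the Java client (the index name and field names here are just placeholders for our setup):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class PerUserSearch {
    // Search a single user's data by routing to that user's shard and
    // filtering on a user id field (names are illustrative only).
    public static SearchResponse searchForUser(Client client, String userId, String queryText) {
        return client.prepareSearch("users")                                  // assumed single index
                .setRouting(userId)                                           // hit only the shard holding this user
                .setQuery(QueryBuilders.boolQuery()
                        .must(QueryBuilders.matchQuery("body", queryText))    // hypothetical text field
                        .filter(QueryBuilders.termQuery("user_id", userId)))  // keep other users' docs out
                .execute()
                .actionGet();
    }
}
```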
But regarding the immutable map you use for keeping aliases, I see these solutions (a rough sketch of the second one follows this list):
- You could use something like ConcurrentHashMap instead of ImmutableMap. It is a little slower than an immutable HashMap, but I think that would be acceptable: adding to a ConcurrentHashMap is O(1), compared with O(n) for rebuilding the ImmutableMap.
- Use two maps: one ImmutableMap that you set up at startup, and one ConcurrentHashMap for newly incoming aliases. On each request you check the ImmutableMap first, and if the entry is missing there you check the ConcurrentHashMap that holds the new aliases. This gives good performance for both reads and writes, and after a while, when we restart the Elasticsearch servers, most aliases will reside in the ImmutableMap and there will be fewer hits on the ConcurrentHashMap.
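To make the second option concrete, here is a minimal sketch of what I mean (the names and the String-to-String mapping are just placeholders, not the real cluster metadata structures):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the "two maps" idea: an immutable map built once at
// startup plus a concurrent map for aliases added afterwards.
public class AliasRegistry {
    private final Map<String, String> startupAliases;                          // frozen at startup
    private final ConcurrentHashMap<String, String> newAliases = new ConcurrentHashMap<>();

    public AliasRegistry(Map<String, String> aliasesAtStartup) {
        this.startupAliases = Collections.unmodifiableMap(new HashMap<>(aliasesAtStartup));
    }

    // O(1): later additions only touch the concurrent map.
    public void addAlias(String alias, String index) {
        newAliases.put(alias, index);
    }

    // Reads check the immutable map first, then fall back to the concurrent one.
    public String resolve(String alias) {
        String index = startupAliases.get(alias);
        return index != null ? index : newAliases.get(alias);
    }
}
```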
Do you think that, using these solutions, you could fix the slowness of adding new aliases in the next releases?
@Reza @mahdi_malaki Having a large number of very small indices is also inefficient and considered an anti-pattern. I suggested using a reasonable number of indices, not necessarily many.
I would suggest you take a more structured approach to determining the ideal number of indices/shards required. A shard in Elasticsearch can generally handle a substantial amount of data; I often see shard sizes ranging from a few GB to a few tens of GB. The ideal size for your use case will depend on the data as well as the types of queries and latencies it needs to support.
Determining the ideal shard size is generally done by creating an index with a single shard. You then index a set of documents, e.g. 1 million, and run your queries against this index while measuring latencies. Then repeat the indexing and querying in order to see how latencies vary with the size of the shard.
Once you have determined how many records you can handle per shard, you can determine the number of users a single-shard index can support by dividing this by the number of records an average user is expected to have. Try to keep the shards as large as possible.
Whether this results in 5, 10, 50 or 100 indices is impossible for me to tell. If you perform this sizing e.g. for the first 100000 or 1000000 users, you can assign this range of user ids to this 'pool' of indices. Once you exceed this number of users, you can create another pool of indices and assign this to the next range of users. This will allow you to grow the number of indices over time rather than try to set it up for some future volume that may be hard to predict.
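To make the 'pool' idea concrete, here is a rough sketch; the pool size, indices per pool, and naming scheme are invented for the example and would come from your own benchmarking:

```java
// Illustrative only: maps a user id to an index, assuming benchmarking showed
// roughly how many users a pool of single-shard indices can hold.
public class IndexPools {
    private static final int USERS_PER_POOL = 100_000;   // user range covered by one pool
    private static final int INDICES_PER_POOL = 10;      // e.g. 10 single-shard indices per pool

    // e.g. userId 123456 -> "users-pool1-idx6"
    public static String indexForUser(long userId) {
        long pool = userId / USERS_PER_POOL;              // which pool of indices this user falls into
        long indexInPool = userId % INDICES_PER_POOL;     // spread users across the pool's indices
        return "users-pool" + pool + "-idx" + indexInPool;
    }

    public static void main(String[] args) {
        System.out.println(indexForUser(123_456)); // users-pool1-idx6
    }
}
```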
Although it may be interesting to look at different ways to make the handling of aliases more efficient, I don't see why changing something that works for the vast majority of users and is well tested would be a priority at this point, especially since there seems to be broad consensus that your usage is an anti-pattern. If you still feel this is important for you, however, feel free to create a pull request.
@Christian_Dahlqvist Thanks for the reply. Is there a document in the Elasticsearch documentation that describes best practices and anti-patterns?
Is it best practice to have a number of indices where each index holds only one shard? Is my understanding correct? (If so, I don't see what the benefit of shards in Elasticsearch is!)
You can have fewer, larger indices by having multiple shards per index. I simply turned your initial model with one index and 100 shards on its head to illustrate an alternative. I suggest you perform some benchmarking to find out what works for your use case, as there are very few guidelines that apply to all use cases.
As I said, we have 500,000 users and the average indexed data per user is about 2 GB. According to your advice, I'd keep the size of each index around 10 GB, which means each index holds just 5 users. In that case the number of indices would be 100,000 single-shard indices. And that is an anti-pattern!
Based on this information, please suggest a schema that would be a good pattern.
If you have 2GB of indexed data per user, which I did not see mentioned anywhere in the thread, you definitely want each index to have multiple shards in order to keep the number of indices at a reasonable level. As you previously stated that you thought a single index with 100 shards would be sufficient, I assumed each user had less data than that.
500000 users with 2GB of data each totals around 1PB if I calculate correctly. Add 1 replica to that (if not already considered) and you have 2PB of data. To handle that amount of data you most certainly need to perform some benchmarking and capacity planning to determine what size of indices and shards that will work for your use case.
Neither of these preserve immutability, which is an essential property of internal metadata and will not be relaxed.
@jasontedor Of course it doesn't preserve immutability. What is the reason for it being immutable here?
@mahdi_malaki can you elaborate on what your cluster design looks like? If you have 500k users and each user's data takes up 2 GB of index size on average, you most certainly have other challenges than creating index aliases. From my calculations you need several thousand servers, filling whole buildings, assuming a single server can hold around 1TB of data.
Assuming that, I would suggest creating hundreds of clusters, where the application groups users (maybe by the first letters of the name) into smaller, more manageable units. That also depends on your requirements, because it would only be feasible if each user is restricted to searching its own data; that is unclear to me. A search over all users' data would be very impractical - maybe with a tribe node, but I'm not sure. Searching hundreds of clusters at once is a pain, and it is also an anti-pattern.
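As a rough illustration of that kind of grouping (the cluster count and naming are invented for the example, not a recommendation):

```java
// Illustrative only: route a user to one of N clusters by hashing the first
// letters of the user name, so each cluster stays a manageable size.
public class ClusterRouter {
    private static final int CLUSTER_COUNT = 200;   // made-up number of clusters

    // e.g. "mahdi" -> "cluster-xyz.example.internal"
    public static String clusterForUser(String userName) {
        String prefix = userName.length() >= 2 ? userName.substring(0, 2) : userName;
        int bucket = Math.floorMod(prefix.toLowerCase().hashCode(), CLUSTER_COUNT);
        return String.format("cluster-%03d.example.internal", bucket);
    }
}
```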
I previously explained that this metadata is immutable; you proposed ideas that do not maintain that immutability and asked if they could be included in the next release. I explained that they can not because they do not preserve immutability. I'm genuinely unsure what you mean by "of course".
There are many advantages to being immutable. The most salient fact here is that the internal cluster metadata is shared across threads, can only be mutated on a single special thread, and those mutations can not be made visible to other threads until cluster state publication has successfully occurred.
The immutability will not be relaxed.
OK, then if I understand correctly, the requirements are as follows:
- Keep one map that is immutable to all threads except one.
- Add new aliases to the map in O(1) rather than O(n).

Do you think it is logically impossible to satisfy both of these requirements with one structure?
These are not the requirements. The data structure itself must be immutable. As I explained previously, mutations to it can not be made visible to other threads until cluster state publication has successfully occurred. This means that if an alias request comes in, a cluster state update request will be sent to the master, the master will make a new copy of the internal cluster metadata with the alias added, the master will then attempt to publish a new cluster state that includes the new alias, and only if that publication is successful will the master update its copy of the internal cluster metadata with the new cluster metadata. At all moments before that, the previous cluster state is what must continue to be used.
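A very rough sketch of that flow, purely to illustrate the copy-on-write pattern (this is not the actual Elasticsearch code; the types and the publish step are stand-ins):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Stand-in showing why the alias map is rebuilt (copied) rather than mutated,
// and why the swap happens only after a successful publication.
public class ClusterStateHolder {
    // The current, fully immutable view shared with all threads.
    private volatile Map<String, String> aliases = Collections.emptyMap();

    // Runs only on the single cluster-state update thread on the master.
    public void addAlias(String alias, String index) {
        // Copy the old map and add the new entry: this is the O(n) step.
        Map<String, String> copy = new HashMap<>(aliases);
        copy.put(alias, index);
        Map<String, String> candidate = Collections.unmodifiableMap(copy);

        if (publish(candidate)) {
            // Only after successful publication does the new state become visible.
            aliases = candidate;
        }
        // On failure, the previous state continues to be used unchanged.
    }

    private boolean publish(Map<String, String> candidateState) {
        // Stand-in for cluster state publication to the other nodes.
        return true;
    }
}
```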
Let me at least clarify something: do you claim that it is technically impossible to add an alias to an Elasticsearch node in O(1) rather than O(n)?
I've only explained why it is linear the way that it currently is.
Thanks, I understand that, but I hope that you can find a solution that works faster; aliases can be added on demand, and they are not as static as, for example, schemas are.
Right now it seems that we should do something else, since our client gets a timeout after 30 seconds when adding an alias on 2 nodes.
I should note that there are 125,000 aliases and there were some pending tasks for adding aliases.
If we do anything (and I'm not saying that we won't, I have some ideas), it will not be aimed at making creating aliases faster, it will be aimed at improving the current immutable map implementation in general and benefiting aliases will just be a side effect. This is quite large in scope (a lot of the internal cluster state is represented by immutable maps) and I'd rather attack the general problem.
Oh good, let us know if you plan it for an upcoming release.
But I guess we should come up with an alternative plan for now.
The work that I have in mind will be done on the 5.x series and (if accepted) I have no intention of backporting to the 2.x series; broad changes should not be backported to stable series.
Yes; I maintain that using so many aliases is an anti-pattern, and it will continue to have limitations.
Curious: are you thinking of switching to persistent data structures like https://github.com/GlenKPeterson/UncleJim?
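In other words, something where an "updated copy" shares structure with the old map instead of being rebuilt from scratch. A minimal sketch of the idea, using the pcollections library purely as a stand-in (not necessarily UncleJim, and not how Elasticsearch would actually wire it in):

```java
import org.pcollections.HashTreePMap;
import org.pcollections.PMap;

// Stand-in example of a persistent map: "adding" an entry returns a new
// immutable map that shares most of its structure with the old one, so the
// old state stays valid while the update avoids a full copy.
public class PersistentAliasExample {
    public static void main(String[] args) {
        PMap<String, String> before = HashTreePMap.<String, String>empty()
                .plus("alias-1", "index-1")
                .plus("alias-2", "index-1");

        // Structural sharing: 'after' is a new immutable map, 'before' is untouched.
        PMap<String, String> after = before.plus("alias-3", "index-2");

        System.out.println(before.containsKey("alias-3")); // false
        System.out.println(after.get("alias-3"));          // index-2
    }
}
```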