Is this a good way to combine 'Index per Time Frame' and 'Index per User'?

Hello,

I want to provide different users with different levels of data retention (e.g. user A only pays me to keep 1 month of data, but user B wants to keep 12 months).

In order to make this manageable I am planning to have an index per month per user. I would have index names like 'user_a_2016_jan', 'user_b_2016_jan', 'user_a_2016_feb', 'user_b_2016_feb', and so on.

This way I can just delete old indexes on a per-user basis. I plan to implement this as follows:

  1. Define an index template for the cluster with my mapping.

  2. Write indexing code that indexes to the appropriate index name and relies on auto-creation to create the index. This way new customers will have new indexes created for them when they start sending data, without any need for prior setup, and indexes will automatically roll over at the start of a new month.

  3. Perform any queries for a user by specifying a wildcard index. For example, for user A I would use index: 'user_a_*', so queries for user A run across all of their existing indexes. I would normally use an alias for something like this, but there doesn't seem to be a way to get a 'user_a' alias onto auto-created indexes. (A rough sketch of steps 1-3 is below.)
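For concreteness, here's a rough sketch of what I have in mind using the official Python client. Everything here (the 'events' type, the field names, the template settings and shard counts) is just a placeholder, and it assumes a 2.x-era cluster:

```python
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()  # local cluster; adjust hosts as needed

# 1. Cluster-wide template so auto-created 'user_*' indexes pick up my mapping.
es.indices.put_template(name='per_user_monthly', body={
    'template': 'user_*',  # matches user_a_2016_jan, user_b_2016_feb, ...
    'settings': {'number_of_shards': 1, 'number_of_replicas': 1},
    'mappings': {
        'events': {
            'properties': {
                'type': {'type': 'string', 'index': 'not_analyzed'},
                'timestamp': {'type': 'date'},
            }
        }
    },
})

def index_name(user_id, when):
    # e.g. ('user_a', 2016-01-15) -> 'user_a_2016_jan'
    return '%s_%s' % (user_id, when.strftime('%Y_%b').lower())

# 2. Index into the per-user, per-month index; auto-creation builds it on first use.
es.index(index=index_name('user_a', datetime(2016, 1, 15)),
         doc_type='events',
         body={'type': 'foo', 'timestamp': '2016-01-15T12:00:00'})

# 3. Query one user across all of their monthly indexes with a wildcard.
es.search(index='user_a_*', body={
    'query': {'match_all': {}},
    'aggs': {'by_type': {'terms': {'field': 'type'}}},
})

# Retention: drop a whole month for a user once it ages out.
es.indices.delete(index='user_a_2016_jan', ignore=[404])
```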

Given that I am building an analytics service and hence will make HEAVY use of aggregations, I have some questions.

  1. Is it advisable to use wildcard indexes like this rather than aliases? Am I going to be missing out on some important capability by not having aliases?

  2. This approach means that the number of shards I have is dictated by the number of users rather than by explicit scaling decisions. If I have 200 users, then after a year I will have 2400 indexes. If I give each index one primary and one replica shard, that means 4800 shards. Is this going to be a huge problem? What if I want to run aggregations against _all too?

  3. I have a number of fields with a set of possible values of known, limited cardinality. For example, 'type' might only ever be 'foo', 'bar' or 'xxx'. My understanding is that this limited cardinality means these documents can be stored in the index quite efficiently. Am I going to lose that by having a lot of small indexes? That is, is it going to require 200 times the space to index the documents across 200 indexes as opposed to one big one?

  4. Any other gotchas I should be aware of?

Thanks
Perryn

  1. Yes, this should work OK.

  2. No, this will suck. What about having a per-time-frame index plus routing by user-id? You will have to delete old documents manually (a rough sketch of what that involves is below).

  3. It won't be 200 times, just more. It will depend on the number of segments.

  4. Yes, number 2 (the shard count). Search the forums for what a good number of shards is.
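Deleting one user's expired documents out of a shared monthly index would look something like this sketch with the Python client helpers (the index and field names are made up). It works, but it is far more work for the cluster than just dropping a whole index:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan, bulk

es = Elasticsearch()

# Find everything belonging to the expiring user in the shared monthly index
# (assumes user_id is a not_analyzed / keyword field)...
hits = scan(es, index='events_2016_jan',
            query={'query': {'term': {'user_id': 'user_a'}}},
            _source=False)

# ...and bulk-delete it, document by document.
actions = ({'_op_type': 'delete',
            '_index': hit['_index'],
            '_type': hit['_type'],
            '_id': hit['_id']} for hit in hits)
bulk(es, actions)
```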

Thanks very much for your reply :slight_smile:

Hmm, yeah, I was afraid of that. However, doesn't routing by user-id mean that all the data for that user will be on one shard? Doesn't that mean that ES will be unable to break up heavy aggregations into something that can be run on multiple nodes in parallel?

Also I was trying to avoid deleting documents as there would be a LOT of documents to delete each month. My understanding is that doing this is so horribly inefficient (compared to just deleting a whole index) that it should be strongly avoided. Should I actually be considering it as an option?

You could create an index for each time frame for each plan on your site, put all users of a plan in that index, and then easily drop the old indexes for each plan.

And then when you search, you can search them all, or have some logic so you know which indexes the user has data in.

To split an aggregation across multiple shards, you can use a compound routing value: for example, not just 'user-id' but 'user-id.1-x', where 1-x is computed from some other value (or just chosen at random). This way the data for each user can be spread over up to x shards (though they may all end up on one, depending on how the values hash).

Makes sense?

Aggregation is single-threaded on each shard. If you run an aggregation over multiple time frames it will be parallel. If you want a parallel aggregation on one time frame, you have to split the user's data across multiple shards (by using different routing values, or no routing value at all).
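A rough sketch of that compound routing with the Python client (the bucket count, index and field names are just examples). At search time you pass all of a user's routing values so only those shards are queried, and the term filter is still needed because other users' documents can land on the same shards:

```python
import random
from elasticsearch import Elasticsearch

es = Elasticsearch()

ROUTING_BUCKETS = 4  # the "x" above: how many routing buckets per user

def routing_value(user_id):
    # Compound routing value 'user-id.1-x'; a random bucket here, but it
    # could just as well be derived from some other field on the document.
    return '%s.%d' % (user_id, random.randint(1, ROUTING_BUCKETS))

# Everyone shares the monthly index; the routing value decides the shard.
es.index(index='events_2016_jan', doc_type='events',
         routing=routing_value('user_a'),
         body={'user_id': 'user_a', 'type': 'foo'})

# Search with every routing value the user could have been sent to, so the
# aggregation runs on up to ROUTING_BUCKETS shards in parallel.
all_routes = ','.join('user_a.%d' % i for i in range(1, ROUTING_BUCKETS + 1))
es.search(index='events_2016_jan', routing=all_routes, body={
    'query': {'term': {'user_id': 'user_a'}},  # assumes user_id is not_analyzed
    'aggs': {'by_type': {'terms': {'field': 'type'}}},
})
```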

Yep, that all makes sense.

Thanks ddorian43!

@ddorian43's advice is quite good, though I'd stay away from routing values like user-id.1-x. If a user gets large enough that you are concerned about hot spots, you should migrate it to its own index. I probably wouldn't bother removing its documents from the original index, because that isn't particularly efficient; instead I'd just let that month roll off the end.

Your strategy of having an index per user per month is actually just fine if the number of users stays around 200. 2.x is fairly good at handling decently large cluster state, if not amazing. You can use indexes with only a single shard for most users.

The thing to measure is the age of the oldest task in the pending cluster tasks API. 30 seconds is when most requests start timing out, so you want to stay well under that. I'm not a huge fan of index auto-creation myself, so I'd have your system create the indexes before they are needed. I just don't like surprises. And if you create them at a time when you are around, hours before they are needed, you can watch how long it takes for them to be created and debug any weirdness.
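If you go the pre-creation route, something along these lines with the Python client (the user list and index names are placeholders):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Watch the pending cluster tasks; if the oldest has been queued anywhere
# near 30 seconds, cluster state updates are becoming a problem.
tasks = es.cluster.pending_tasks()['tasks']
oldest_ms = max((t['time_in_queue_millis'] for t in tasks), default=0)
print('oldest pending cluster task: %dms' % oldest_ms)

# Create next month's indexes ahead of time instead of relying on
# auto-creation, so any slowness shows up while someone is watching.
for user in ['user_a', 'user_b']:  # would come from your user database
    es.indices.create(index='%s_2016_mar' % user, ignore=400)  # 400 = already exists
```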

Anyway, the next strategy is the routing that @ddorian43 mentioned. I'd still use an index per user for your big users but then I'd use a single big-ish index for all the small users, giving each one routing. The trouble with this strategy is that you have to deal with users that go from being small to large. And your extra wrinkle is that you have to deal with data aging out. I think you'll probably have to experiment with it but ultimately you might end up with a script to migrate users from the amalgamated index into their own index.

A tip: sometimes it'll be more efficient to build a new, smaller index from a larger one rather than delete most of the larger one. I don't know exactly where that crossover point is, but if I were only keeping 5% of a big, big index I'd expect it to be faster to reindex that 5% into a new index rather than deleting the rest and running _force_merge to reclaim space.
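For example, with the Python client's reindex helper (the index names and the date cutoff are made up):

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex

es = Elasticsearch()

# Copy only the documents worth keeping (the "5%") into a fresh index,
# then drop the old one, instead of deleting 95% and force-merging.
reindex(es,
        source_index='user_a_2016_jan',
        target_index='user_a_2016_jan_v2',
        query={'query': {'range': {'timestamp': {'gte': '2016-01-25'}}}})

es.indices.delete(index='user_a_2016_jan')
```

Because the new name still matches the user_* template it gets the same mapping, and the same trick (filtering on user id instead of a date) works for pulling a single user out of the amalgamated index.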

Another thing: if your retention period gets very long you'll want to think about upgrades. Elasticsearch can read indexes created in the last major version. So 2.x can read 1.x and 5.x can read 2.x (yes, we skipped 3 and 4) but 5.x can't read 1.x indexes. If your retention period is around 2 years you'll start to see this come up. You can reindex the old indexes or just not upgrade.

Don't be afraid to run multiple clusters. You (probably) don't want to run a cluster per user, but if you have a super duper huge user you might want to just give them their own cluster.