Max number of segments in Elasticsearch

Is there a parameter in Elasticsearch to set the max number of underlying segments in the whole cluster? Or are there any parameters about segments at all?

I know there's a max_num_segments option when using the optimize API, but I'm looking for parameters that control segments in general.
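For reference, this is the optimize call I mean (1.x syntax; the index name here is just a placeholder, and in later versions this endpoint was renamed to `_forcemerge`):

```shell
# Merge each shard of the index down to at most 5 segments.
# Optimize API, Elasticsearch 1.x; "my_index" is a hypothetical name.
curl -XPOST 'http://localhost:9200/my_index/_optimize?max_num_segments=5'
```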

For example, is there a parameter to control how many segments you can have in each index? Or how often to merge the segments (instead of waiting for a segment to become too big)?

Appreciate any help!


Sort of. Typically you can get close to controlling how segments are laid out with the merge policy settings, which are deprecated; in fact, almost all of these settings are removed in 2.x in favor of automatic throttling.
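For illustration, these are the main tiered merge policy knobs in 1.x (the values shown are the documented defaults; the index name is a placeholder, and most of these settings no longer exist in 2.x):

```shell
# Tiered merge policy settings, Elasticsearch 1.x.
# "my_index" is hypothetical; values shown are the defaults.
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index.merge.policy.segments_per_tier": 10,
  "index.merge.policy.max_merge_at_once": 10,
  "index.merge.policy.max_merged_segment": "5gb",
  "index.merge.policy.floor_segment": "2mb"
}'
```

Lowering `segments_per_tier` gives you fewer, larger segments at the cost of more merge I/O; `max_merged_segment` caps how big a merged segment is allowed to grow.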

Generally speaking, best practice is not to focus on narrowing down to just one or a few segments. The tiered segments have advantages with parallelism and "graduating" long-lived documents to bigger/more stable segments (much like a garbage collector).

Is there a higher level problem you're trying to solve? Like indexing/search is slow, etc?

Yeah, basically I have a giant index (300M rows, about 500G across all primary shards), and I'm constantly updating (80%) and inserting (20%) into it.

The concern is that when the index grows too large, we're going to have too many segments, or segments that are too big, which leads to a performance decrease. So I'm trying to figure out whether this will be an issue and how I can resolve it.

Thanks for the quick response!

You might be better off splitting the index?
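One common way to split without changing the search side is to put several smaller indices behind a single alias; a sketch, with hypothetical index names:

```shell
# Point one alias at several smaller indices: searches against the
# alias hit all of them, while writes go directly to each sub-index.
# "myindex-001"/"myindex-002"/"myindex" are hypothetical names.
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions": [
    { "add": { "index": "myindex-001", "alias": "myindex" } },
    { "add": { "index": "myindex-002", "alias": "myindex" } }
  ]
}'
```

Each sub-index then has fewer, smaller segments, and merges stay cheaper as the data grows.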
