What effect does index_buffer_size have on performance?

index_buffer_size configures the amount of RAM available for indexing. However, what is that memory actually used for? Segments are stored in memory before being flushed (but that's controlled by flush_threshold_size).

We have noticed that increasing the index_buffer_size doesn't do much for us.

It (helps to) determine how much data / how many documents will be buffered up before a segment is created (the segment being basically the inverted index, doc values, term dictionary, etc.). Basically, it's the buffer that holds documents which are not yet searchable.

There are several criteria that decide when this buffer should be converted into a segment: refresh interval, buffer utilization, staleness, number of documents, etc. Increasing the size won't necessarily improve performance, as the buffer may be emptied before it fills because of one of the other heuristics (e.g. a second passes and the refresh interval empties it).

It's one of those things where you need "enough", but anything more doesn't really help much.

The flushing/committing of those segments to disk is a separate process, controlled by flush_threshold_size.
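To make that concrete, here's a rough sketch of where each of these knobs lives, assuming the Python elasticsearch client and a placeholder index name (the values shown are placeholders, not recommendations):

```python
# A minimal sketch, assuming the Python elasticsearch client and a placeholder
# index name ("my-index") -- the values are placeholders, not recommendations.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# indices.memory.index_buffer_size is a *node-level* setting and lives in
# elasticsearch.yml, e.g.:
#
#   indices.memory.index_buffer_size: 10%
#
# It caps the in-memory buffer holding documents that are not yet searchable.

# refresh_interval and the translog flush threshold are dynamic *index* settings:
es.indices.put_settings(
    index="my-index",
    body={
        "index.refresh_interval": "30s",                # how often the buffer is cut into a searchable segment
        "index.translog.flush_threshold_size": "512mb", # when the translog triggers a flush/commit to disk
    },
)
```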

So what params can I tweak to increase the segment size?

Increasing the batch size for some reason results in more segments.

You really shouldn't mess with segment sizes. Those are very low-level details managed by Lucene, and all the options that allow you to tweak them are being removed in 5.0. I've never seen a single user successfully tweak them better than the defaults... they always end up catastrophically destroying their cluster (either now, or at some point in the future when they've forgotten about the tweaking).

Changing the batch size will slightly alter the segment count because you may hit different heuristics that trigger segment generation. You may fill up the buffer faster, or perhaps the longer round-trip time causes the buffer to empty due to the refresh cycle, or a set of larger documents triggers it, etc.

In general, it's not really something you need to worry about.

I suppose the real question is: why do you want to tweak these settings?

I'm just shooting in the dark here, but maybe refresh_interval is a thing to tweak.
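For example (just a sketch, assuming the Python elasticsearch client and a placeholder index name), a common pattern is to relax refresh during a bulk load and restore it afterwards:

```python
# A sketch only: relax refresh during a bulk load, then restore it afterwards.
# Assumes the Python elasticsearch client and a placeholder index name.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Disable the periodic refresh while loading; segments are then cut when the
# indexing buffer fills rather than on every refresh tick.
es.indices.put_settings(index="my-index", body={"index.refresh_interval": "-1"})

try:
    run_bulk_load(es)  # hypothetical helper standing in for your bulk-indexing code
finally:
    # Restore the default refresh interval and make everything searchable again.
    es.indices.put_settings(index="my-index", body={"index.refresh_interval": "1s"})
    es.indices.refresh(index="my-index")
```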

Yea, that's a fair point.

Trying to improve indexing performance. Got a 50% increase from playing around with batch size, index_buffer_size, and flush_threshold_size. Want to see if we can do better - but I probably went too far.

Since I got your attention here, 2 related questions:

  1. I would expect ES to be disk IO bound when doing batch indexing, but it seems to be mostly CPU bound for us (we are at 3.7/4 CPU load). We aren't even using SSDs, just AWS EBS. Any idea why?
  2. Any advice on how to reduce the impact of indexing on query performance? The only thing I could think of was to reduce the bulk threadpool size.

Indexing can be CPU-heavy due to merging, which is effectively a streaming mergesort plus other work. Most folks end up IO bound, but the ratio depends on the documents themselves (how many fields, how complex the analysis, etc.), so the CPU share can shift if you have complex docs.

Note, however, that your 3.7/4 load metric doesn't necessarily mean you're CPU-bound. Since it's still under 4, you're technically not running at max capacity (e.g. there are on average 3.7 processes wanting to use 4 cores at any given moment). An overloaded box would show something like 6/4, meaning six processes are contending for four cores at any given moment.

Also, from the docs:

    /proc/loadavg
        The first three fields in this file are load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1, 5, and 15 minutes. They are the same as the load average numbers given by uptime(1) and other programs.

You'll note that processes waiting on Disk IO are included, so you can see high load averages even with little CPU burn, because all the cores are waiting on IO to return.
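As a quick illustration (a Python sketch, nothing ES-specific), you can compare the 1-minute load average against the core count yourself; just keep in mind the number includes tasks stuck in uninterruptible disk wait, not only runnable ones:

```python
# Quick illustration (Linux/Unix): load average counts runnable tasks *and* tasks
# in uninterruptible disk wait, so "high load" is not automatically "CPU-bound".
import os

load_1m, load_5m, load_15m = os.getloadavg()
cores = os.cpu_count()

print(f"1m load {load_1m:.2f} on {cores} cores -> {load_1m / cores:.0%} of capacity")
if load_1m > cores:
    print("More tasks want to run (or are stuck waiting on IO) than there are cores.")
```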

A better metric would be to monitor CPU utilization as well as disk utilization, such as watching IOPS and throughput. I'd hazard a guess that your disks are probably the bottleneck, and you'll see them chugging away at close to their max sequential throughput or IOPS.

Related to your second question, ES throttles indexing at the Lucene level. If it finds that index merging is not keeping up with the indexing rate, it will automatically throttle back, which causes backpressure and an increase in queued bulk requests. So on the ES side, it should automatically prevent indexing from swamping queries.

On your side, you can try to provide a slower feed of bulks to ES, or attempt to smooth out bursts (e.g. instead of sending all the bulks at once, queue them up in your app and feed them to ES at a constant rate).
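Something like this sketch would do it (assuming the Python elasticsearch client; the batch size and pacing interval are made-up numbers to tune for your own setup):

```python
# A sketch of feeding bulks at a roughly constant rate instead of in bursts.
# Assumes the Python elasticsearch client; batch size and pacing are made-up
# numbers to tune for your own cluster.
import time
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def paced_bulk(docs, index, batch_size=500, interval=1.0):
    """Send one bulk request roughly every `interval` seconds rather than all at once."""
    batch = []
    for doc in docs:
        batch.append({"_index": index, "_source": doc})  # on ES < 7 you'd also set "_type"
        if len(batch) >= batch_size:
            helpers.bulk(es, batch)
            batch = []
            time.sleep(interval)  # crude smoothing; a token bucket would be nicer
    if batch:
        helpers.bulk(es, batch)
```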

Hi!

The index is very simple. We are essentially using ES as a key-value store for now; only one field is indexed. Each doc is around ~1k when indexed (index size / #docs).

CPU util is 50%-70%. IO await is ~5ms, which is not too bad, and disk IO is around 100 IOPS... yet the bulk.queue size starts growing and requests get rejected. So I'm not quite sure what the bottleneck is.
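For reference, a sketch of how those queue and rejection counts can be watched (assuming the Python elasticsearch client and the nodes stats API; on newer ES versions the bulk pool is named "write"):

```python
# A sketch of watching bulk queue depth and rejections via the nodes stats API,
# assuming the Python elasticsearch client. On newer ES versions the pool is
# named "write" instead of "bulk".
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

stats = es.nodes.stats(metric="thread_pool")
for node_id, node in stats["nodes"].items():
    pool = node["thread_pool"]["bulk"]
    print(node.get("name", node_id), "queue:", pool["queue"], "rejected:", pool["rejected"])
```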

We never write to the same index we read from. The main effect of indexing is a very large increase in 95th percentile read latency (10ms to 120ms). But I suppose that makes sense - the reads that have to go to disk are slower because of disk IO contention.

We'll try setting index.merge.scheduler.max_thread_count: 1. We're also working on a proper retry policy in the client; that way we can decrease the bulk thread count to limit indexing load without dropping requests.
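A rough sketch of both pieces (assuming the Python elasticsearch client, a placeholder index name, and made-up backoff numbers; not our exact code):

```python
# A sketch of both pieces, assuming the Python elasticsearch client, a placeholder
# index name, and made-up backoff numbers.
import time
from elasticsearch import Elasticsearch, helpers
from elasticsearch.exceptions import TransportError

es = Elasticsearch("http://localhost:9200")

# Merge scheduler thread count, applied as a dynamic index setting.
es.indices.put_settings(
    index="my-index",
    body={"index.merge.scheduler.max_thread_count": 1},
)

def bulk_with_retry(actions, retries=5, backoff=2.0):
    """Retry a rejected bulk batch with exponential backoff instead of dropping it."""
    for attempt in range(retries):
        try:
            helpers.bulk(es, actions)
            return
        except (TransportError, helpers.BulkIndexError):
            # Rejected by a full bulk queue (or another transient failure): back off and retry.
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("bulk request failed after retries")
```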