ES 5.0 GET API Breaking Changes

Hello all. I'm wondering if someone can give me some further information about the following Breaking Change:

As of 5.0.0 the get API will issue a refresh if the requested document has been changed since the last refresh but the change hasn’t been refreshed yet. This will also make all other changes visible immediately. This can have an impact on performance if the same document is updated very frequently using a read modify update pattern since it might create many small segments. This behavior can be disabled by passing realtime=false to the get request.
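
For reference, the opt-out mentioned above is just a query parameter on the get request. A minimal sketch, assuming a local cluster and placeholder index/type/id names:

```python
import requests

ES = "http://localhost:9200"  # hypothetical local node

# Default (realtime) get: as of 5.0 this may trigger a refresh if the document
# changed since the last refresh.
realtime = requests.get(ES + "/my_index/my_type/1")

# realtime=false skips that refresh and returns whatever the last refresh saw.
non_realtime = requests.get(ES + "/my_index/my_type/1",
                            params={"realtime": "false"})

print(realtime.json().get("_source"))
print(non_realtime.json().get("_source"))
```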

My questions are:

  • Wasn't the real-time get API a given? Meaning a cheap operation? Why did it change? Reliability?
  • How different is this approach from previous ES versions when getting documents?
  • What does refreshing an index have to do with segment creation?
  • How frequently would a document need to be updated to encounter this issue?
  • Does it happen on a per-document basis, or will frequently updating documents from the same shard result in this undesired behavior?
  • Can differences in refresh timing between shards potentially return stale results, even when using write_consistency: all?

My concern is that my use case makes frequent updates to nearly all documents in my cluster. I never encountered an issue with get requests up until 2.4.1, so I want to make sure I'm covered before moving forward with 5.0.

Those questions are just what came off the top of my head, so if any of you have other questions or insights I'm happy to hear about them.

Thank you!

Pretty much. It was always a weird hack and it was getting in the way of performance improvements we wanted to make to indexing. It also allowed us to drop a bit of information from memory that we used to keep on every write. The hack that got it working made it produce weird or no results for some fields, and we always hated that.

So, yes, realtime GET working is a given but being fast is a thing we were willing to sacrifice.

It uses the normal code path we use to read the document from the Lucene index rather than a custom code path to read data out of the translog. It is different, but it is code that has been used trillions of times.

Refreshing is the process that makes all the staged documents available for search. Making something available for search in Lucene means creating a segment. That segment might be created in memory or on disk. I've not done enough reading to know when it makes which choice, but I'm fairly sure the in-memory option is only used for new, small segments.
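
To make that concrete, here is a small sketch (same kind of placeholder names, local cluster assumed) showing that a freshly indexed document only becomes searchable once a refresh has turned the staged changes into a segment:

```python
import requests

ES = "http://localhost:9200"  # hypothetical local node

# Index a document; by default this does not wait for a refresh.
requests.put(ES + "/counters/page/1", json={"hits": 1})

# A search right away only sees existing segments, so it may miss the document.
before = requests.get(ES + "/counters/_search", params={"q": "hits:1"}).json()

# An explicit refresh writes the staged documents into a new (small) segment.
requests.post(ES + "/counters/_refresh")

after = requests.get(ES + "/counters/_search", params={"q": "hits:1"}).json()
print(before["hits"]["total"], after["hits"]["total"])
```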

Refreshes by default are started every 1 second on indexes that have seen changes. If you wait 2 seconds between updates to the same document you shouldn't hit it.
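
That 1-second default is the index.refresh_interval setting. A sketch of inspecting and changing it on a hypothetical index, in case you want to reason about your own update spacing:

```python
import requests

ES = "http://localhost:9200"  # hypothetical local node

# Read the current setting; if it isn't listed, the index uses the 1s default.
print(requests.get(ES + "/counters/_settings").json())

# The interval can be changed per index (this trades search visibility for
# fewer scheduled refreshes; it does not change the realtime-get behavior).
requests.put(ES + "/counters/_settings",
             json={"index": {"refresh_interval": "5s"}})
```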

Per-document basis. We only need to GET the document for the _update operation, so if you avoid that API entirely you won't hit it. Or you can wait 1.5 seconds before you update the same document. Or just live with a few more small segments that will be merged into larger segments fairly quickly anyway.
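
A sketch of that difference, again with placeholder names: the _update API needs the internal GET of the current document, while re-indexing the full document from your own copy does not.

```python
import requests

ES = "http://localhost:9200"  # hypothetical local node

# Partial update: Elasticsearch GETs the current document first, so doing this
# to the same id more often than once per refresh interval forces extra
# refreshes (and extra small segments).
requests.post(ES + "/counters/page/1/_update", json={"doc": {"hits": 2}})

# Indexing the full document again does not need that internal GET.
requests.put(ES + "/counters/page/1", json={"hits": 2})
```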

Shard copies don't move in lockstep with one another. If one refreshes to include a document it might be some time before another one does.

If you sweep through all the documents and then start over from the beginning you are very unlikely to hit this issue. If you are using Elasticsearch to store page hit counters or something then you probably will see this.

Great reply, Nick! Got all my doubts sorted out and more. So I should expect performance on par with a standard Lucene search.

I know my share of Lucene internals, but I didn't know that refreshing had anything to do with segment creation. I thought it only opened a new IndexReader, which just enumerated the segments already available and managed by the IndexWriter.

So to ensure that GET brings me the latest and greatest I should hint my request to go fetch only from primaries, right?

Thanks a lot again!

I'm not sure it makes a big difference, because if you hit a replica it'll refresh just like a primary. Now, we do write to the primary first and then to the replica, but we do that before responding to the request. So, yes, if you want the latest we have you'll need the preference to be _primary. But the difference isn't going to be seconds, more like milliseconds.
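
For completeness, a sketch of such a request with the preference parameter (same placeholder names as above):

```python
import requests

ES = "http://localhost:9200"  # hypothetical local node

# Route the get to the primary shard copy so it reflects the write the primary
# just acknowledged, rather than a replica that may lag by a refresh cycle
# (typically milliseconds).
doc = requests.get(ES + "/counters/page/1",
                   params={"preference": "_primary"}).json()
print(doc.get("_source"))
```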