hi
Recently I have been suffering from a latency problem while using one of my indices. Here is a summary of the index:
ES: 5.5.2
docs: ~1 billion
shards: 3
indexing rate: < 100 TPS
structure: simple, long/keyword/date
query: simple, 99.99% of requests just do a terms query on _id plus a script filter; here is a template example:
==========
{
  "size": {{size}}{{^size}}300{{/size}},
  "query": {
    "bool": {
      "filter": [
        { "terms": { "_id": {{#toJson}}array.list{{/toJson}} } },
        {
          "script": {
            "script": {
              "lang": "painless",
              "params": { "now": {{now}}{{^now}}null{{/now}} },
              "inline": "def now = params.now; if (now == null) { def d = new Date(); now = d.getTime(); } now += 28800000; return doc['time_from'].value > now && doc['status'].value == 1;"
            }
          }
        }
      ]
    }
  }
}
==========
I already noticed there is an existing topic on this (Latency spike after big merge), but my case is a little different: I already tried switching to a range query to avoid that problem, yet the result still disappoints me, the latency spikes still happen.
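For reference, the range-based variant looks roughly like this (just a sketch based on the fields used in the script above; here {{now}} is assumed to already include the +28800000 ms offset, computed on the client side, and the status check becomes a plain term filter):
==========
{
  "size": {{size}}{{^size}}300{{/size}},
  "query": {
    "bool": {
      "filter": [
        { "terms": { "_id": {{#toJson}}array.list{{/toJson}} } },
        { "term":  { "status": 1 } },
        { "range": { "time_from": { "gt": {{now}} } } }
      ]
    }
  }
}
==========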
The result is:
99% of the queries return within 5 ms, but some slow queries still take around 200~1000 ms. The only suspect I can think of is still the Lucene merges.
Do you think so?
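In case it helps, this is how I plan to check whether the spikes line up with merge activity (my_index is a placeholder for the real index name):
==========
# index-level merge and refresh stats
GET /my_index/_stats/merge,refresh

# per-node merge stats
GET /_nodes/stats/indices/merge
==========
If the merge stats (e.g. `current` and `total_time_in_millis`) jump at the same moments as the slow queries, I guess that would confirm the suspicion.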