My index is not very large: 1 main index, 3M documents, 1 node, 1 shard, 0 replicas, about 5GB on disk. The disk is a spinning HDD.
Documents have indexed fields: about 12 integer fields, 3 short text fields, and 4 date fields, plus some additional unindexed fields.
My usage pattern is as follows: every day about 10k-15k documents are added to the main index by a background job that runs for about 4 hours. All queries run against this index, 24 hours a day.
I am seeing many slow queries: about 30% of queries take more than 800ms and 7% take more than 1000ms.
The queries filter on several integer fields and use aggregations to count documents on several integer fields as well, with an occasional text search on one short text field of about 80-120 characters. Each query has 3 nested aggregations, with facet-like terms aggregations on three integer fields, roughly as sketched below.
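To make that concrete, here is a simplified sketch of the query shape; the index and field names (main_index, status_code, category_id, type_id, title) are placeholders, not my real mapping:

# simplified sketch: integer filters, optional text match, 3 nested terms aggregations
curl -XPOST 'localhost:9200/main_index/_search?pretty' -d '
{
  "size": 10,
  "query": {
    "bool": {
      "filter": [
        { "term":  { "status_code": 3 } },
        { "range": { "category_id": { "gte": 100, "lte": 200 } } }
      ],
      "must": [
        { "match": { "title": "occasional short text search" } }
      ]
    }
  },
  "aggs": {
    "by_category": {
      "terms": { "field": "category_id" },
      "aggs": {
        "by_status": {
          "terms": { "field": "status_code" },
          "aggs": {
            "by_type": { "terms": { "field": "type_id" } }
          }
        }
      }
    }
  }
}'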
I have applied all the recommended settings for production and spinning disks, and set the refresh interval to 30s (as shown below), since it is not critical for new documents to be immediately available for search.
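For reference, the refresh interval is set via the index settings API, roughly like this (the index name is a placeholder):

# make new documents searchable every 30s instead of the default 1s
curl -XPUT 'localhost:9200/main_index/_settings' -d '
{
  "index": { "refresh_interval": "30s" }
}'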
What I do see is that the number of segments is quite high:

"segments" : {
  "count" : 27,
  "memory_in_bytes" : 5027011,
  "terms_memory_in_bytes" : 3123095,
  "stored_fields_memory_in_bytes" : 1815368,
  "term_vectors_memory_in_bytes" : 0,
  "norms_memory_in_bytes" : 14528,
  "doc_values_memory_in_bytes" : 74020,
  "index_writer_memory_in_bytes" : 541878,
  "index_writer_max_memory_in_bytes" : 124688793,
  "version_map_memory_in_bytes" : 2086,
  "fixed_bit_set_memory_in_bytes" : 0
},
The machine has 16GB of memory, of which 7GB is assigned to the ES heap. CPU load is very low: less than 10%.
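The heap is set via ES_HEAP_SIZE; assuming the stock Debian/Ubuntu package layout, that means a line like this in /etc/default/elasticsearch:

# give Elasticsearch 7GB of the 16GB RAM, leaving the rest for the OS file system cache
ES_HEAP_SIZE=7g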
Disk I/O does not seem to be a problem: iostat typically reports under 10% utilization with occasional peaks of 20%.
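Those utilization figures come from extended iostat output, along these lines:

# extended device stats every second; the %util column stays under ~10%, peaking around 20%
iostat -x 1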
ES version is 2.3.3.
OS is Ubuntu 14.04.
Any tips? How can I detect if there's a problem?