ES size_in_bytes massively increasing for one node, along with search latency

Hi all,

We hit an interesting issue in our ES cluster today: at around 11am, the
size_in_bytes (from _stats, under store) on one node shot up whilst it
shrank on all the other nodes, and during this period query latency on
that node alone was much higher than on all the other nodes, with several
worryingly high peaks.
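
For reference, this is roughly how we're collecting the store size -- the
plain indices stats call mentioned above, plus the per-node stats we graph
it against. The hostname is a placeholder and the exact response paths are
from memory of the 0.90 docs, so treat this as a sketch of our polling
rather than the literal script:

    # cluster-wide store size per index (store.size_in_bytes under each index)
    curl -s 'http://localhost:9200/_stats?pretty'

    # per-node figures -- we read indices.store.size_in_bytes for each node
    curl -s 'http://localhost:9200/_cluster/nodes/stats?pretty'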

Some info on our cluster:

  • Running version 0.90.4
  • There are 5 nodes -- 1 was the master at the time
  • We have around 50 indices, each with 5 shards and replication factor 1
    (typical index settings shown below).
  • Running on the latest Oracle JVM, version 1.7
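
For completeness, each index is created with settings along these lines
(the index name is a placeholder, and the flat settings keys are roughly
what 0.90 returns):

    curl -s 'http://localhost:9200/someindex/_settings?pretty'

    # returns, roughly:
    #   "index.number_of_shards"   : "5",
    #   "index.number_of_replicas" : "1"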

The graphs below show the change in size_in_bytes on each node, along with
the query latency (from a simple curl -- sketched below) and the document
count per node.
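
The latency figure comes from timing a simple curl search, along these
lines (the index name and query here are placeholders for what the check
actually runs):

    curl -s -o /dev/null -w '%{time_total}\n' \
      'http://localhost:9200/someindex/_search?q=*:*'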

I've checked which indices we were writing to during this period, and there
was no bias towards writes hitting shards on any one node more than another
(rough check sketched below).
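
The check itself was nothing fancy -- roughly counting shard routing
entries per node in the cluster state, something like the following (it
counts by node id, includes replicas, and assumes the 0.90 response
layout):

    # count shards assigned to each node id
    curl -s 'http://localhost:9200/_cluster/state?pretty' \
      | grep '"node"' | sort | uniq -c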

With the default logging.yml there's nothing suspicious in the logs.

Does anyone have any idea what might have caused this? I've collected plenty
of other data, so I can provide further stats if you think it would help.

Many Thanks,
Shaun
