One search query brings the whole cluster to its knees

If I run a search (4 Kibana visualisations on a single dashboard) against one week's worth of data (~200 GB of documents with no analyzed fields), the first query times out, then every subsequent search query times out as well, and basically no search operations are possible for 3-5 minutes. Any hints on how I can debug and/or improve this?

A naive guess is that you're running out of memory. Here are some basic steps to help debug the situation:

  • Look in the Elasticsearch logs (usually somewhere like /var/log/elasticsearch). You can often find helpful output there, such as lines indicating the JVM has run out of memory.
  • Watch the stats APIs while you're using Elasticsearch. There are useful metrics there: you can see high-level information such as heap usage from _cat/nodes, and how much memory is being used for field data caching from _cat/fielddata (see the example after this list).
  • Use dashboards. Tools like Marvel, BigDesk, and elasticsearch-hq can all help visualize these metrics, so it's easy to open one of them, load your dashboard, and watch what changes.
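
On the stats point, for example, something like this from the command line (assuming Elasticsearch is listening on localhost:9200; adjust host/port to your setup):

    # Per-node overview; the heap.percent and load columns are the interesting ones
    curl 'localhost:9200/_cat/nodes?v'

    # How much memory each node is spending on field data caching
    curl 'localhost:9200/_cat/fielddata?v'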

Once you find what's happening, remediating the problem becomes a little easier. For example, if aggregations/faceting are pushing the limits of your JVM heap, you can increase ES_HEAP_SIZE, use doc_values, or scale horizontally for more capacity.
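
As a rough sketch of those two knobs (the heap size and the index/field names below are placeholders, not recommendations): ES_HEAP_SIZE is an environment variable read by the Elasticsearch startup scripts, and doc_values can be enabled per field at index-creation time so aggregations work off disk-backed data instead of the heap-resident field data cache.

    # Example only: set the JVM heap before starting Elasticsearch
    export ES_HEAP_SIZE=4g

    # Example only: enable doc_values on a not_analyzed field when creating an index
    curl -XPUT 'localhost:9200/logs-example' -d '{
      "mappings": {
        "event": {
          "properties": {
            "status": { "type": "string", "index": "not_analyzed", "doc_values": true }
          }
        }
      }
    }'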

One other thing I noticed: while CPU and memory consumption stay more or less the same, disk reads skyrocket during this frozen period.

Seems like it's a CPU problem rather than a memory one. Any hints?

One extra question: I have 4 search queries (because that dashboard has 4 widgets); how can I run them and get some analysis on them?

Also, here are the hot threads during this request: http://pastie.org/10184485. I am not a Java expert yet, so it's a bit cryptic to me. Something related to aggregations?

It's beginning to sound like you may be bottlenecking on I/O (apologies if I rehash what you already know here; I'm just writing it out explicitly for others).

While your CPU % is a little jumpy, what sticks out is the load. I can't say for certain what "normal" load is without knowing how many CPU cores you have, but high load includes time the CPU spends waiting for disk (iowait, in the kernel's terms).

This makes sense: issuing a query that spans all your data requires all of it to be read, and if you don't have enough IOPS to spare, Elasticsearch gets starved and can't serve requests because it can't read the data quickly enough.

The next debugging steps are probably more OS-oriented. Some things I'd suggest exploring:

  • Use a tool like iostat, e.g. "$ iostat -x 1", to watch exactly what your CPU and disks are doing during these failures. If %iowait is constantly high, your disks probably can't serve requests quickly enough (see the sketch after this list).
  • Try using elasticsearch-hq for some debugging and look at its diagnostics panel. It'll help highlight bottlenecks and clarify which parts of the stack are slowing things down.
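
A minimal sketch of the first point (iostat usually comes from the sysstat package; the exact column layout varies slightly between versions):

    # Extended device statistics, refreshed every second
    iostat -x 1
    # Watch %iowait in the avg-cpu section and %util per device;
    # values pinned near 100% suggest the disks can't keep up.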

If disk I/O is what's causing problems, there are things you can do to resolve it at both the OS level (RAID, SSDs, I/O scheduler tweaks) and the Elasticsearch level (scaling horizontally, striping data across nodes at the application level, index warmers, etc.).
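
As one concrete example of an OS-level tweak (the device name /dev/xvdb is just a placeholder, and whether noop or deadline actually helps depends on your storage, so benchmark before and after):

    # Check which I/O scheduler the device is currently using
    cat /sys/block/xvdb/queue/scheduler

    # Temporarily switch to the noop scheduler (often used on SSD/EBS-backed instances)
    echo noop | sudo tee /sys/block/xvdb/queue/scheduler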

Alright, thanks a lot for the reply. I'm using r3.large EC2 instances (4 of them) and each has ~70 GB of data on it. Perhaps I should provision more IOPS then. So the problem is that ES tries to read lots of data from disk, and that is why the request takes so long, right? And adding faster disks with more IOPS is going to help?

I'd definitely confirm that I/O starvation is the culprit before spending time/money on more IOPS. :slight_smile: But yes, you're right: with all that data on disk, it needs to be read for your dashboards, and if it can't be read quickly enough, things will start timing out.

Note that Elasticsearch tries to solve some of these problems for you; the Definitive Guide has good reading on this. For example, warmers (admittedly an older technique) can be used to pre-load some data so it's ready for querying.
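
A minimal sketch of registering a warmer with the 1.x API (index name, warmer name, and the aggregation are placeholders; the idea is just to pre-run a query that resembles your dashboard):

    curl -XPUT 'localhost:9200/logs-example/_warmer/dashboard_warmer' -d '{
      "query": { "match_all": {} },
      "aggs": {
        "statuses": { "terms": { "field": "status" } }
      }
    }'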


So yes, iowait jumps to at least 60% and tops out at 90%. I've already provisioned 3 times more IOPS and the issue is still present. I didn't experiment with warmers yet, though.

The good part is that after the first timeout, subsequent queries (same dashboard) take something like 9-11 seconds, which seems okay-ish for ~200 GB of data, right? Or could it be better?

I'd suggest optimizing until you're happy with it. If issuing a warmup query like that, followed by the other queries, is acceptable, then you could call it done and move forward with that.

However, depending on what your resident data looks like, you could also spread out the read load by scaling out. If you have a sufficient number of shards, adding nodes to your cluster spreads the data out and gives you additional disks to divide it between. Elasticsearch will balance the shards so that the query load is spread across multiple machines, giving you more aggregate I/O bandwidth.
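
If you go that route, the cat APIs give a quick view of how shards and disk usage are currently spread across nodes (output columns vary a bit by version):

    # One line per shard: index, shard, primary/replica, docs, size, node
    curl 'localhost:9200/_cat/shards?v'

    # Disk usage and shard counts per node
    curl 'localhost:9200/_cat/allocation?v'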

There are costs here as well once things get more distributed, but it's all about testing it out to see what works well for your data and use case.


Thanks for the awesome support :slight_smile: I will experiment and see over time how to optimise it further.

If this isn't sorted yet, Kirill, the following may help:

  1. Java - give it as much memory as possible... normally around 70-80% of available RAM. It helps in the long run.
  2. By 'not analyzed' fields, do I assume you mean you have no actual mappings, but are relying on the defaults? I would seriously suggest using explicit mappings for your indices; it helps ES index the fields correctly for speed of retrieval (a sketch follows after this list).
    I regularly have a dashboard with 15-odd Kibana graphs open, with the option of table data as well, querying up to 120 GiB of data in around 5-10 s total. I also had the issue of the ES cluster crashing (although that was 18 months ago), until I increased the Java memory.
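
For point 2, a minimal sketch of what an explicit mapping might look like (index, type, and field names here are invented; the point is simply that every field gets a concrete type, with not_analyzed strings where tokenization isn't wanted):

    curl -XPUT 'localhost:9200/logs-example/_mapping/event' -d '{
      "event": {
        "properties": {
          "timestamp": { "type": "date" },
          "status":    { "type": "string", "index": "not_analyzed" },
          "bytes":     { "type": "long" }
        }
      }
    }'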

Hope all is going Ok now.

Peter

Re point 2: so basically I create a mapping for each field, and then this mapping will look like a schema? I turned off analysis mostly because I don't need any fields to be split into tokens :slight_smile:

OK, that sort of makes sense. However, you are then relying on ES to pick up all values from the 'message' field, which is like doing a full-text search without any clues as to where that text may be. It's a long-winded way of doing it.
That said, without knowing your use case it's hard to say where mapping/analysis could help. I understand not wanting tokens for everything, but your most-queried fields could benefit from a mapping. Do your slow logs show anything?
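
On the slow log point: search slow logs can be switched on per index with thresholds, something like the following (index name and thresholds are just examples):

    curl -XPUT 'localhost:9200/logs-example/_settings' -d '{
      "index.search.slowlog.threshold.query.warn": "5s",
      "index.search.slowlog.threshold.fetch.warn": "1s"
    }'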

Peter C