Slow log threshold question

I'm looking through the documentation regarding the slow log, as I'd like to use it to troubleshoot some performance issues we've run into. I'm a bit confused about the thresholds suggested in the documentation, as it seems like they're back to front. For reference, here are the docs I'm referring to: Slow Log | Elasticsearch Guide [8.1] | Elastic.

The search threshold breakdown in that doc is as follows:

```yaml
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms
```

This confuses me a little because it seems to assign the "heaviest" logging to the lowest threshold, i.e. the threshold that is most likely to be triggered.

To be fair, 200ms might still be a pretty high threshold in a lot of use cases, but it still seems like you would want the heaviest logging to be reserved for only those requests that are taking a really long time to be fulfilled.

I thought I'd post here because I may be misunderstanding the rationale behind these thresholds, and I'm hoping someone can clarify the intention before I go ahead and implement this in my own setup.

Trying one bump to see if I can get a response.

Hi @knightsg1, welcome to the community.

Those settings above define the threshold for each logging level. You then need to set the actual logging level, for example:

`index.indexing.slowlog.level: INFO` (or WARN, DEBUG, TRACE)

Then the slow logs that meet the INFO threshold will be shown, and those below will not.

If you set it to TRACE, then any query slower than 500ms (and any fetch slower than 200ms) will be logged. TRACE is a higher level of detail / tracing, so more queries show up.
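To make the threshold side concrete, the query thresholds from the doc can be applied to an existing index via the update index settings API. This is just a sketch; the index name `my-index` is a placeholder for your own index:

```
PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "2s",
  "index.search.slowlog.threshold.query.trace": "500ms"
}
```

With this in place, a query taking e.g. 3s would be logged at DEBUG (it exceeds 2s but not 5s), which is why the lowest threshold belongs to the most verbose level.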


Hope that helps.

OK, thanks Stephen, that was helpful. I guess I missed the "level" setting (as opposed to the threshold settings) when I read through the documentation.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.