Discover: internal server error

Hi,

When I run a query on the 'Discover' tab of Kibana, I get the following error:

Discover: Internal Server Error

SearchError: Internal Server Error
    at http://my-elk:5601/bundles/kibana.bundle.js:2:520531
    at processQueue (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:199687)
    at http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:200650
    at Scope.$digest (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:210412)
    at Scope.$apply (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:213219)
    at done (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:132717)
    at completeRequest (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:136329)
    at XMLHttpRequest.requestLoaded (http://my-elk:5601/built_assets/dlls/vendors.bundle.dll.js:293:135225)

The Elasticsearch logs don't show any errors during this query.

Elasticsearch and Kibana version: 7.0.0

Thanks.

Hi there, thanks for posting here for help!

I'm going to need a bit more information about what's going wrong here in order to help you fix this. Could you post the details of the request error from your browser's dev tools? If you're not familiar with these, this tutorial may help. We're looking for the "preview" of the response that failed (should be highlighted in red in the Network Tab).

Additionally, do you see any errors in the server logs for the Kibana server?

Thanks for the reply @joshdover,

Here's the error from browser dev tools:

Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'nonce-wUN0r11gGb3VgSh0'". Either the 'unsafe-inline' keyword, a hash ('sha256-SHHSeLc0bp6xt4BoVVyUy+3IbVqp3ujLaR+s+kSP5UI='), or a nonce ('nonce-...') is required to enable inline execution.

I don't think the above error is relevant here, but it is the only error I see in the dev tools tab of my browser.

I also keep getting a timeout error, but only for some queries, while queries that match a much larger number of hits execute fine.

Discover: Request Timeout after 60000ms

SearchError: Request Timeout after 60000ms
    at http://my-elk.vse.rdlabs.hpecorp.net:5601/bundles/kibana.bundle.js:2:520531
    at processQueue (http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:199687)
    at http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:200650
    at Scope.$digest (http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:210412)
    at http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:212944
    at completeOutstandingRequest (http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:64425)
    at http://my-elk.vse.rdlabs.hpecorp.net:5601/built_assets/dlls/vendors.bundle.dll.js:293:67267

In the above case too there is no error in the Elasticsearch logs, and the browser dev tools don't show any error either.

The overall search experience in Kibana has been very poor for me so far. Kibana almost always hangs and needs a reload. (It might be partly because we are experiencing a mapping explosion.) But do any other factors contribute to a poor experience in Kibana?

Thanks.

It might be partly because we are experiencing a mapping explosion

This sounds like it's most likely the issue. What's causing this mapping explosion? Are you using a broad index pattern that matches many indices with different mappings?

It may also be a problem with your Elasticsearch configuration.

  • It's possible you're running out of JVM Memory or maxing out the machine's CPU. Do you see any indications of this on the Elasticsearch cluster?
  • It's also possible that one or more of your indices has too few shards which is limiting Elasticsearch's ability to parallelize the work. I recommend reading this blog post about how to tune your shard configuration for optimizing performance if it looks like this is an issue.

What's causing this mapping explosion? Are you using a broad index pattern that matches many indices with different mappings?

Yes, we have a relatively broad index pattern, but that is required because all the indices matching that pattern store similar data. We also have a lot of fields per index; currently we have around 5000 fields. We are planning to solve this by using nested fields. Is there a better way? Can nested fields (which fit our requirements well) solve this problem effectively?

  • It's possible you're running out of JVM Memory or maxing out the machine's CPU. Do you see any indications of this on the Elasticsearch cluster?

We have a bit more than 65 GB of heap configured for Elasticsearch. I checked CPU usage, which is a bit high but reasonable (load average 2.5), and only on the master. All other nodes have a load average below 1.

Speaking of resource constraints, I sometimes see the error below in the Elasticsearch log:

Caused by: java.lang.IllegalArgumentException: ReleasableBytesStreamOutput cannot hold more than 2GB of data
at org.elasticsearch.common.io.stream.BytesStreamOutput.ensureCapacity(BytesStreamOutput.java:156) ~[elasticsearch-7.0.0.jar:7.0.0]

I don't completely understand the above error.

It's also possible that one or more of your indices has too few shards which is limiting Elasticsearch's ability to parallelize the work.

We have 2 shards per index and the average shard size is around 8-10 GB; the total number of shards in the cluster is 140. Is it possible that some of the overgrown shards (2 of my shards are about 30 GB) are causing issues? Other than those, all my shards are well below 10 GB.

I think it's reasonable to have 10GB shards.

So I'm not totally sure whether it's the mapping explosion or the 2 overgrown shards that are causing the timeout issues and long-running queries (with Kibana hanging).

We are planning to solve this by using nested fields. Is there a better way? Can nested fields (which fit our requirements well) solve this problem effectively?

Nested fields won't reduce the number of fields Elasticsearch has to search over. Each indexed field will still count as a field, regardless of whether or not it's nested.
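
To illustrate (with hypothetical index and field names), a mapping along these lines still counts parent.child_a and parent.child_b as two separate fields toward index.mapping.total_fields.limit, whether or not parent is nested:

PUT my-index
{
  "mappings": {
    "properties": {
      "parent": {
        "type": "nested",
        "properties": {
          "child_a": { "type": "keyword" },
          "child_b": { "type": "keyword" }
        }
      }
    }
  }
}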

5000 fields is high, but not unprecedented. Metricbeat for example uses over 4000 fields by default, though not all are populated.

Caused by: java.lang.IllegalArgumentException: ReleasableBytesStreamOutput cannot hold more than 2GB of data
at org.elasticsearch.common.io.stream.BytesStreamOutput.ensureCapacity(BytesStreamOutput.java:156) ~[elasticsearch-7.0.0.jar:7.0.0]

This does indicate that some search responses are exceedingly large. Do you see this happen around the same time that the errors in Discover show up?

Admittedly, we're reaching the end of my Elasticsearch performance knowledge. Let me find someone who knows more than me to help out here :slight_smile:


For the first error you asked about (SearchError: Internal Server Error), does this happen consistently? It could indicate an issue with permissions if you're using Security.

@elk11 Could you post the full stack trace of the error you're getting?

30G shards won't be an issue - in fact, we recommend roughly 50G shards, and the only reason not to go to more like 100G or 150G is that moving shards between nodes (e.g. for recovery in the event of a node failure) will take longer.

Is that 65G of heap per node? That's much higher than our recommendations - this blog post is a couple years old, but still a good explanation of why we recommend ~30G or smaller heaps for Elasticsearch. In fact, since then, we've moved some things off-heap so it's even more important to have a good amount of memory available outside of the configured heap space. That won't be the cause of the particular search errors you're posting about, but more of a general performance recommendation.
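
(For reference, the per-node heap is set in config/jvm.options; the 30g below is just the illustrative ceiling from that recommendation, not a value tuned for your cluster.)

# config/jvm.options - set min and max heap to the same value, at or below ~30GB,
# leaving the rest of the machine's RAM for the filesystem cache
-Xms30g
-Xmx30g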


@joshdover

Nested fields won't reduce the number of fields Elasticsearch has to search over.

I have many fields like 'common_part.var_part: value'. I was planning to make a nested field called 'common_part', which is common across all those fields. Won't that reduce the number of fields? I followed the Six Ways to Crash Elasticsearch | Elastic Blog post.

For the first error you asked about (SearchError: Internal Server Error), does this happen consistently? It could indicate an issue with permissions if you're using Security.

No, it doesn't happen consistently. And I have not set up any security.

@gbrown

Could you post the full stack trace of the error you're getting?

[180570] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [my-elk01.vse.rdlabs.hpecorp.net][172.17.0.2:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: ReleasableBytesStreamOutput cannot hold more than 2GB of data
        at org.elasticsearch.common.io.stream.BytesStreamOutput.ensureCapacity(BytesStreamOutput.java:156) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput.ensureCapacity(ReleasableBytesStreamOutput.java:70) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.io.stream.BytesStreamOutput.writeBytes(BytesStreamOutput.java:90) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.CompressibleBytesOutputStream.writeBytes(CompressibleBytesOutputStream.java:85) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.io.stream.StreamOutput.write(StreamOutput.java:474) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.bytes.BytesReference.writeTo(BytesReference.java:92) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.io.stream.StreamOutput.writeBytesReference(StreamOutput.java:206) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchHit.writeTo(SearchHit.java:227) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchHits.writeTo(SearchHits.java:120) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.fetch.FetchSearchResult.writeTo(FetchSearchResult.java:102) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundMessage.writeMessage(OutboundMessage.java:70) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundMessage.serialize(OutboundMessage.java:53) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundHandler$MessageSerializer.get(OutboundHandler.java:107) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundHandler$MessageSerializer.get(OutboundHandler.java:93) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundHandler$SendContext.get(OutboundHandler.java:140) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundHandler.internalSendMessage(OutboundHandler.java:78) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.OutboundHandler.sendMessage(OutboundHandler.java:70) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:738) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransport.sendResponse(TcpTransport.java:722) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TcpTransportChannel.sendResponse(TcpTransportChannel.java:64) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:54) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:47) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.action.support.ChannelActionListener.onResponse(ChannelActionListener.java:30) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService$3.doRun(SearchService.java:380) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]

Is that 65G of heap per node?

No, it is across the cluster.

I'm getting those timeout errors I mentioned above many times, and it happens only for a particular kind of query. Much bigger queries (in terms of returned results) execute fine. And there are no errors in the Elasticsearch logs, in particular when those timeouts occur.

Although I do sometimes see the error below during some of the timeouts:

Caused by: org.elasticsearch.index.query.QueryShardException: failed to create query: {
  "bool" : {
    "must" : [
      {
        "query_string" : {
          "query" : "key: \"field\"",
          "fields" : [ ],
          "type" : "best_fields",
          "default_operator" : "or",
          "max_determinized_states" : 10000,
          "enable_position_increments" : true,
          "fuzziness" : "AUTO",
          "fuzzy_prefix_length" : 0,
          "fuzzy_max_expansions" : 50,
          "phrase_slop" : 0,
          "analyze_wildcard" : true,
          "escape" : false,
          "auto_generate_synonyms_phrase_query" : true,
          "fuzzy_transpositions" : true,
          "boost" : 1.0
        }
      },
      {
        "range" : {
          "timestamp" : {
            "from" : null,
            "to" : null,
            "include_lower" : true,
            "include_upper" : true,
            "boost" : 1.0
          }
        }
      }
    ],
    "adjust_pure_negative" : true,
    "boost" : 1.0
  }
}
        at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:309) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:292) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.parseSource(SearchService.java:755) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.createContext(SearchService.java:608) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:583) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:386) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.access$100(SearchService.java:124) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:358) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:354) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]
Caused by: java.lang.IllegalArgumentException: field expansion matches too many fields, limit: 3000, got: 3387
        at org.elasticsearch.index.search.QueryParserHelper.checkForTooManyFields(QueryParserHelper.java:161) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.search.QueryParserHelper.resolveMappingField(QueryParserHelper.java:154) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.search.QueryStringQueryParser.<init>(QueryStringQueryParser.java:140) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.QueryStringQueryBuilder.doToQuery(QueryStringQueryBuilder.java:860) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:99) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.BoolQueryBuilder.addBooleanClauses(BoolQueryBuilder.java:394) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.BoolQueryBuilder.doToQuery(BoolQueryBuilder.java:378) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.AbstractQueryBuilder.toQuery(AbstractQueryBuilder.java:99) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.QueryShardContext.lambda$toQuery$1(QueryShardContext.java:293) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:305) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.index.query.QueryShardContext.toQuery(QueryShardContext.java:292) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.parseSource(SearchService.java:755) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:583) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:386) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService.access$100(SearchService.java:124) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:358) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1069) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-7.0.0.jar:7.0.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-7.0.0.jar:7.0.0]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:835) ~[?:?]

To solve this I updated my settings to increase the field limit to 4000, but I still get the same error, with 3000 as the field expansion limit. Is there a different setting for the limit on field expansion matches? If there is, what's the default value? I never configured such a thing, but the error shows it as 3000 (so I had assumed it would default to the value of the limit on the number of fields).

From that stack trace, it looks like the returned result set is larger than 2G, which is larger than the amount that Elasticsearch can return in a single request. When do you see this happen? Does this occur when you're making a particular query in the Dev Tools? I would be a bit surprised if Kibana is trying to get that much data all at once. If you want to retrieve very large amounts of data like this, check out the Scroll API which can be used to read the results of a search out across multiple requests.
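
For reference, a minimal scroll sketch (hypothetical index name and page size) looks roughly like this:

POST /my-index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

# keep paging with the scroll_id returned by each response until no hits remain
POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<scroll_id from the previous response>"
}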

For the second error you posted, with this error message:

field expansion matches too many fields, limit: 3000, got: 3387

That's related to this breaking change. This isn't super well documented - I'll look into fixing that - but it's briefly mentioned in the Query String Query docs. Set the index-level setting index.query.default_field to an array of fields that you want the Kibana search bar to search by default. This should probably include any text fields, and may include keyword fields, but is less likely to include numeric or geospatial fields, for example; you can set it to whatever fulfills your needs.
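
For example (hypothetical index and field names), it can be applied to an existing index with the settings API; you'd also want to add it to your index templates so new indices pick it up:

PUT /my-index/_settings
{
  "index.query.default_field": ["message", "host.name", "some_other_text_field"]
}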

Note that there's also an Upgrade API in Kibana that helps out with adding this setting to indices.

From that stack trace, it looks like the returned result set is larger than 2G, which is larger than the amount that Elasticsearch can return in a single request. When do you see this happen?

@gbrown, I agree that the result set might be larger than 2G. But queries which are much larger than the failing query get executed without any error. (The failing query is only a subset of some of the passing queries.)

Does this occur when you're making a particular query in the Dev Tools?

No, I see those errors (the timeout error and the internal server error) when making queries from the 'Discover' tab of Kibana.

That's related to this breaking change. This isn't super well documented - I'll look into fixing that - but it's briefly mentioned in the Query String Query docs.

Thanks for this info; I was unaware of it. But I don't completely understand which queries use 'automatic expansion of fields'. As I understand it, when we just search for some text without specifying a field name, the text is by default searched in all fields. Am I right? If I am, this should not be a problem in my case, because in all my queries I specify which field to search on. For example, in the Kibana Discover tab I use:

key: 'value'

kind of queries. I even tried with 'filters' on the Discover tab; no luck. So I'm assuming here that I don't need to set a value for index.query.default_field if my queries specify which field to search on in a pattern like the above.

To be very sure that automatic field expansion is not the problem, I performed a query using a 'filter' on the 'Discover' tab of Kibana. I used the .keyword field this time.

key.keyword: value

In this case, I don't get any error in the Elasticsearch logs, but I still get a 'Timeout error' in Kibana.

@joshdover, I noticed very strange behavior here in Kibana. A query on either the normal field or the keyword field via a filter throws a 'Timeout error' in Kibana, but there is no error in the Elasticsearch logs.

But queries on either the normal field or the keyword field in the Kibana search bar throw an error in the Elasticsearch logs (the field expansion limit error) in addition to the 'Timeout error' in Kibana. I have almost always used filters and the search bar interchangeably. Was I wrong in doing so? I always expected both to give the same result, provided that I specify the field name in the search bar too.

I'm really confused now.

Odd - do you have some documents that are much larger (either because of one very large field or having many more fields) than others?

The only queries that use automatic field expansion are Query String (query_string), Simple Query String (simple_query_string) and Multi-Match (multi_match). I think Kibana's Discover uses query_string by default, so that's what you're running into here.

I'm not 100% sure if this is true for query_string without trying it, but I think this is correct. However, I would still set it for indices with a large number of fields, for the convenience factor and to prevent queries from accidentally failing if you don't specify a field.

Kibana has a timeout for how long it will wait for a result from Elasticsearch, which defaults to 30 seconds. If a query takes longer than that to return, Kibana will show a timeout error, even if Elasticsearch is able to successfully complete the query after the timeout. You can adjust this with the Kibana setting elasticsearch.requestTimeout (docs here).
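
For example, in kibana.yml (the 90 seconds here is just an illustrative value):

# kibana.yml - time, in milliseconds, to wait for responses from Elasticsearch
elasticsearch.requestTimeout: 90000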

I'm very surprised that a simple query from the Discover tab is timing out though - my guess is that either your cluster is significantly underpowered for the amount of work it's doing, or your documents are extremely large, or both.

Yes, that's most certainly the case, but I do not know how to get the size of the documents stored in an index. Is there a way I can get the size of documents stored in Elasticsearch?

You can adjust this with the Kibana setting elasticsearch.requestTimeout

We currently have this set to 1 minute. Do you think we should increase it?

my guess is that either your cluster is significantly underpowered for the amount of work it's doing, or your documents are extremely large, or both.

Is 65 GB of heap too little for a cluster with 140 shards of 8 GB (avg) each? I suspect my documents are large, but I don't know how to prove that (i.e. how to get the average size of documents in an index).

There's not a direct way, but you can find the average document size by using the cat indices API:
GET /_cat/indices?v
and dividing the pri.store.size column by the docs.count column. This is just an average, but it may provide some insight.
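
For example (hypothetical index name and numbers), restricting the output to the relevant columns makes the arithmetic easier:

GET /_cat/indices/my-index-*?v&h=index,docs.count,pri.store.size&bytes=b

# e.g. pri.store.size = 20000000000 bytes and docs.count = 5000000
# gives roughly 4000 bytes (~4 KB) per document on average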

This is up to you - how long are you willing to wait for queries to be returned? There's no inherent harm in raising the timeout, but it will cause Kibana to wait longer for results. It also may encourage users to run long-running queries, which can result in more load on your cluster, but that's only a possibility and depends on the people using Kibana.

That's getting a little tight, but likely not enough to cause errors like you're seeing unless there are other problems (like document size). If your data is growing I'd look into increasing the size of your cluster soon, or reducing memory usage by freezing or removing old data. Doing so may help with the current issues, but it's not a guarantee.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.