Discover: Request Timeout after 30000ms / mapping size is larger than 10MB (version 6.3.1)

I am getting a request timeout any time I do an across-all-fields search in Kibana. This used to work fine, and it is only over a 15-minute timespan, but I can't figure out what is going on.

It seems it may only be my Beats indices that have the issue (in two separate clusters), but I'm unsure where to start. I'm thinking it is something to do with the mappings for these indices.

In the developer console, when I'm using the query profiler, I'm seeing the following, and I think it's related:
kibana.bundle.js:3 mapping size is larger than 10MB (16.564489364624023 MB). ignoring..
That is happening even when I specify the field to search.

I found my cluster state is 59 MB... I'm guessing that could also be part of the problem?
What can I do about it?
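A rough way to check the cluster state size, assuming Elasticsearch is reachable on localhost:9200, is to count the bytes of the full cluster state response:

# rough check: size of the serialized cluster state in bytes
curl -s 'http://localhost:9200/_cluster/state' | wc -c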

The request timeout can be bumped up using elasticsearch.requestTimeout in kibana.yml.
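For example (the value is in milliseconds; 120000 below is just an illustration, pick what suits your environment):

# kibana.yml -- raise the Elasticsearch request timeout (milliseconds)
elasticsearch.requestTimeout: 120000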

Alternatively, we need to find a way to limit the search or improve the cluster performance. At the Kibana level: is limiting the number of fields you're searching across an option? Maybe less data (e.g. increasing the Beats collection interval, as sketched below)? At the Elasticsearch level, we can dig into the number of shards, replicas, and hardware.
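As an illustration of the interval idea, in Metricbeat the collection frequency is controlled by the module period setting (a sketch, not your actual config):

# metricbeat.yml -- illustrative: collect system metrics every 60s instead of the 10s default
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory"]
    period: 60s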

Hi @Asa_Zalles-Milner,

Can you provide more cluster details, like how many nodes, how many indices, and what the cluster size is?

So I did discover something interesting after this post.
The query only times out in Kibana; if I run a direct Elasticsearch query (roughly the one sketched below), it returns in under the timeout.
The use case is "someone doesn't know which field the data is in" and needs to do a quick query to find the data so they can filter down to the index. I find it very odd that both Metricbeat and Filebeat exhibit the problem, and the two clusters have different amounts of data.
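The direct query was along these lines (a reconstruction, run in Dev Tools against the Filebeat index pattern):

GET filebeat-*/_search
{
  "query": {
    "query_string": {
      "query": "env.overall:prod AND \"app-name\"",
      "default_field": "*"
    }
  }
}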

Two different clusters are exhibiting the same behavior, and three different sets of indices in each cluster exhibit the same problem; all are set to 1 shard with 1 replica.
Data nodes are 32 GB RAM, 16 vCPU, 1 TB of disk (c5.4xlarge).

Cluster 1 is 24 data nodes, 929 indices.
Cluster 2 is 44 data nodes, 910 indices.

It doesn't feel like the 30-second timeout is really the issue; it feels like something else is wrong with how those indices are being queried by Kibana. This is happening even when I just search the two individual index patterns of metricbeat and filebeat.

Okay, interesting. Kibana's query probably isn't a direct match for what's being queried directly. I'd tend to agree that something's wrong on the Kibana side; that sounds like plenty of hardware for querying Metricbeat data.

If you get a chance, can you share the _msearch query and timestamps from your browser's developer tools, and whether there's anything surrounding it that looks out of place?

Here are the exact Lucene queries being used. I'm mostly looking at Filebeat right now, since that is what clued me in, but it is also happening on the Metricbeat index.
Does NOT work: env.overall:prod AND "app-name"
Does work: env.overall:prod AND message: "app-name"

I have 3 different filebeat indices (for different ENV) and all of them are exhibiting this same problem.

I am only looking back 15 minutes and it is timing out, by the way.

Here is the request payload, I believe (fished out of the Kibana traffic and then adjusted to not query message specifically):
{
  "version": true,
  "size": 1000,
  "sort": [
    {
      "@timestamp": {
        "order": "desc",
        "unmapped_type": "boolean"
      }
    }
  ],
  "_source": {
    "excludes": []
  },
  "aggs": {
    "2": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "30s",
        "time_zone": "UTC",
        "min_doc_count": 1
      }
    }
  },
  "stored_fields": [
    "*"
  ],
  "script_fields": {},
  "docvalue_fields": [
    "@timestamp",
    "event.created",
    "suricata.eve.flow.end",
    "suricata.eve.flow.start",
    "suricata.eve.timestamp",
    "suricata.eve.tls.notafter",
    "suricata.eve.tls.notbefore"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "query_string": {
            "query": "env.overall:prod AND \"container-service-proxy\"",
            "analyze_wildcard": true,
            "default_field": "*"
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1565115340106,
              "lte": 1565116240106,
              "format": "epoch_millis"
            }
          }
        }
      ],
      "filter": [],
      "should": [],
      "must_not": []
    }
  },
  "highlight": {
    "pre_tags": [
      "@kibana-highlighted-field@"
    ],
    "post_tags": [
      "@/kibana-highlighted-field@"
    ],
    "fields": {
      "*": {}
    },
    "fragment_size": 2147483647
  }
}

Do you think a mapping size greater than 10 MB is okay? (I think not, since I can get the result querying directly...) But just asking.
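A rough way to check the mapping size, assuming Elasticsearch on localhost:9200 and a filebeat-* index pattern, is to count the bytes of the mapping response:

# rough check: size of the serialized mappings in bytes
curl -s 'http://localhost:9200/filebeat-*/_mapping' | wc -c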

Not seeing any sign of an error in the logs or anything like that.

Any ideas?

Yes, I would tend to agree that the mapping size combined with the full-text query AND "container-service-proxy" is adding some work. Does the query work with just env.overall:prod? Can we limit the search for container-service-proxy to a single field?

Otherwise things look mostly okay; nothing jumps out at first glance.

Works just fine.
It is ONLY "quoted string" searches that fail when a field is not specified.
Since the use case is being able to find something in any field, that is what I'm trying to figure out: why is it not working?

How would I go about reducing the mapping size?

For me, this is occurring in 6.8.0, in the Management tab for Saved Objects and in Reporting.
It doesn't appear to cause a problem, just an annoyance.
