It's quite likely that Elasticsearch isn't able to respond to the query. 502 generally means that Elasticsearch failed, and it's a good idea to check the Elasticsearch logs to see what happened. Wildcard queries can be expensive, and on large indexes with many documents, it's possible to cause Elasticsearch to run out of memory.
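For what it's worth, a leading-wildcard query like the sketch below is the kind of thing that can do it (the index and field names here are just made up for illustration), since Elasticsearch has to walk every term in the field to match it:

GET my-index/_search
{
  "query": {
    "wildcard": {
      "message": "*timeout*"
    }
  }
}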
[2018-01-31T14:43:21,812][DEBUG][o.e.a.s.TransportSearchAction] [client-node-b] [88818] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [data-node-b][10.65.226.154:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [apiVersion] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:336) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:111) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase.hitExecute(DocValueFieldsFetchSubPhase.java:64) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:164) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:426) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:403) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:400) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1533) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.0.jar:5.6.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
[2018-01-31T14:43:21,817][DEBUG][o.e.a.s.TransportSearchAction] [client-node-b] [29180] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [data-node-c][192.168.1.100:9300][indices:data/read/search[phase/fetch/id]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [apiVersion] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:336) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:111) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.fetch.subphase.DocValueFieldsFetchSubPhase.hitExecute(DocValueFieldsFetchSubPhase.java:64) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.fetch.FetchPhase.execute(FetchPhase.java:164) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:426) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:403) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:400) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1533) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-5.6.0.jar:5.6.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.6.0.jar:5.6.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_152]
This is the Elasticsearch log output we are getting while doing a wildcard search.
Could you please provide a guideline to fix this issue?
Ah, so it's not a memory issue, it's an indexing/query issue. The key is this line from the Elasticsearch output:
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [apiVersion] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
The problem is that fielddata in Elasticsearch is disabled by default on text fields, and I suspect you are trying to query a text field. The reason it's disabled by default is that building fielddata for raw text is slow and expensive (resource-wise).
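If you really do need fielddata on that raw text field, the change the error message suggests would look something like this on 5.x (the index and mapping type names here are placeholders; apiVersion comes from your error), but keep in mind it builds fielddata on the heap and can use a lot of memory:

PUT my-index/_mapping/my-type
{
  "properties": {
    "apiVersion": {
      "type": "text",
      "fielddata": true
    }
  }
}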
Instead, you'll want to query on a keyword field, which is not analyzed and is stored in a way that's quick to search, sort, and aggregate on. You can dual-index your text fields to get this, something I believe Logstash does out of the box, and something that is outlined on that page. Basically, you'll use the field name directly to search the analyzed text version of the value, and fieldname.keyword to get the exact, unanalyzed version.
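Roughly, a dual-indexed mapping looks like this (index and type names are placeholders again; this mirrors what the default Logstash template sets up with a keyword sub-field):

PUT my-index
{
  "mappings": {
    "my-type": {
      "properties": {
        "apiVersion": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}

With that in place, wildcard searches and Kibana aggregations can hit apiVersion.keyword instead of the analyzed apiVersion field, for example:

GET my-index/_search
{
  "query": {
    "wildcard": {
      "apiVersion.keyword": "1.*"
    }
  }
}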
Check out the "Before enabling fielddata" section of those docs. You may need to re-index your data, and if so, check out the Reindex API.
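If you do end up reindexing, the request itself is pretty simple (the index names below are just examples; you'd create the new index with the corrected mapping first, then repoint your index pattern or alias at it):

POST _reindex
{
  "source": {
    "index": "my-index"
  },
  "dest": {
    "index": "my-index-v2"
  }
}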
Putting * in the query bar is a little redundant; you'll get all records by default anyway. But that should certainly not cause errors. I don't get the same behavior though, at least on my version... what version of the stack are you running?
Hi @Joe_Fleming, while trying to create an index pattern we are also getting the same error, and in that case no errors are displayed in the Elasticsearch logs.
Please note that we are trying to visualize CloudTrail logs, which have a lot of fields.
Error: 502 Response
at https://search.node.com/bundles/kibana.bundle.js?v=15523:27:1911
at processQueue (https://search.node.com/bundles/commons.bundle.js?v=15523:38:23621)
at https://search.node.com/bundles/commons.bundle.js?v=15523:38:23888
at Scope.$eval (https://search.node.com/bundles/commons.bundle.js?v=15523:39:4619)
at Scope.$digest (https://search.node.com/bundles/commons.bundle.js?v=15523:39:2359)
at Scope.$apply (https://search.node.com/bundles/commons.bundle.js?v=15523:39:5037)
at done (https://search.node.com/bundles/commons.bundle.js?v=15523:37:25027)
at completeRequest (https://search.node.com/bundles/commons.bundle.js?v=15523:37:28702)
at XMLHttpRequest.xhr.onload (https://search.node.com/bundles/commons.bundle.js?v=15523:37:29634)
If you're not seeing any errors from Elasticsearch, are you seeing any errors from the Kibana server? There should be some useful output somewhere; the error you see in the browser here isn't helpful, since it's coming from a problem on the server.