Bad Gateway Errors in Discover in Kibana 6.4 on some indices

After upgrading to 6.4 in Elastic Cloud and using the Discover feature in Kibana, some indices now display "Bad Gateway" errors. If the timespan is short and there are no records, the "no records" message displays correctly. After expanding the timespan, the error appears.

User: "superadmin" role

The stack trace of the error:

 Fatal Error
Courier fetch: Unable to connect to the server.
Version: 6.4.0
Build: 17929
Error: Bad Gateway
    at respond (https://SNIP/bundles/vendors.bundle.js:313:149378)
    at checkRespForFailure (https://SNIP/bundles/vendors.bundle.js:313:148589)
    at https://SNIP/bundles/vendors.bundle.js:313:157823
    at processQueue (https://SNIP/bundles/vendors.bundle.js:197:199684)
    at https://SNIP/bundles/vendors.bundle.js:197:200647
    at Scope.$digest (https://SNIP/bundles/vendors.bundle.js:197:210409)
    at Scope.$apply (https://SNIP/bundles/vendors.bundle.js:197:213216)
    at done (https://SNIP/bundles/vendors.bundle.js:197:132715)
    at completeRequest (https://SNIP/bundles/vendors.bundle.js:197:136327)
    at XMLHttpRequest.requestLoaded (https://SNIP/bundles/vendors.bundle.js:197:135223)

I am seeing the same issue in our environment after it was upgraded to 6.4.0 in Elastic Cloud. The most curious thing is that not all index patterns are affected. For me, logs-* works fine, but its subset logs-app-* does not.

I am really running out of explanations here. It is probably worth mentioning that the broken pattern used to be set as the default for Discover search.

Same problem for me. In my case I use an alias ("auftrag") which also contains index names with more than one dash: "auftrag-2017-20180831-092145-2291992359". Is it a problem with index names that contain more than one dash?
(6.4.0 on CentOS)

It failed even for indices without hyphens for us.

I just got a response from Elastic Support, and they managed to fix the issue from their side. What they did is still a mystery, but at least a fix is possible.

Mine hasn't been fixed yet, unfortunately. Also, if the recommendation is to post inquiries here, it would be nice if someone at Elastic acknowledged them and updated the status here.

But of course! I did reference this discussion in my support ticket. I also asked what to do next time when upgrading to 6.4. This time it was just the Sandbox cluster for us, but one day the Production cluster will need to be upgraded too. :thinking:


Some insight into the fix that the Elastic Support team applied to our cluster (and which was successful in our case):

Your issue is related to a recently discovered bug with no fix committed to a specific Kibana version yet. If you will be migrating data from your old cluster to a new one, check the status of this issue before you go to prod. If it is fixed by the time you go to prod, deploy on the version with the fix. If it is not fixed yet, then please open a new case and we will make the necessary change to your new cluster before you go live. Please be sure to give us a few days' notice.

Hopefully that helps others who have not started their upgrade yet.

We are also seeing the same issue on our Production instance. I hope there's a fix soon!

Thanks for posting. If you're having trouble, you can contact our Support for help.

https://www.elastic.co/support/welcome/cloud

See the "How do I open a case?" section on that page.

We are running a self-hosted solution with a Basic license; will we still be able to receive support?

If you have a support subscription, yes.

The comment from Chris was for anyone using Elastic Cloud.

This issue has been fixed on Elastic Cloud Service.

This issue manifests only when running Kibana/ES behind an HTTP proxy. As such, this is not specifically a Cloud issue, but since we use a proxy, our users are definitely seeing this.

The underlying issue is that a Kibana _msearch query generates deprecation warnings in ES, which are returned via the HTTP Warning header. ES does not cap the number of warning headers in an HTTP response. The flood of deprecation messages overwhelms the proxy, resulting in a 502.

As I mentioned before, this bug can affect all users running ES/Kibana behind a proxy, so we're discussing an ES patch. To mitigate this issue, Cloud has set a limit on the number of HTTP warning messages ES can generate.
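
For anyone who wants to check whether a cluster is hitting this, below is a minimal sketch that sends an _msearch request and counts the Warning headers in the response. It assumes an unsecured ES node on localhost:9200 and uses logs-app-* as a placeholder index pattern; adjust both for your environment (Cloud clusters will also need HTTPS and credentials).

    # Minimal sketch (Python standard library only): count the HTTP Warning
    # headers that ES attaches to an _msearch response. localhost:9200 and
    # the index pattern "logs-app-*" are assumptions; change them as needed.
    import http.client
    import json

    HOST, PORT = "localhost", 9200       # assumed local, unsecured ES node
    INDEX_PATTERN = "logs-app-*"         # placeholder for an affected pattern

    # _msearch bodies are newline-delimited JSON: a header line, then a query line.
    body = (
        json.dumps({"index": INDEX_PATTERN, "ignore_unavailable": True}) + "\n"
        + json.dumps({"size": 0, "query": {"match_all": {}}}) + "\n"
    )

    conn = http.client.HTTPConnection(HOST, PORT)
    conn.request("POST", "/_msearch", body,
                 headers={"Content-Type": "application/x-ndjson"})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection closes cleanly

    # http.client keeps duplicate headers, so get_all() returns every Warning line.
    warnings = resp.headers.get_all("Warning") or []
    print(f"HTTP {resp.status}: {len(warnings)} Warning header(s)")
    for w in warnings[:5]:               # print a small sample of the messages
        print(" ", w)
    conn.close()

If the count is large, the response is a good candidate for overwhelming a proxy's header buffer as described above.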


This is not fixed on the Elastic Cloud Service for me yet. I can click the Discover tab in Kibana and navigate to several indices, and the error as illustrated in the screenshot still appears.

Can you create a support ticket with the cluster ID and index name so we can dig into it?

I did, and there has been no response from anyone since last week: Case #00257241

Now solved for my instance too. Thanks.
