Courier fetch error when creating index pattern (5.5.1)

Hello,

We have successfully deployed the ELK 5.5.1 stack to our GCE cluster. Elasticsearch seems to run fine and receives data through Logstash (indexes are created and populated correctly; the cluster state is green).

However, Kibana is not able to create the index pattern with default settings. After pressing the Create button in the Configure an index pattern view, there is an XHR request to the _mget endpoint which fails with a 400 Bad Request error, and the following fatal error message is shown:

Courier Fetch Error: unhandled courier request error: [action_request_validation_exception] Validation Failed: 1: id is missing for doc 0;

Version: 5.5.3
Build: 15460

Error: unhandled courier request error: [action_request_validation_exception] Validation Failed: 1: id is missing for doc 0;
handleError@https://host/bundles/kibana.bundle.js?v=15460:229:17545
(...)

Other related information:

  • X-Pack plugin is disabled.
  • The same error occurs for Kibana versions 5.5.1 up to 5.5.3.
  • Kibana is hosted from a subpath by a reverse proxy. Requests are rewritten correctly as /subpath// in the logs.

Please let me know if I should provide more details.
Thanks!

XHR requests leading to exception

POST /es_admin/.kibana/index-pattern/logstash-*/_create
{"title":"logstash-*","timeFieldName":"@timestamp","notExpandable":true}

200 OK
ok
POST /es_admin/_mget
{"docs":[{"_index":".kibana","_type":"index-pattern"}]}

400 Bad Request
{"error":{"root_cause":[{"type":"action_request_validation_exception","reason":"Validation Failed: 1: id is missing for doc 0;"}],"type":"action_request_validation_exception","reason":"Validation Failed: 1: id is missing for doc 0;"},"status":400}

Hi Maciej, thanks for sharing such great details about your issue! This is very odd, because the error is basically saying that the _id property is expected, and Kibana should be sending it along when you create an index pattern. Could you share a screenshot of how you're configuring your index pattern in the "Configure an index pattern" view?

Thanks,
CJ

Sure! Basically, the default parameters are used.

[screenshot of the Configure an index pattern view]

I forgot to mention: we also tried creating the index pattern by hand with the following queries:

POST .kibana/index-pattern/logstash-*/_create
{"title":"logstash-*","timeFieldName":"@timestamp"}

POST .kibana/index-pattern/logstash-*/_create
{"title":"logstash-*","timeFieldName":"@timestamp","notExpandable":true}

However, both lead to a distorted user interface (while _cat/health reports 100% active shards):

[screenshot of the distorted UI]

and the following error is present in the console:

Error: indexPattern.fields.byName is undefined
isSortable@https://__host__/__subpath__/bundles/kibana.bundle.js?v=15460:250:4092
(...)

so we reverted this with:

DELETE .kibana/index-pattern/logstash-*

Hi @mg6,

Hmm. I'm not quite sure what's going on, but can you get yourself back into the broken environment and show me the results of:

POST .kibana/index-pattern/_search
{
}

Feel free to *** out any sensitive data if necessary.

Thanks

Hello @chrisronline,

This is the output in the distorted UI state:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 1,
    "max_score": 1,
    "hits": [
      {
        "_index": ".kibana",
        "_type": "index-pattern",
        "_id": "logstash-*",
        "_score": 1,
        "_source": {
          "title": "logstash-*",
          "timeFieldName": "@timestamp",
          "notExpandable": true
        }
      }
    ]
  }
}

Regards

Thanks @mg6

Below is an animated gif of my attempt to reproduce this problem. I understand why the latter error occurs (Error: indexPattern.fields.byName is undefined; as your _search output shows, the newly created index-pattern document has no fields property yet), but I don't understand why Kibana isn't auto-fixing itself. In the gif below, you'll see that I'm attempting to follow the same steps as you, but after I load the Discover area, you'll see a series of network requests that fixes the index pattern by populating it with the appropriate fields from the indices in ES.
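For context, that repair amounts to writing the discovered field list back into the saved object. A minimal sketch of such an update (the exact request Kibana issues may differ; in 5.x the fields property is stored as a JSON-encoded string):

POST /es_admin/.kibana/index-pattern/logstash-*/_update
{"doc":{"fields":"[{\"name\":\"@timestamp\",\"type\":\"date\",\"count\":0,\"scripted\":false,\"indexed\":true,\"analyzed\":false,\"doc_values\":true}]"}}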

Can you compare what you see when you follow these steps to what is in the gif and let me know what's different?

Thanks!

Hello Chris,

Thank you for your step-by-step demonstration – it enabled me to finally locate the root cause of the problem and fix the issue. As it turned out, Kibana was indeed correctly trying to fix itself – the problem was an incorrect reverse proxy setup.

While checking subsequent XHR requests against our ELK setup, I noticed that the POST logstash-* request at the end of your flow appeared as type text instead of xhr, with a response body of ok.

This looks exactly like the successful response Logstash returns when log events are sent to it.
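For comparison, a direct request to the Logstash HTTP endpoint produces the same kind of response (illustrative trace; the payload is made up):

POST /log
{"message":"test event"}

200 OK
ok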

Our reverse proxy setup currently consists of two endpoints handled by the nginx-ingress controller [1], described by the following Kubernetes ingress config:

spec:
  rules:
  - host: domain
    http:
      paths:
      - backend:
          serviceName: logstash
          servicePort: 5000
        path: /log
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /path/to/dashboard/

This translates to the following nginx configuration:

location ~* /log {
  # proxy settings ...

  rewrite /log/(.*) /$1 break;
  rewrite /log / break;
  proxy_pass http://default-logstash-5000;
}

location ~* /path/to/dashboard/ {
  # proxy settings ...

  rewrite /path/to/dashboard/(.*) /$1 break;
  rewrite /path/to/dashboard/ / break;
  proxy_pass http://default-kibana-5601;
}

The exact problem is that the location ~* /log directive is a partial (substring) regex match, and nginx evaluates regex locations in configuration order, using the first one that matches. The /log block therefore wins for:

POST /path/to/dashboard/es_admin/.kibana/index-pattern/logstash-*

because the URI contains /log inside /logstash-*. Such requests are routed to Logstash instead of Kibana (hence the ok response body), disturbing the proper request flow and, in effect, distorting the user interface.

Available solutions are:

  1. making sure the location ~* /path/to/dashboard/ match is performed first - however, I am not aware of any way to enforce the order of ingress rules in the generated nginx.conf file; or
  2. using the /log$ path instead of /log in the ingress settings to enforce a suffix match for Logstash - still problematic, as it will allow /anything/here/zzlog, but it does solve the issue; or
  3. enforcing an exact path match for Logstash with nginx's location = /log directive (see the sketch after this list) - this is the proper solution, but as of Nov 14, 2017, the nginx-ingress controller does not support this feature; there is an open pull request that implements it.
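For illustration, a hand-written sketch of the location blocks that options 2 and 3 would need (my own rendering; the 0.8.3 controller cannot generate the exact-match form yet):

# option 2: suffix (regex) match
location ~* /log$ {
  # proxy settings ...

  rewrite /log / break;
  proxy_pass http://default-logstash-5000;
}

# option 3: exact match
location = /log {
  # proxy settings ...

  rewrite /log / break;
  proxy_pass http://default-logstash-5000;
}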

Thank you @chrisronline and @cjcenizal for your help!

[1] gcr.io/google_containers/nginx-ingress-controller:0.8.3

@mg6

Great to hear! Very nice debugging there. Reverse proxies are always a tricky thing but I'm glad you were able to resolve the issue.

Awesome! Glad you could sort it out! Nice work @chrisronline. :smile:
