HTTP 500 when logging into Kibana

Hello all,

I have just come up against an issue logging into Kibana following a rebuild of our new PoC 7.2.0 stack. I have done some basic investigation, but the errors in the logs do not appear to be very helpful in identifying the problem.

The stack is a three-node Elasticsearch cluster fronted by an AWS load balancer, which Kibana uses to communicate with the cluster. We also have two Logstash nodes which act as the sole front end for event enrichment and ingestion into the cluster.

We have native and AD realms configured on Elasticsearch, and I have confirmed via curl that authentication at the Elasticsearch level is working as expected for the native users (of which kibana is one). I have also confirmed that the Logstash nodes are ingesting data as expected and our indices are growing with log data from Logstash.
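For reference, the curl check was along these lines (the load balancer hostname and password below are placeholders); the _security/_authenticate endpoint returns the username, roles and realm when the credentials are accepted:

curl -u kibana:<password> "https://es-lb.nonprod.example:9200/_security/_authenticate?pretty"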

The issue is that when attempting to log in to Kibana (with either a native or AD user) I get the "Oops! Error. Try again." message and the following in the journal:

Oct 02 10:44:56 kibana-infratest-platform kibana[32123]: {"type":"error","@timestamp":"2019-10-02T09:44:56Z","tags":[],"pid":32123,"level":"error","error":{"message":"Cannot destructure property `updated_at` of 'undefined' or 'null'.","name":"TypeError","stack":"TypeError: Cannot destructure property `updated_at` of 'undefined' or 'null'.\n    at SavedObjectsRepository.get (/usr/share/kibana/src/legacy/server/saved_objects/service/lib/repository.js:567:18)\n    at process._tickCallback (internal/process/next_tick.js:68:7)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/security/v1/login","path":"/api/security/v1/login","href":"/api/security/v1/login"},"message":"Cannot destructure property `updated_at` of 'undefined' or 'null'."}
Oct 02 10:44:56 kibana-infratest-platform kibana[32123]: {"type":"response","@timestamp":"2019-10-02T09:44:56Z","tags":[],"pid":32123,"method":"post","statusCode":500,"req":{"url":"/api/security/v1/login","method":"post","headers":{"host":"127.0.0.1:5601","connection":"close","content-length":"77","user-agent":"Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0","accept":"application/json, text/plain, */*","accept-language":"en-GB,en;q=0.5","accept-encoding":"gzip, deflate, br","referer":"https://kibana-infratest-platform.nonprod.eu-west-2.aws.*******/login?next=%2F","content-type":"application/json;charset=utf-8","kbn-version":"7.2.0"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1","referer":"https://kibana-infratest-platform.nonprod.eu-west-2.aws.********/login?next=%2F"},"res":{"statusCode":500,"responseTime":22,"contentLength":9},"message":"POST /api/security/v1/login 500 22ms - 9.0B"}

I have run a few netstats and Kibana is keeping multiple established connections to the Elasticsearch cluster, so I don't believe networking is at fault. I also found another post here which pointed towards file ownership issues as a possible cause; however, after chowning the "optimize" directory and restarting Kibana I still see the same issue.
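For reference, those checks were roughly the following (the path is for a default package install and the port depends on what the load balancer listens on, so treat these as illustrative):

ss -tnp | grep ':9200'
sudo chown -R kibana:kibana /usr/share/kibana/optimize
sudo systemctl restart kibana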

This issue has only manifested since rebuilding the cluster (Terraform/Ansible), and we haven't changed the Ansible for Kibana at all. We have been updating the Logstash and Elasticsearch Ansible to let them manage ILM, and thought this might be due to a missing index template; however, even reverting the templates to the format of a working cluster does not resolve it.

Has anyone else seen this before? Are there any additional areas / log files / config I can check to try and fix this?

Edit:
Just queried Kibana's /api/status and found the spaces plugin is red (the query I used is shown after the output):

{
        "id": "plugin:spaces@7.2.0",
        "state": "red",
        "icon": "danger",
        "message": "Cannot destructure property `updated_at` of 'undefined' or 'null'.",
        "uiColor": "danger",
        "since": "2019-10-02T09:58:52.549Z"
      },
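This is roughly how the status above was pulled, run on the Kibana host and filtered down to the non-green entries (depending on your security setup you may also need to pass credentials with -u):

curl -s "http://localhost:5601/api/status" | jq '.status.statuses[] | select(.state != "green")'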

Thanks,
Ben

All,

We have managed to resolve this issue. The cause was the index templates we were applying: they disabled the _source field for all indices, including the .kibana indices.
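To illustrate the kind of template that caused it (a simplified sketch, not our actual template; the name, host and credentials are placeholders), a catch-all pattern like this also matches the .kibana indices and strips their _source, which would explain why the saved objects repository falls over destructuring updated_at from a document body that isn't there:

curl -u elastic:<password> -H 'Content-Type: application/json' -XPUT "https://es-lb.nonprod.example:9200/_template/logs" -d '
{
  "index_patterns": ["*"],
  "mappings": {
    "_source": { "enabled": false }
  }
}'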

To test, we updated the mapping to enable the _source field, deleted any .kibana* index, and restarted Kibana so it recreated its indices. Once this was done, login was fine again.
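Roughly the commands for that test, assuming the template change is already in place and wildcard index deletes are allowed (host and credentials are placeholders):

curl -u elastic:<password> -XDELETE "https://es-lb.nonprod.example:9200/.kibana*"
sudo systemctl restart kibana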

For a long-term fix we will change the order and patterns of our index templates to be more specific to the indices we are using for log data, so that the _source field remains enabled for the Kibana indices.
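Something along these lines is what we have in mind, scoping the template pattern (and order, for precedence) so it only touches the log indices and never matches .kibana* (name, pattern, host and credentials are all illustrative):

curl -u elastic:<password> -H 'Content-Type: application/json' -XPUT "https://es-lb.nonprod.example:9200/_template/logstash_logs" -d '
{
  "index_patterns": ["logstash-*"],
  "order": 10,
  "mappings": {
    "_source": { "enabled": false }
  }
}'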
