Unable to write index pattern

Hi, all.

We've updated our stack to version 6.4.0 and we're now seeing a weird new issue with Kibana's Discover app.

Steps to reproduce:

  1. Open Discover app, open any index (the document count doesn't seem to matter).
  2. Start adding fields from the "Available fields" panel.
  3. If you don't pause for 2-3 seconds after each field, the following error is displayed via a standard Kibana popup: "Unable to write index pattern! Refresh the page to get the most up to date changes for this index pattern."
  4. The field is still added to the main table, and there are no further issues with it.

When the error is displayed, this message is written to the browser console:
"Possibly unhandled rejection: {"res":{},"body":{"message":"[doc][index-pattern:waslogs-*]: version conflict, current version [3162] is different than the one provided [3161]: [version_conflict_engine_exception] [doc][index-pattern:waslogs-*]: version conflict, current version [3162] is different than the one provided [3161], with { index_uuid=\"5-txIHCOSTGiHlQyo4Dz0A\" & shard=\"0\" & index=\".kibana-6\" }","statusCode":409,"error":"Conflict"}}"

Here is the full browser console output dump.

And here is what Kibana writes to its own log:

Aug 27 13:28:05 <hostname> kibana[2214]: {"type":"response","@timestamp":"2018-08-27T10:28:05Z","tags":[],"pid":2214,"method":"put","statusCode":409,"req":{"url":"/api/saved_objects/index-pattern/waslogs-*","method":"put","headers":{"host":"kibana","connection":"close","content-length":"10145","origin":"http://<Nginx balancer>","kbn-version":"6.4.0","user-agent":"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://<Nginx balancer>/app/kibana","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9"},"remoteAddress":"<Nginx balancer>","userAgent":"<Nginx balancer>","referer":"http://<Nginx balancer>/app/kibana"},"res":{"statusCode":409,"responseTime":198,"contentLength":9},"message":"PUT /api/saved_objects/index-pattern/waslogs-* 409 198ms - 9.0B"}

Kibana receives an HTTP 409 Conflict response from Elasticsearch; I see the same status code in the Nginx load balancer access log.

Elasticsearch is completely silent when the error occurs.

I was able to reproduce this with a clean Kibana install (no plugins, latest 6.4.0 tarball from elastic.co) and a direct connection from the Kibana instance to one of our Elasticsearch nodes.
Recreating the index pattern didn't help.

We've never seen this before. It doesn't appear to break anything, but the error is likely to confuse our users.

Is anybody else getting the same errors after upgrading to 6.4.0?

So I've tried creating a new Kibana index from the default template that ships with it, reindexing our current Kibana index, and switching one of our Kibana instances over to it. This made no difference either way.

Hi @KBuev, could you post this as a bug on our GitHub repo? It sounds like updates to your index pattern are taking exceptionally long, and as a result you're getting version conflicts because the second update is sent before the first completes.

On the positive side, the only update we make when you click those "Add" buttons is an increment of the field's popularity counter, which isn't particularly important or necessary for Kibana to function. However, I'd still like to look into this more, and we can follow up on the GitHub ticket.
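The race described here can be sketched as a toy simulation (plain shell, no real Elasticsearch calls) of the optimistic concurrency check that produces the 409:

```shell
# Toy model of Elasticsearch's version check: an update succeeds only if
# the version the client supplies matches the document's current version.
current=3161

update() {
  # update <provided_version>
  if [ "$1" -eq "$current" ]; then
    current=$((current + 1))
    echo "200 OK, new version [$current]"
  else
    echo "409 Conflict: current version [$current] differs from provided [$1]"
  fi
}

update 3161   # first "Add field" click: succeeds, version becomes 3162
update 3161   # second click sent before the first response arrived: conflict
```

Both clicks read version 3161 from the browser's copy of the index pattern, so whichever update lands second is rejected, exactly like the version_conflict_engine_exception in the console output above.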

Yep, here it is.

We don't see any indexing latency spikes, and stats from the Kibana index confirm this. At the same time, I now see that the POST requests that update the index pattern are indeed taking 0.5 to 1 second to complete.

So this might actually be an issue with some of the devices or software between Kibana and Elasticsearch.
I'll look into that, thank you for pointing me in the right direction.


I've added some details to the GitHub issue.

I did some research.

So the culprit here is clearly document update latency.
I've made a copy of the Kibana index with a single shard and no replicas, and captured some sysdig data as well as a JFR recording from the JVM running the ES node that holds the shard.
It turns out the index pattern update can take anywhere from 100 ms to 5 s. There is no evidence of any bottleneck at the JVM or OS level: no excessive locking, no long GC pauses, no IO or network latency, no weird NUMA behaviour, etc.
All other indexing operations in the cluster and on this particular node perform well.

I've noticed that all the index pattern update requests generated by Kibana have the refresh parameter set to wait_for. According to the docs, this means the request will wait for the next index refresh. So if the index refresh interval is large enough (5 seconds for the old index) and the update request lands just after a refresh, the index pattern update may take a long time. Even the default refresh interval of 1 second causes problems here if the index pattern is updated frequently enough.

Here you can see the difference between two requests updating the same doc, with refresh set to wait_for and to true:

$ curl -s -w "%{time_total}" -o out -XPOST -H "content-type: application/json" "http://elasticsearch:9200/.kibana-6/doc/index-pattern%3Aa2e363a0-a9eb-11e8-8079-53a65e9a939b/_update?version=39&refresh=wait_for" -d@test.json
1.273

...
modify test.json so that the update is not a noop, increment version, set refresh to true
...

$ curl -s -w "%{time_total}" -o out -XPOST -H "content-type: application/json" "http://elasticsearch:9200/.kibana-6/doc/index-pattern%3Aa2e363a0-a9eb-11e8-8079-53a65e9a939b/_update?version=40&refresh=true" -d@test.json
0.025

Wouldn't it make sense to force a refresh every time the index pattern is updated, given that the Kibana index is usually very small?
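Related to that question: since the worst-case stall with refresh=wait_for is bounded by the index's refresh_interval, another mitigation (hypothetical, untested here) would be lowering that setting on the Kibana index via PUT /.kibana-6/_settings with a body like:

```json
{
  "index": {
    "refresh_interval": "200ms"
  }
}
```

The 200ms value is just an illustrative example; as noted above, even the default 1-second interval can still produce conflicts if fields are added quickly, so this narrows the window rather than closing it.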

I've implemented a workaround for now: an if block in a separate Nginx location that rewrites the $args variable for this particular case:

location ^~ /.kibana-v6 {
  if ($args ~ ^version=([0-9]+)&refresh=wait_for$) {
    set $args version=$1&refresh=true;
  }

  # ... proxy_pass, etc.
}

This solved the problem for us: the index pattern update requests no longer stall, and there are no more errors when a user adds multiple fields from the side panel.
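As a quick sanity check of that rewrite (sed standing in for Nginx here, applying the same regex and substitution to a sample query string):

```shell
# Emulate the Nginx "if" rewrite with sed to confirm the substitution.
args='version=39&refresh=wait_for'
rewritten=$(printf '%s' "$args" | sed -E 's/^(version=[0-9]+)&refresh=wait_for$/\1\&refresh=true/')
echo "$rewritten"   # version=39&refresh=true
```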

I'm seeing this same issue. It came to a head last night when we needed to refresh the field list in Kibana and it kept timing out.

This is the discussion I started on that: Cannot refresh field list

We do use the free version of SearchGuard; I'm not sure if that is contributing in any way.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.