I'm on 6.7, and this issue has been following me since the upgrade.
After each fresh start of the kibana service, it tries to migrate the .kibana index from .kibana_<n> to .kibana_<n+1>. This is odd, and I would not have noticed it if I hadn't been doing so much maintenance on the machine these past few weeks. It wouldn't be so annoying if the migration didn't time out every time and trigger a restart of the service with a fatal error, exactly as it did right after the upgrade. Because it times out and the service comes back under a new PID, I presume the migration stays locked by the old PID and can't proceed, and I get an exception like the following:
Apr 18 15:37:08 es1 kibana[18180]: {"type":"log","@timestamp":"2019-04-18T13:37:08Z","tags":["warning","migrations"],"pid":18180,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_7 and restarting Kibana."}
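In case it matters, here is roughly how I check the state of the .kibana indices and alias before touching anything (assuming Elasticsearch on localhost:9200 with no authentication; adjust as needed):
# list all .kibana* indices and see which one the .kibana alias points to
curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'
curl -s 'http://localhost:9200/_cat/aliases/.kibana?v'
# what the log message suggests: drop the half-created target index, then restart Kibana
curl -s -X DELETE 'http://localhost:9200/.kibana_7'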
Worse still, even if I delete the old index and create an alias pointing .kibana to the new index created by the migration process, I now get:
Index .kibana_10 belongs to a version of Kibana that cannot be automatically migrated. Reset it or use the X-Pack upgrade assistant
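For the record, the "delete the old index and alias the new one" step was roughly the following (index names are just examples; in my case the migration target was .kibana_10):
# drop the pre-migration index, then point the .kibana alias at the new one
curl -s -X DELETE 'http://localhost:9200/.kibana_9'
curl -s -X POST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' -d '
{
  "actions": [
    { "add": { "index": ".kibana_10", "alias": ".kibana" } }
  ]
}'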
I tried deleting all the .kibana* indices so that the index would be regenerated from scratch (commands after the log excerpt below), but that just brings me back to the original problem:
May 17 14:19:14 es1 kibana[9612]: {"type":"log","@timestamp":"2019-05-17T12:19:14Z","tags":["info","migrations"],"pid":9612,"message":"Creating index .kibana_1."}
May 17 14:19:44 es1 kibana[9612]: {"type":"log","@timestamp":"2019-05-17T12:19:44Z","tags":["status","plugin:spaces@6.7.1","error"],"pid":9612,"state":"red","message":"Status changed from yellow to red - Request Timeout after 30000ms","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
May 17 14:19:44 es1 kibana[9612]: {"type":"log","@timestamp":"2019-05-17T12:19:44Z","tags":["warning"],"pid":9612,"message":"Error loading maps telemetry: Error: Request Timeout after 30000ms"}
May 17 14:19:44 es1 kibana[9612]: {"type":"log","@timestamp":"2019-05-17T12:19:44Z","tags":["fatal","root"],"pid":9612,"message":"{ Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15\n at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n status: undefined,\n displayName: 'RequestTimeout',\n message: 'Request Timeout after 30000ms',\n body: undefined,\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { statusCode: 503,\n error: 'Service Unavailable',\n message: 'Request Timeout after 30000ms' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }"}
May 17 14:19:44 es1 kibana[9612]: FATAL Error: Request Timeout after 30000ms
May 17 14:19:44 es1 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
May 17 14:19:44 es1 systemd[1]: kibana.service: Unit entered failed state.
May 17 14:19:44 es1 systemd[1]: kibana.service: Failed with result 'exit-code'.
May 17 14:19:45 es1 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
May 17 14:19:45 es1 systemd[1]: Stopped Kibana.
May 17 14:19:45 es1 systemd[1]: Started Kibana.
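The "delete everything" step I mentioned above was simply this (it assumes wildcard deletes are allowed, i.e. action.destructive_requires_name has not been set to true):
# wipe every .kibana* index so Kibana recreates .kibana_1 from scratch on the next start
curl -s -X DELETE 'http://localhost:9200/.kibana*'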
And then, when it tries to create the index again, it tells me another Kibana instance is already migrating the index. Again.
Isn't there a way to see where the index creation/migration is stuck? It can't be a performance issue anymore: I changed the configuration so that Elasticsearch has access to more RAM, and I can already see the effect.
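In case it helps, here is what I have been running to try to see where it hangs, though I'm not sure these are the right places to look:
# is the new index stuck in red/yellow, or is the cluster busy with something else?
curl -s 'http://localhost:9200/_cluster/health/.kibana_1?pretty'
curl -s 'http://localhost:9200/_cat/pending_tasks?v'
# any long-running tasks on the cluster?
curl -s 'http://localhost:9200/_tasks?detailed=true&pretty'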
Maybe there is a way to raise the timeout, say to 1 or 2 minutes?
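For example, is this the right knob? I assume the 30000ms in the fatal error comes from elasticsearch.requestTimeout, whose default is 30000:
# kibana.yml -- raise the Elasticsearch request timeout from 30 s to 2 minutes
elasticsearch.requestTimeout: 120000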
And is it normal behaviour in the first place that it tries to migrate the existing index at every service restart?