I'm currently upgrading my Elastic Stack from 6.3.2 to 6.7. For some reason, every time I start the Kibana service after the upgrade, the initial saved objects migration times out:
Apr 18 15:36:36 es1 kibana[18082]: {"type":"log","@timestamp":"2019-04-18T13:36:36Z","tags":["warning"],"pid":18082,"message":"Error loading maps telemetry: Error: Request Timeout after 30000ms"}
Apr 18 15:36:36 es1 kibana[18082]: {"type":"log","@timestamp":"2019-04-18T13:36:36Z","tags":["fatal","root"],"pid":18082,"message":"{ Error: Request Timeout after 30000ms\n at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7)\n at ontimeout (timers.js:436:11)\n at tryOnTimeout (timers.js:300:5)\n at listOnTimeout (timers.js:263:5)\n at Timer.processTimers (timers.js:223:10)\n status: undefined,\n displayName: 'RequestTimeout',\n message: 'Request Timeout after 30000ms',\n body: undefined,\n isBoom: true,\n isServer: true,\n data: null,\n output:\n { statusCode: 503,\n payload:\n { statusCode: 503,\n error: 'Service Unavailable',\n message: 'Request Timeout after 30000ms' },\n headers: {} },\n reformat: [Function],\n [Symbol(SavedObjectsClientError)]: 'SavedObjectsClient/esUnavailable' }"}
Apr 18 15:36:36 es1 kibana[18082]: FATAL Error: Request Timeout after 30000ms
Apr 18 15:36:37 es1 systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Apr 18 15:36:37 es1 systemd[1]: kibana.service: Unit entered failed state.
Apr 18 15:36:37 es1 systemd[1]: kibana.service: Failed with result 'exit-code'.
Apr 18 15:36:37 es1 systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Apr 18 15:36:37 es1 systemd[1]: Stopped Kibana.
Apr 18 15:36:37 es1 systemd[1]: Started Kibana.
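The 30000ms matches Kibana's default elasticsearch.requestTimeout, so I assume (though I haven't verified that this is the timeout being hit) it could be raised in /etc/kibana/kibana.yml to give the migration more time:

    # /etc/kibana/kibana.yml
    # Raise the Elasticsearch request timeout from the 30000ms default (value in ms)
    elasticsearch.requestTimeout: 120000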
Following that, when the restarted Kibana tries to migrate the index again, it reports that another Kibana instance appears to be migrating the .kibana_7 index:
Apr 18 15:36:47 es1 kibana[18180]: {"type":"log","@timestamp":"2019-04-18T13:36:47Z","tags":["info","migrations"],"pid":18180,"message":"Creating index .kibana_7."}
Apr 18 15:37:08 es1 kibana[18180]: {"type":"log","@timestamp":"2019-04-18T13:37:08Z","tags":["warning","migrations"],"pid":18180,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_7 and restarting Kibana."}
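Based on the hint in that last message, I assume the manual cleanup would be something along these lines (the index name .kibana_7 and the localhost:9200 address are taken from my logs and setup; adjust as needed):

    sudo systemctl stop kibana
    # Delete the half-created migration target index named in the log above
    curl -XDELETE 'http://localhost:9200/.kibana_7'
    sudo systemctl start kibana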
Is there a way to solve this, or at least a way for me to do that manually while the Kibana service is stopped?