Kibana migration issues after upgrade to 6.8.6

Hi All-

I recently upgraded from 5.6 to 6.8.6 and had some issues getting Kibana running. In the end I got it running by deleting all the Kibana indices, deleting the aliases to .kibana, and starting fresh. After that, I created a new index, pushed the mapping, reindexed my backup into that index, and then updated the alias, all while Kibana was running. My issue is that each time I restart, Kibana tries to 'migrate' again, the migration fails, and I have to do the whole process over.
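For reference, here is roughly the sequence I run each time. This is only a sketch with my index names; kibana-mapping.json is just a placeholder for the 6.x saved-objects mapping I exported, and the .kibana_N numbering varies per attempt:

# delete the half-migrated saved-object indices and detach the .kibana alias
curl -XDELETE 'localhost:9200/.kibana_*'
curl -XPOST 'localhost:9200/_aliases' -H 'Content-Type: application/json' -d '{"actions":[{"remove":{"index":"*","alias":".kibana"}}]}'
# (start Kibana here; it creates a fresh .kibana_1 and points .kibana at it)
# recreate the restored index with the saved-objects mapping and copy the backup in
curl -XPUT 'localhost:9200/kibana_8' -H 'Content-Type: application/json' -d @kibana-mapping.json
curl -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{"source":{"index":"kibana_bck"},"dest":{"index":"kibana_8"}}'
# swap the .kibana alias over to the restored index while Kibana is running
curl -XPOST 'localhost:9200/_aliases' -H 'Content-Type: application/json' -d '{"actions":[{"remove":{"index":".kibana_1","alias":".kibana"}},{"add":{"index":"kibana_8","alias":".kibana"}}]}'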

I haven't been able to figure out exactly what is causing it. Would anyone have some troubleshooting ideas for finding what is wrong in my index that doesn't seem to be migrating correctly?

I think the general error I'm getting on startup looks like this:

{"type":"log","@timestamp":"2020-05-05T23:36:05Z","tags":["info","migrations"],"pid":25025,"message":"Creating index .kibana_9."}
> {"type":"log","@timestamp":"2020-05-05T23:36:11Z","tags":["info","migrations"],"pid":25025,"message":"Migrating kibana_8 saved objects to .kibana_9"}
> {"type":"log","@timestamp":"2020-05-05T23:36:15Z","tags":["fatal","root"],"pid":25025,"message":"{ Error: mapping set to strict, dynamic introduction of [index] within [visualization.kibanaSavedObjectMeta] is not allowed\n    at Object.write (/USUNRTGPAP011/home/rtprod/bin/elk/kibana/kibana-6.8.6-linux-x86_64/src/server/saved_objects/migrations/core/elastic_index.js:110:23)\n  detail:\n   { index:\n      { _index: '.kibana_9',\n        _type: 'doc',\n        _id: 'visualization:3bee6850-cab9-11e7-a674-1fe88047baa6',\n        status: 400,\n        error: [Object] } },\n  isBoom: true,\n  isServer: true,\n  data: null,\n  output:\n   { statusCode: 500,\n     payload:\n      { statusCode: 500,\n        error: 'Internal Server Error',\n        message: 'An internal server error occurred' },\n     headers: {} },\n  reformat: [Function],\n  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/generalError' }"}

Can you run a _cat/indices request and post the result here? (You can hide your personal indices; I'm just interested in the system ones.)
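Something like this should do it (assuming curl against a local node; the ?v flag adds the column headers):

curl -s 'localhost:9200/_cat/indices?v'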

Hi -

I tried to filter out a bunch of our own indices; let me know if that has what you are looking for.

Thanks!

(columns: health status index uuid pri rep docs.count docs.deleted store.size pri.store.size)
green open tt-versions                               50Y_k5p9TtCDI_WNa1odKA 5  1         0        0    2.5kb    1.2kb
green open .scripts-reindexed-v5                     QpshHutSTjaO8anynMoWFg 1 63         1        0  167.6kb    2.6kb
green open ma-exec-rec-20180315                      OY7IMs_dQv6d7S-SHmxc7A 5  1         0        0    2.5kb    1.2kb
green open ma-hid                                    sD90gYZ1SWyG-tzqHLHo6A 5  1         0        0    2.5kb    1.2kb
green open kibana_8                                  vleH1ysSQ5e37EubVnSjvg 5  1      1294       57   13.4mb    6.6mb
green open unix-fimacis-2017.nov                     5n0hbdblSSu1PGqGZ4sPVw 1  0   9378894      737    1.6gb    1.6gb
green open kibana_bck                                PtscwDONSjau0Ou4iz8o5Q 5  1      1339        0   11.3mb    5.6mb
green open kibana_bck_after_upg                      lQE16L-WRgCXmN3p-GC9JQ 5  1      1347        0   11.5mb    5.7mb
green open searchguard                               sF5eM_24SeCqCmb8bhJ6jw 1 63         6        0    2.2mb   35.2kb
green open rec-20180319                              ND56n3EOR7-nzw9PX8Pmtw 5  1   1506412        0  486.9mb  243.4mb
green open iongtw_order_template                     IJI3v0DPQ56ujI9OqDWiPw 1  1         0        0     520b     260b
green open recoveryconsole-reindexed-v5              c2jCqufOTeiVduoMpmynsw 5  1         0        0    2.5kb    1.2kb
green open .kibana_1                                 pBRcAN0kSU-UXey5iMTQbQ 1  1         0        0     522b     261b
green open recipe-reindexed-v5                       kRpDU7LcS3ejdtRX1AFuAA 5  1         0        0    2.5kb    1.2kb
green open ma-report-quotationratio                  s3VbBBvQTqOZ3RCEt-h4YQ 1  5     10079        0   12.3mb      2mb
green open wp-admin-reindexed-v5                     SPbb3aXfRv6e87m4TGe5fA 5  1         0        0    2.5kb    1.2kb
green open exec-rec-index                            l4zG5aD0RDOvCK508e3fWQ 5  1   1353631   253515  541.1mb  270.5mb
green open manager-reindexed-v5                      8M2V9eCVQral682Zc-NKNA 5  1         0        0    2.5kb    1.2kb

Hello

Bumping this thread, as I am seeing the same issue on Kibana restart with my prod cluster after following the same upgrade path.

Thanks

I'm trying to figure out why there is a kibana_8 index. Did you change the default Kibana index in your kibana.yml?

Hi-

This was just a manual copy/reindex of our Kibana 5.x index, and it's the one that keeps failing to migrate. So now, whenever we have to restart Kibana, we manually remove the .kibana_N indices, remove the alias from .kibana, and start fresh. Kibana creates a new .kibana_1 and aliases .kibana to it, and then we just point the alias back at kibana_8. That same index keeps failing the migration at startup, but it works if we change the alias while Kibana is already running.
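After swapping the alias back, a quick way to confirm what .kibana points at (a sketch, assuming a local node):

curl -s 'localhost:9200/_cat/aliases/.kibana?v'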

Settings are default, I think, because it's commented out:

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
