Kibana - Enterprise Search Connector Issue

Hi all, I was trying to migrate my connector configurations over from my testing environment into my live environment. However, every time I enter the configuration page of my connector, it shows the error "An unexpected error occurred". When I took a look at my Kibana logs, this was the error shown each time I entered the connector's configuration page:
[2024-05-24T17:43:20.823+08:00][ERROR][plugins.enterpriseSearch] An error occurred while resolving request to https://localhost:5601/internal/enterprise_search/connectors/available_indices?search_query= : Search encountered an error.

Does anyone have an idea of what this error means and how I could troubleshoot it?

Screenshot of the error in Kibana. After this is shown, Kibana automatically restarts.

Hi @longansoju! Sure, let's take a look at your issue. Could you clarify how you were migrating the connector configurations between environments? Were you simply creating a new connector of the same type? Additionally, what stack version are you using?

Hi! Thanks for the reply.

Yeap, I created a new connector of the same type (MySQL). I'm currently using v8.13.3.

Thanks! Can you provide us with a bit more context? Can you share your steps so that we can reproduce the issue in 8.13.3? Do you have many other running connectors? Did you create the new connector in the same cluster, or is it a new cluster?

It's a different cluster, and I don't have any other connectors running.

Steps taken were:

  1. Go to Enterprise Search - Connectors
  2. Create new connector
  3. Select MySQL Connector
  4. Create connector

The moment I entered the configuration page to attach an index, the following error popped up and my Kibana restarted.

I looked into my Kibana logs and this line appeared every time I entered the connector's configuration page:

[2024-05-27T19:34:07.886+08:00][ERROR][plugins.enterpriseSearch] An error occurred while resolving request to https://localhost:5601/internal/enterprise_search/connectors/available_indices?search_query=: Search encountered an error.

Other than this, none of my Elasticsearch nodes logged any errors.

Hmm, that's odd. I tried replicating your issue on a fresh 8.13.3 deployment, but I couldn't. Have you tried removing the connector and recreating it in the UI? Are you running your cluster locally with Docker Compose?

If this doesn't resolve the issue, I suggest cleaning the internal index used for managing connectors. You can do this with the following command:

DELETE .elastic-connectors-v1

The index will be recreated when you create a new connector.
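If you want to double-check the state of the hidden connector indices before and after, a dot-prefixed wildcard in the dev console should list them (the index names here are the ones used as of 8.13; adjust if yours differ):

GET _cat/indices/.elastic-connectors*?v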

Yeap, I've tried removing the connector and recreating it in the UI, but it doesn't work.
I'm running it locally but not with Docker Compose; my cluster just runs on the deb packages installed using apt-get.

I tried cleaning the internal index with "DELETE .elastic-connectors-v1", but when I tried creating a new connector, the connector page loaded for quite a while and eventually showed an error.

Kibana logs showed this:

[2024-05-27T20:55:15.280+08:00][ERROR][plugins.enterpriseSearch] An error occurred while resolving request to https://localhost:5601/internal/enterprise_search/stats/sync_jobs?isCrawler=false: Search encountered an error.
[2024-05-27T20:55:15.280+08:00][ERROR][plugins.enterpriseSearch] ResponseError: search_phase_execution_exception
	Root causes:
		no_shard_available_action_exception: null
    at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:479:27)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)
    at Client.SearchApi [as search] (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/search.js:66:12)
    at fetchSyncJobsStats (/usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/lib/stats/get_sync_jobs.js:19:32)
    at /usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/routes/enterprise_search/stats.js:36:18
    at /usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/utils/elasticsearch_error_handler.js:21:14
    at Router.handle (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/src/router.js:171:30)
    at handler (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/src/router.js:113:50)
    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)
    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)
    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)
    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)

When I refreshed my connector page, the error appeared again.

To add on, I was eventually able to create the connector, but upon entering the configuration page, I was unable to create and attach an index for it.

I also enabled Kibana's DEBUG logs to compare the logs between my LIVE and UAT environments. Hope the following logs will help us identify the root cause.

LIVE:

[2024-05-28T14:50:22.292+08:00][DEBUG][http.server.response] POST /internal/enterprise_search/connectors 200 18618ms - 1.6KB
[2024-05-28T14:50:22.401+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content"}
[2024-05-28T14:50:22.402+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content"}
[2024-05-28T14:50:22.405+08:00][DEBUG][elasticsearch.query.data] 200 - 786.0B
GET /.kibana_security_session/_doc/xmXiIjAciHQBEu0iifn83%2B%2BEIl4I%2FGLR8OQqBp9b5ls%3D
[2024-05-28T14:50:22.405+08:00][DEBUG][elasticsearch.query.data] 200 - 786.0B
GET /.kibana_security_session/_doc/xmXiIjAciHQBEu0iifn83%2B%2BEIl4I%2FGLR8OQqBp9b5ls%3D
[2024-05-28T14:50:22.414+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/connectors/xfz1vY8BtY0-R4gmEQ6P.
[2024-05-28T14:50:22.414+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T14:50:22.415+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/stats.
[2024-05-28T14:50:22.415+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T14:50:22.416+08:00][DEBUG][elasticsearch.query.data] 200 - 355.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T14:50:22.416+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T14:50:22.416+08:00][DEBUG][plugins.security.session.QqBp9b5ls=] Successfully extended existing session.
[2024-05-28T14:50:22.418+08:00][DEBUG][elasticsearch.query.data] 200 - 355.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T14:50:22.418+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T14:50:22.418+08:00][DEBUG][plugins.security.session.QqBp9b5ls=] Successfully extended existing session.
[2024-05-28T14:50:22.426+08:00][DEBUG][elasticsearch.query.data] 200 - 1.6KB
GET /_connector/xfz1vY8BtY0-R4gmEQ6P
[2024-05-28T14:50:22.428+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/connectors/xfz1vY8BtY0-R4gmEQ6P 200 27ms - 1.7KB
[2024-05-28T14:50:22.582+08:00][DEBUG][elasticsearch.query.data] 200 - 992.0B
GET /_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip
[2024-05-28T14:50:22.696+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content"}
[2024-05-28T14:50:22.699+08:00][DEBUG][elasticsearch.query.data] 200 - 786.0B
GET /.kibana_security_session/_doc/xmXiIjAciHQBEu0iifn83%2B%2BEIl4I%2FGLR8OQqBp9b5ls%3D
[2024-05-28T14:50:22.708+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/indices/testing-connector/exists.
[2024-05-28T14:50:22.708+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T14:50:22.709+08:00][DEBUG][elasticsearch.query.data] 200 - 355.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T14:50:22.709+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T14:50:22.709+08:00][DEBUG][plugins.security.session.QqBp9b5ls=] Successfully extended existing session.
[2024-05-28T14:50:22.713+08:00][DEBUG][elasticsearch.query.data] 404 - 437.0B
HEAD /testing-connector
[2024-05-28T14:50:22.715+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/indices/testing-connector/exists 200 19ms - 16.0B
[2024-05-28T14:50:23.112+08:00][DEBUG][elasticsearch.query.data] 200 - 628.0B
POST /.kibana_8.13.3/_update/enterprise_search_telemetry%3Aenterprise_search_telemetry?refresh=wait_for&require_alias=true
{"script":{"source":"\n              for (int i = 0; i < params.counterFieldNames.length; i++) {\n                def counterFieldName = params.counterFieldNames[i];\n                def count = params.counts[i];\n\n                if (ctx._source[params.type][counterFieldName] == null) {\n                  ctx._source[params.type][counterFieldName] = count;\n                }\n                else {\n                  ctx._source[params.type][counterFieldName] += count;\n                }\n              }\n              ctx._source.updated_at = params.time;\n            ","lang":"painless","params":{"counts":[1],"counterFieldNames":["ui_viewed.configuration"],"time":"2024-05-28T06:50:22.419Z","type":"enterprise_search_telemetry"}},"upsert":{"enterprise_search_telemetry":{"ui_viewed.configuration":1},"type":"enterprise_search_telemetry","managed":false,"coreMigrationVersion":"8.8.0","updated_at":"2024-05-28T06:50:22.419Z"},"_source":true}
[2024-05-28T14:50:23.113+08:00][DEBUG][http.server.response] PUT /internal/enterprise_search/stats 200 711ms - 16.0B
[2024-05-28T14:50:23.486+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content"}
[2024-05-28T14:50:23.490+08:00][DEBUG][elasticsearch.query.data] 200 - 786.0B

UAT:

[2024-05-28T15:04:27.147+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content/connectors/WbL6b48BqJ7h7YFefMkW/configuration"}
[2024-05-28T15:04:27.151+08:00][DEBUG][elasticsearch.query.data] 200 - 784.0B
GET /.kibana_security_session/_doc/Ip6vperVyPJXe8F1tjWaCstCR%2FH%2F5JxMzXGzG3SxYPk%3D
[2024-05-28T15:04:27.151+08:00][DEBUG][elasticsearch.query.data] 200 - 784.0B
GET /.kibana_security_session/_doc/Ip6vperVyPJXe8F1tjWaCstCR%2FH%2F5JxMzXGzG3SxYPk%3D
[2024-05-28T15:04:27.161+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/security/me.
[2024-05-28T15:04:27.162+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.163+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/connectors/WbL6b48BqJ7h7YFefMkW.
[2024-05-28T15:04:27.163+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.164+08:00][DEBUG][elasticsearch.query.data] 200 - 784.0B
GET /.kibana_security_session/_doc/Ip6vperVyPJXe8F1tjWaCstCR%2FH%2F5JxMzXGzG3SxYPk%3D
[2024-05-28T15:04:27.164+08:00][DEBUG][elasticsearch.query.data] 200 - 784.0B
GET /.kibana_security_session/_doc/Ip6vperVyPJXe8F1tjWaCstCR%2FH%2F5JxMzXGzG3SxYPk%3D
[2024-05-28T15:04:27.165+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.165+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.167+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.168+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.168+08:00][DEBUG][plugins.security.session.GzG3SxYPk=] Successfully extended existing session.
[2024-05-28T15:04:27.170+08:00][DEBUG][http.server.response] GET /internal/security/me 200 28ms - 505.0B
[2024-05-28T15:04:27.172+08:00][DEBUG][elasticsearch.query.data] 200 - 7.7KB
GET /_connector/WbL6b48BqJ7h7YFefMkW
[2024-05-28T15:04:27.175+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/stats.
[2024-05-28T15:04:27.175+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.179+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.180+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.181+08:00][DEBUG][plugins.security.session.GzG3SxYPk=] Successfully extended existing session.
[2024-05-28T15:04:27.184+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/connectors/WbL6b48BqJ7h7YFefMkW 200 40ms - 7.6KB
[2024-05-28T15:04:27.185+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/api_keys.
[2024-05-28T15:04:27.185+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.206+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.207+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.207+08:00][DEBUG][plugins.security.session.GzG3SxYPk=] Successfully extended existing session.
[2024-05-28T15:04:27.211+08:00][DEBUG][elasticsearch.query.data] 200 - 2.6KB
GET /_security/api_key?username=wongjinghan%40sea.com
[redacted]
[2024-05-28T15:04:27.213+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/api_keys 200 66ms - 2.1KB
[2024-05-28T15:04:27.328+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content/connectors/WbL6b48BqJ7h7YFefMkW/configuration"}
[2024-05-28T15:04:27.331+08:00][DEBUG][elasticsearch.query.data] 200 - 784.0B
GET /.kibana_security_session/_doc/Ip6vperVyPJXe8F1tjWaCstCR%2FH%2F5JxMzXGzG3SxYPk%3D
[2024-05-28T15:04:27.342+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/indices/testing-connector.
[2024-05-28T15:04:27.342+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.344+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.344+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.344+08:00][DEBUG][plugins.security.session.GzG3SxYPk=] Successfully extended existing session.
[2024-05-28T15:04:27.349+08:00][DEBUG][elasticsearch.query.data] 200
GET /testing-connector
[2024-05-28T15:04:27.352+08:00][DEBUG][elasticsearch.query.data] 200
GET /testing-connector/_stats
[2024-05-28T15:04:27.354+08:00][DEBUG][elasticsearch.query.data] 200 - 76.0B
GET /testing-connector/_count
[2024-05-28T15:04:27.356+08:00][DEBUG][elasticsearch.query.data] 200 - 7.7KB
GET /_connector?index_name=testing-connector
[2024-05-28T15:04:27.358+08:00][DEBUG][elasticsearch.query.data] 200
POST /.elastic-connectors-sync-jobs/_search
{"query":{"bool":{"filter":[{"term":{"connector.id":"WbL6b48BqJ7h7YFefMkW"}},{"dis_max":{"queries":[{"term":{"status":"in_progress"}},{"term":{"status":"pending"}}]}}]}}}
[2024-05-28T15:04:27.360+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/indices/testing-connector 200 31ms - 7.9KB
[2024-05-28T15:04:27.377+08:00][DEBUG][elasticsearch.query.data] 200 - 934.0B
POST /.kibana_8.13.0/_update/enterprise_search_telemetry%3Aenterprise_search_telemetry?refresh=wait_for&require_alias=true
{"script":{"source":"\n              for (int i = 0; i < params.counterFieldNames.length; i++) {\n                def counterFieldName = params.counterFieldNames[i];\n                def count = params.counts[i];\n\n                if (ctx._source[params.type][counterFieldName] == null) {\n                  ctx._source[params.type][counterFieldName] = count;\n                }\n                else {\n                  ctx._source[params.type][counterFieldName] += count;\n                }\n              }\n              ctx._source.updated_at = params.time;\n            ","lang":"painless","params":{"counts":[1],"counterFieldNames":["ui_viewed.configuration"],"time":"2024-05-28T07:04:27.183Z","type":"enterprise_search_telemetry"}},"upsert":{"enterprise_search_telemetry":{"ui_viewed.configuration":1},"type":"enterprise_search_telemetry","managed":false,"coreMigrationVersion":"8.8.0","updated_at":"2024-05-28T07:04:27.183Z"},"_source":true}
[2024-05-28T15:04:27.379+08:00][DEBUG][http.server.response] PUT /internal/enterprise_search/stats 200 233ms - 16.0B
[2024-05-28T15:04:28.300+08:00][DEBUG][execution_context] {"type":"application","name":"enterpriseSearchContent","url":"/app/enterprise_search/content/connectors/WbL6b48BqJ7h7YFefMkW/configuration"}

The main difference I noticed was that in LIVE, it sent a request to "/internal/enterprise_search/indices/testing-connector/exists", which issued a HEAD against the non-existent index and got a 404:

[2024-05-28T14:50:22.708+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/indices/testing-connector/exists.
[2024-05-28T14:50:22.708+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T14:50:22.709+08:00][DEBUG][elasticsearch.query.data] 200 - 355.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T14:50:22.709+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T14:50:22.709+08:00][DEBUG][plugins.security.session.QqBp9b5ls=] Successfully extended existing session.
[2024-05-28T14:50:22.713+08:00][DEBUG][elasticsearch.query.data] 404 - 437.0B
HEAD /testing-connector

Whereas in UAT, it sent a request to "/internal/enterprise_search/indices/testing-connector" without the /exists, and proceeded to get the index information:

[2024-05-28T15:04:27.342+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate user request to /internal/enterprise_search/indices/testing-connector.
[2024-05-28T15:04:27.342+08:00][DEBUG][plugins.security.basic.basic] Trying to authenticate via state.
[2024-05-28T15:04:27.344+08:00][DEBUG][elasticsearch.query.data] 200 - 356.0B
GET /_security/_authenticate
[redacted]
[2024-05-28T15:04:27.344+08:00][DEBUG][plugins.security.basic.basic] Request has been authenticated via state.
[2024-05-28T15:04:27.344+08:00][DEBUG][plugins.security.session.GzG3SxYPk=] Successfully extended existing session.
[2024-05-28T15:04:27.349+08:00][DEBUG][elasticsearch.query.data] 200
GET /testing-connector
[2024-05-28T15:04:27.352+08:00][DEBUG][elasticsearch.query.data] 200
GET /testing-connector/_stats
[2024-05-28T15:04:27.354+08:00][DEBUG][elasticsearch.query.data] 200 - 76.0B
GET /testing-connector/_count
[2024-05-28T15:04:27.356+08:00][DEBUG][elasticsearch.query.data] 200 - 7.7KB
GET /_connector?index_name=testing-connector
[2024-05-28T15:04:27.358+08:00][DEBUG][elasticsearch.query.data] 200
POST /.elastic-connectors-sync-jobs/_search
{"query":{"bool":{"filter":[{"term":{"connector.id":"WbL6b48BqJ7h7YFefMkW"}},{"dis_max":{"queries":[{"term":{"status":"in_progress"}},{"term":{"status":"pending"}}]}}]}}}

Hey! Are LIVE and UAT running the same stack version? Do you have indices named similarly to your connector? (It shouldn't make a difference, but worth asking.)

Also, do you have access to more debug logs that might indicate what's wrong around the time we hit this error?

https://localhost:5601/internal/enterprise_search/connectors/available_indices?search_query= : Search encountered an error.

This is the part that I'm drawn towards. Is your Elasticsearch healthy? This looks like it's saying that Elasticsearch is having shard failures under the hood. Are there any errors in your Elasticsearch logs?
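As a quick sanity check, the standard cluster health and cat shards APIs (runnable from the dev console) should show whether any shards are unassigned:

GET _cluster/health

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state

A yellow or red status, or shards stuck in UNASSIGNED, would line up with the no_shard_available_action_exception below.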

I think the no_shard_available_action_exception: null error might have been caused when we purged the .elastic-connectors-v1 index in the previous step (my suggestion :sweat_smile:), and this was causing the .../stats/sync_jobs call to fail.
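If so, listing the shards of the connector system indices specifically should confirm it (again assuming the 8.13 index names):

GET _cat/shards/.elastic-connectors*?v

That pattern also covers the .elastic-connectors-sync-jobs index that the sync jobs stats call searches (visible in the stack trace above).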

To add on, I was eventually able to create the connector.

I see that @longansoju managed to get a connector created again (so the above issue is gone), but they still seem to face the original issue (correct me if I'm wrong), so I'm curious to know if we have more logs about it.

Hi, to answer your questions:

  1. My LIVE and UAT are running different stack versions. LIVE is running 8.13.3 while UAT is running 8.13.0. Previously when my LIVE was running 8.13.0, I faced the same issue.

  2. Here are more logs that showed up right before Kibana restarted:

[2024-05-28T22:12:02.166+08:00][DEBUG][elasticsearch.query.data] 404 - 437.0B
HEAD /testing-connector
[2024-05-28T22:12:02.167+08:00][DEBUG][http.server.response] GET /internal/enterprise_search/indices/testing-connector/exists 200 21ms - 16.0B
[2024-05-28T22:12:03.314+08:00][DEBUG][elasticsearch.query.data] 200
GET /*?expand_wildcards=open&features=aliases%2Csettings&filter_path=*.aliases%2C*.settings.index.hidden%2C*.settings.index.verified_before_close
[2024-05-28T22:12:03.316+08:00][DEBUG][elasticsearch.deprecation] Elasticsearch deprecation: 299 Elasticsearch-8.13.3-617f7b76c4ebcb5a7f1e70d409a99c437c896aea "this request accesses system indices: [.security-7, .transform-internal-007], but in a future major version, direct access to system indices will be prevented by default"
Origin:kibana
Stack trace:
    at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)
    at Indices.get (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/indices.js:564:16)
    at getUnattachedIndexData (/usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/lib/indices/utils/get_index_data.js:88:27)
    at fetchUnattachedIndices (/usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/lib/indices/fetch_unattached_indices.js:21:7)
    at /usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/routes/enterprise_search/connectors.js:688:9
    at /usr/share/kibana/node_modules/@kbn/enterprise-search-plugin/server/utils/elasticsearch_error_handler.js:21:14
    at Router.handle (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/src/router.js:171:30)
    at handler (/usr/share/kibana/node_modules/@kbn/core-http-router-server-internal/src/router.js:113:50)
    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)
    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)
    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)
    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)
    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)
Query:
200
GET /*?expand_wildcards=open&features=aliases%2Csettings&filter_path=*.aliases%2C*.settings.index.hidden%2C*.settings.index.verified_before_close
[2024-05-28T22:12:03.335+08:00][DEBUG][elasticsearch.query.data] 400 - 219.0B
GET /_connector?index_name=.monitoring-es-7-2024.05.27%2C.monitoring-es-7-2024.05.28-"APPENDED BY ALL EXISTING INDICES"
[too_long_http_line_exception]: An HTTP line is larger than 4096 bytes.
[2024-05-28T22:12:08.238+08:00][INFO ][root] Kibana is starting

Is there a limit to the number of indices Elasticsearch can hold? My LIVE environment has roughly 8000 indices. My senior dev also suspected it might be due to our cluster health, but I don't think that should cause my connector's configuration page to crash Kibana, since I am still able to manually create indices.

Great, I think we've identified the root cause! It's a bug on our side. Our validation logic for the index attachment step calls the /internal/enterprise_search/connectors/available_indices endpoint. This endpoint attempts to determine existing index names that can be attached to the connector and, at one point, it calls:

GET /_connector?index_name=<....a lot of indices>
[too_long_http_line_exception]: An HTTP line is larger than 4096 bytes.

With so many indices passed as the index_name argument, the GET request becomes too large and always fails. We will prioritize fixing this issue.
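For reference, the underlying Elasticsearch call is fine with a handful of names; a small-scale equivalent of what Kibana sends under the hood (visible in your debug logs) would look like this, with placeholder index names:

GET _connector?index_name=testing-connector,another-index

It only breaks once the comma-joined list of every index in the cluster pushes the request line past the 4096-byte limit.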

In the meantime, since you already have a connector set up in your UAT environment and are familiar with how connectors work, it should be easy to follow the Connector API tutorial. All of it can be executed in the Kibana dev console. These steps should allow you to set up the connector while bypassing the failing validation step in the Kibana UI. If you run into issues following the tutorial, feel free to reach out to us on community Slack.
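Roughly, the flow from the tutorial looks like this in the dev console (the connector ID, index name, and display name below are just examples; see the tutorial for the full sequence, including scheduling and syncing):

# 1. Create the index the connector will write to
PUT my-connector-index

# 2. Register the connector and attach the index in one call
PUT _connector/my-mysql-connector
{
  "index_name": "my-connector-index",
  "name": "My MySQL connector",
  "service_type": "mysql"
}

Because the index is attached via the API, the UI validation step that calls available_indices never runs.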

Thank you for digging into the logs with us! :+1:

Thanks so much for the help @Jedr_Blaszyk! Is there any way I could get notified when the fix is released?

And is there no other way to increase the long-line limit, such as through a configuration in kibana.yml?

Yeap, I will try using the Connector API tutorial, but just one question: in the Kibana UI, when a MySQL connector is created, there is a button to generate the API key. Will calling the "POST /_security/api_key" API replicate this?

Is there any way I could get notified when the fix is released?

We have a private tracking issue for that; I left a note to update this thread once we merge the fix.

And is there no other way to increase the long-line limit, such as through a configuration in kibana.yml?

Unfortunately no, as we are hitting the HTTP GET request line length limit here.

there is a button to generate the API key. Will calling the "POST /_security/api_key" API replicate this?

Yes, exactly! The encoded value from the response body is what you need to use in your config.yml.
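Something along these lines should do it (the key name and privileges here are illustrative; scope them to your connector's index and, for a self-managed connector, the hidden .elastic-connectors* indices):

POST /_security/api_key
{
  "name": "my-mysql-connector-api-key",
  "role_descriptors": {
    "connector-access": {
      "index": [
        {
          "names": ["my-connector-index", ".elastic-connectors*"],
          "privileges": ["all"]
        }
      ]
    }
  }
}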

Alright, thanks! Will try and use the Connector API. Will reach out again if I face any issues.

Thank you so much for the help!