Hello everyone,
I have an Elastic instance where I am trying to set up a custom OpenAI connector to a machine with a locally hosted LLM, for use within the AI Assistant. Since 8.17 it is possible to use an OpenAI-compatible API connector to accomplish this.
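For reference, the connector configuration is equivalent to something like the following Kibana connector API call (Kibana host, credentials, endpoint URL, and model name are placeholders; I actually configured the same values through the Kibana UI with the provider set to the OpenAI-compatible option):

```bash
# Sketch of the connector setup, expressed as a Kibana connector API call.
# Kibana host, credentials, apiUrl, and model are placeholders for my setup.
curl -X POST "https://my-kibana:5601/api/actions/connector" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic:changeme \
  -d '{
    "name": "Local LLM (LM Studio)",
    "connector_type_id": ".gen-ai",
    "config": {
      "apiProvider": "Other",
      "apiUrl": "http://llm-host:1234/v1/chat/completions",
      "defaultModel": "my-local-model"
    },
    "secrets": {
      "apiKey": "placeholder-key"
    }
  }'
```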
Unfortunately, when I send a prompt from the AI Assistant, I get rather generic errors in the AI Assistant chat:
ActionsClientChatOpenAI: an error occurred while running the action - Status code: undefined. Message: Unexpected API Error: ERR_CANCELED - canceled.
ActionsClientChatOpenAI: an error occurred while running the action - Unexpected API Error: - Request was aborted.
I can see the incoming request from Kibana in the logs of LM Studio (the application which hosts the LLM). Shortly after the request from Kibana arrives on the LLM host, however, LM Studio logs "Client disconnected. Stopping generation ..." before the LLM has finished generating a response. When I send a prompt from the Kibana host directly to the API endpoint via curl, I do get a response back, so I suspect the issue is on the Elastic/Kibana side. The connector test itself succeeds.
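The direct curl test that works looks roughly like this (host, port, and model name are placeholders; LM Studio exposes an OpenAI-compatible endpoint under /v1 by default):

```bash
# Direct request from the Kibana host to the locally hosted LLM.
# Host, port, and model name are placeholders for my actual setup.
curl http://llm-host:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-local-model",
    "messages": [{"role": "user", "content": "Hello, can you hear me?"}]
  }'
# This returns a full completion, so the API itself is reachable and working.
```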
Thank you in advance!