I am trying to configure the MongoDB connector in Enterprise Search to see if we can pull MongoDB collections into Elasticsearch indices. I have Elasticsearch 8.6.0, Kibana 8.6.0, Enterprise Search 8.6.0, and MongoDB 6.0.1 set up and running in a Docker swarm. Enterprise Search seems to be working and I can configure the ES index and set up the MongoDB connector, but when I try to sync it just says "Waiting for a connector to pickup" and the sync just spins.
When I look at Enterprise Search > Content > Indices in Kibana, it shows 1 "Incomplete ingest methods". If I delete the index that uses the Connector ingest method, that count goes to 0, so my MongoDB connector is clearly the incomplete one.
The index shows as "Configured", but not Connected.
I have no problem connecting to MongoDB and reading from this database/collection using the same credentials from another client such as Robo3T or the mongo shell.
Can you please check the Enterprise Search logs for potential problems with the connection from the connector's point of view?
Can you also try logging in to your Enterprise Search Docker container and checking connectivity to the MongoDB host? Docker networking can be troublesome to set up, and it's possible that the Enterprise Search container is not able to reach the MongoDB container.
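One quick way to check this from inside the container is a plain TCP probe. This is a minimal sketch; the host name `mongodb` and port `27017` are assumptions based on a typical compose/swarm service name, so substitute your own:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts alike.
        return False

if __name__ == "__main__":
    # Inside the Enterprise Search container, "mongodb" should resolve to the
    # MongoDB service if both containers share a Docker network.
    print(can_reach("mongodb", 27017))
```

If this prints False while mongosh works from your workstation, the problem is Docker networking rather than credentials.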
Thanks for the response.
I am not sure what log to look in really.
Here is /var/log/enterprise-search/connectors.log
enterprise-search@d191f0147689:/var/log/enterprise-search$ more connectors.log
[2023-01-11T11:02:29.690+00:00][7][4004][connectors][INFO]: Starting to process jobs.
/var/log/enterprise-search/app-server.log just keeps looping messages like
enterprise-search@d191f0147689:/var/log/enterprise-search$ tail app-server.log
[2023-01-11T20:04:49.326+00:00][7][17972][cron-Work::Cron::UpdateSearchRelevanceSuggestions][INFO]: Done performing task: UpdateSearchRelevanceSuggestions
[2023-01-11T20:04:49.326+00:00][7][17972][app-server][INFO]: Done running task: UpdateSearchRelevanceSuggestions
[2023-01-11T20:04:57.210+00:00][7][12296][app-server][INFO]: [5b6b4cf5-139f-48f7-b304-56df6727b7cb] Started HEAD "/" for 127.0.0.1 at 2023-01-11 20:04:57 +0000
[2023-01-11T20:04:57.213+00:00][7][12296][action_controller][INFO]: Processing by SharedTogo::HomeController#index as */*
[2023-01-11T20:04:57.215+00:00][7][12296][action_controller][INFO]: Redirected to http://10.128.8.143:3002/ws/search
[2023-01-11T20:04:57.218+00:00][7][12296][action_controller][INFO]: Completed 302 Found in 4ms
[2023-01-11T20:05:07.498+00:00][7][5936][app-server][INFO]: [ff0ea35c-8738-4218-9e3e-e0f74d141108] Started HEAD "/" for 127.0.0.1 at 2023-01-11 20:05:07 +0000
[2023-01-11T20:05:07.501+00:00][7][5936][action_controller][INFO]: Processing by SharedTogo::HomeController#index as */*
[2023-01-11T20:05:07.502+00:00][7][5936][action_controller][INFO]: Redirected to http://10.128.8.143:3002/ws/search
[2023-01-11T20:05:07.505+00:00][7][5936][action_controller][INFO]: Completed 302 Found in 3ms
None of this gave me any clue as to why it might not be connecting to Mongo.
I installed mongo shell (mongosh) in the container and I am able to successfully connect to my MongoDB from the container using the same credentials I configured in the connector.
enterprise-search@d191f0147689:/tmp/mongosh-1.6.2-linux-x64/bin$ ./mongosh mongodb://admin:XXXXXXXXXXXXXXX@mongodb:27017/emxxxxx
Current Mongosh Log ID: 63bf19ace8c3be5210220c8d
Connecting to: mongodb://<credentials>@mongodb:27017/emxxxxx?directConnection=true&appName=mongosh+1.6.2
Using MongoDB: 6.0.1
Using Mongosh: 1.6.2
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.
rs0 [direct: primary] emxxxxx>
hey @m.hanna , thanks for checking the connectivity!
I'm afraid the Docker distribution of Enterprise Search does not include the native connectors needed to use the MongoDB connector. You will need to:
Install it separately as part of your deployment. Please check this repository for instructions.
Have a Platinum subscription to run it, or a Trial to check that it suits your needs.
Alternatively, use Elastic Cloud to have everything configured for you out of the box. Elastic Cloud deployments include native connectors for Enterprise Search deployments of at least 4GB of memory.
I thought that might be the case after I posted the first message here. But I ran into some issues trying to add the connectors-python into a new docker image. I guess I will continue to work on it.
Sorry for the late reply. I got a container built with connectors-ruby installed, but it doesn't connect to Elasticsearch. I am getting the following error in the Docker logs:
/root/.rbenv/versions/2.6.9/lib/ruby/gems/2.6.0/gems/httpclient-2.8.3/lib/httpclient/ssl_socket.rb:103:in `connect': SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError)
Obviously a certificate error, but I don't see where to define the CA cert for Elasticsearch in CONFIG.md in the elastic/connectors-ruby repository on GitHub. The Python connector does have settings for ssl: and ca_certs:; however, adding those settings to connectors.yml didn't seem to work for the Ruby connector.
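For comparison, the Python connector's TLS settings look along these lines (a sketch based on the ssl: and ca_certs: options mentioned above; the host and the certificate path are placeholders):

```yaml
elasticsearch:
  host: https://elasticsearch:9200
  ssl: true
  ca_certs: /path/to/ca.crt
```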
Hi @m.hanna, could you please try setting an environment variable SSL_CERT_FILE=<PATH_TO_YOUR_CERT> and then launching the connector? This will have to be added to the Makefile or a custom bash script if you're using it.
If that works please let me know and I'll follow up with feedback on our documentation.
Thanks. Adding SSL_CERT_FILE to the environment and adding ca.crt as a secret in the docker-compose file worked.
If interested, here is the snippet for the connector:
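A compose service wired this way might look roughly like the following. This is a sketch only; the image name, service name, and paths are assumptions, not the exact file from this deployment:

```yaml
services:
  connectors:
    image: my-connectors-ruby:latest        # custom image with connectors-ruby installed
    environment:
      # Docker secrets are mounted under /run/secrets/<name>;
      # SSL_CERT_FILE lets the Ruby HTTP client trust the Elasticsearch CA.
      - SSL_CERT_FILE=/run/secrets/ca.crt
    secrets:
      - ca.crt

secrets:
  ca.crt:
    file: ./certs/ca.crt
```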