We are testing the Elastic connectors Docker container as a replacement for monstache to push MongoDB documents into Elasticsearch.
I have configured the connectors and run a manual sync on each (all of which completed fine), but the system shows that I have orphaned syncs. The help says that orphaned syncs can occur when the connector can't be found, yet these appeared even though the manual syncs ran successfully on connectors that still exist.
GET _connector/_sync_job only shows a count of 54, which matches the number of manual syncs that I ran on my 54 connectors.
Where do these orphaned syncs come from and how do I get rid of them?
Hi @m.hanna,
Sorry for the frustration that this is causing. Can you share which versions of Connectors and Elasticsearch you're running?
"Orphaned syncs" are identified and flagged in an attempt to help users notice if they have a lot of sync job history being stored that's no longer relevant, because the associated connector has been deleted. You can find the query that's calculating this number here: kibana/x-pack/plugins/enterprise_search/server/utils/get_sync_jobs_queries.ts at ef3bc96e52f6c21bd1543d2cb48acef31f56022e · elastic/kibana · GitHub
Can you double check that, for all the sync jobs you've got listed, their connector.id corresponds to an ID that you can see in the list from GET _connector? My first assumption would be that you currently have 54 connectors and 54 sync jobs, but that only 10 of those sync jobs were run for connectors that you still have.
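If you'd rather not eyeball 54 entries by hand, something along these lines should do the cross-check (again just a sketch; the URL, credentials, and size=1000 are assumptions for a test cluster):

```python
# Cross-check: does every sync job's connector.id point at a connector
# that still shows up in GET _connector?
# ES_URL, AUTH, and size=1000 are placeholder assumptions.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

existing_ids = {
    c["id"]
    for c in requests.get(f"{ES_URL}/_connector", params={"size": 1000}, auth=AUTH).json()["results"]
}
jobs = requests.get(f"{ES_URL}/_connector/_sync_job", params={"size": 1000}, auth=AUTH).json()["results"]

for job in jobs:
    cid = job["connector"]["id"]
    flag = "ok" if cid in existing_ids else "ORPHANED (connector not found)"
    print(f"sync job {job['id']}: connector.id={cid} -> {flag}")
```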
If that theory proves incorrect, it's possible that we have a bug in our connector APIs and we're pre-filtering out orphaned jobs when you do GET _connector/_sync_job. You can compare with GET .elastic-connectors-sync-jobs/_search (note that direct access to this internal index will eventually go away, so using the connector APIs is the better habit).
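A quick way to get the two numbers side by side, if that's easier (same placeholder URL/credentials as above):

```python
# Compare what the connector API returns with what the internal index holds.
# ES_URL and AUTH are placeholder assumptions for a local test cluster.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

api = requests.get(f"{ES_URL}/_connector/_sync_job", params={"size": 1000}, auth=AUTH).json()
idx = requests.get(f"{ES_URL}/.elastic-connectors-sync-jobs/_count", auth=AUTH).json()

print("connector API - count field:", api["count"], "| results returned:", len(api["results"]))
print("internal index _count:", idx["count"])
```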
how do I get rid of them?
You delete the sync job records associated with connectors that no longer exist. They aren't hurting anything though, at least not at this small scale. It's more of a cluster-performance concern if you have thousands or millions of them taking up space needlessly.
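For completeness, here's a sketch of what that cleanup could look like, going through the sync job delete endpoint rather than touching the internal index directly (placeholder URL/credentials again, and obviously review the list before deleting anything):

```python
# Delete sync job records whose connector no longer exists.
# ES_URL, AUTH, and size=1000 are placeholder assumptions; review before running.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

existing_ids = {
    c["id"]
    for c in requests.get(f"{ES_URL}/_connector", params={"size": 1000}, auth=AUTH).json()["results"]
}
jobs = requests.get(f"{ES_URL}/_connector/_sync_job", params={"size": 1000}, auth=AUTH).json()["results"]

for job in jobs:
    if job["connector"]["id"] not in existing_ids:
        requests.delete(f"{ES_URL}/_connector/_sync_job/{job['id']}", auth=AUTH)
        print("deleted orphaned sync job", job["id"])
```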
I am running the elastic-connectors v8.15.2.0 Docker container, with Elasticsearch v8.15.0 and Kibana v8.15.0 (I do see the warning in the logs about the versions not matching, but the docs indicated this should still be ok).
I have 54 configured and connected connectors. They are all going to the same mongodb database, but to different collections. I have no scheduled sync jobs at this time. All syncs were run manually one at a time to ensure that they could connect and pull documents before scheduling sync jobs.
All of the 54 connectors still exist.
I also have a PostgreSQL connector defined in config.yml, but it has not yet been configured and connected through the Kibana UI and it has no sync jobs scheduled.
GET .elastic-connectors-sync-jobs/_search looks to have the same results as GET _connector/_sync_job: a count of 54, which matches the 54 successful manual syncs.
I agree that they are not hurting anything now, but once we start scheduled syncs on our production system, which has many more records to sync than this test system, the number of orphaned syncs could grow quickly. It would also raise the concern that syncs are not completing correctly and that we might be missing documents in Elasticsearch.
Thanks for the quick response. Let me see if I can reproduce, and I'll get back to you.
@m.hanna thanks for flagging this. I was able to reproduce the issue.
It seems that the problem occurs when you get over 10 connectors. I expect that we have a bug somewhere where we are not paginating results, and are instead assuming a single first page is the full total.
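To illustrate what I mean (this is only a sketch against the public API with placeholder URL/credentials, not the actual Kibana code), the counting logic needs to page through all results with from/size rather than stopping at the first page:

```python
# Page through GET _connector with from/size instead of assuming the first
# page of results is the full total. ES_URL/AUTH are placeholder assumptions;
# the page size of 10 is only for illustration.
import requests

ES_URL = "http://localhost:9200"
AUTH = ("elastic", "changeme")

connectors, page_size, offset = [], 10, 0
while True:
    page = requests.get(f"{ES_URL}/_connector",
                        params={"from": offset, "size": page_size},
                        auth=AUTH).json()["results"]
    connectors.extend(page)
    if len(page) < page_size:
        break
    offset += page_size

print("connectors fetched across all pages:", len(connectors))
```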
I've filed Connector UI may list syncs as "orphaned" when > 10 connectors exist · Issue #195127 · elastic/kibana · GitHub to track the issue, feel free to watch it. Unfortunately I don't have a workaround at this time, but I think you can safely ignore it, and hopefully these will go away for you when you're able to update to the next minor version of Kibana.