About ES8.10.4 pytorch_inference

I'm using Elasticsearch 8.10.4. I want to run vector search with a custom model deployed through eland, following the approach described in "How to deploy NLP: text embeddings and vector search" on the Elastic Blog.

Reindexing is taking a very long time. My Elasticsearch cluster has three nodes, but only one of them is running the pytorch_inference process.

I've checked the ES8 code and confirmed that pytorch_inference is expected to run, but I don't understand why it runs on only one node rather than on all three.
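For reference, this is how I've been inspecting the deployment. A sketch of the trained model stats request (here `my_model` is a placeholder for my actual model ID); as far as I understand, the `deployment_stats.nodes` section of the response should show which ML nodes hold allocations:

```
GET _ml/trained_models/my_model/_stats
```

In my case, the response only lists a single node under the deployment, which matches what I see in the process list.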

Indexing speed is extremely slow: only about 20,000 documents per hour. Is this the expected performance?
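My understanding (which may be wrong) is that the number of allocations and threads per allocation are set when the deployment is started, and that more allocations could spread inference load across nodes. A sketch of what I believe the start-deployment request looks like (`my_model` is again a placeholder; the parameter values are examples, not my actual settings):

```
POST _ml/trained_models/my_model/deployment/_start?number_of_allocations=2&threads_per_allocation=1
```

Would increasing `number_of_allocations` make pytorch_inference run on more than one node, and could the current single allocation explain the slow reindexing?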

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.