Sorry you're experiencing issues with indexing large amounts of data into App Search. I hope we can help you resolve those issues ASAP.
As Jason and Orhan have already pointed out, we may need a bit more information to troubleshoot your specific situation.
We were able to write up to 14 million records; after that, App Search crashes with "too many open connections" errors. I have rebooted the server, but no luck.
This is concerning since we've never seen anything like that before and it'd be extremely helpful to get more information on the following:
What specific errors are you seeing and where?
Could you run the following command on the server running App Search while the issue is happening?

$ ps axuww | grep java | grep app-search

Run this and note the process id (the numeric value in the second column).
Then run the following and share the results with us if possible (PROCESS_ID is the process id value from the previous command):

$ lsof -n -p PROCESS_ID
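For reference, the two steps above can be combined into one snippet (a sketch assuming standard Linux ps/grep/awk/lsof; the `[j]ava` pattern just keeps grep from matching its own command line):

```shell
# Find the App Search java process id (second column of ps output).
PID=$(ps axuww | grep '[j]ava' | grep app-search | awk '{print $2}')

if [ -n "$PID" ]; then
  # Count open files/sockets for that process -- a number near the
  # ulimit for the process would explain "too many open connections".
  lsof -n -p "$PID" | wc -l
else
  echo "no app-search java process found"
fi
```

Comparing that count against `ulimit -n` for the user running App Search should show how close the process is to its file-descriptor limit.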
It is processing only 100,000 records per day.
Ingest rates depend heavily on the code performing the ingestion. If you need to ingest a lot of data relatively quickly, you need to ensure you are using batch indexing requests (up to 100 documents in one batch), using multiple parallel indexing requests (processes, threads, etc., depending on the code that does the ingestion), and monitoring the health of your Elasticsearch cluster to ensure it can keep up with the data you are pushing into it. I'm fairly certain App Search could handle a lot more in a day than the numbers you're seeing, so I'd recommend not settling for that number and looking into ways to dramatically increase it.
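As a rough illustration of the batching-plus-parallelism approach, here is a minimal Python sketch. The `index_batch` callable is a placeholder for whatever client call you use (e.g. a POST to the engine's documents endpoint); the function names and worker count here are assumptions, not App Search APIs:

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 100  # App Search accepts at most 100 documents per indexing request


def chunk(docs, size=BATCH_SIZE):
    """Split a list of documents into batches no larger than `size`."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]


def index_all(docs, index_batch, workers=4):
    """Index `docs` by calling `index_batch(batch)` from several threads.

    `index_batch` is a hypothetical hook: plug in your own client call
    that sends one batch of up to 100 documents to App Search.
    Returns whatever each `index_batch` call returned.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(index_batch, chunk(docs)))
```

Tuning the worker count while watching Elasticsearch cluster health is usually how you find the sustainable ingest rate.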
Is there any limit in App Search?
There are no limits on the number of records indexed into the system or on the ingestion rate. It all depends on available resources and proper sizing of components (mainly Elasticsearch).
Thank you for using our product!
App Search Tech Lead