Can you clarify: Are you trying to index 50K different sites (website domains), or 50K different pages within one website?
The crawler should be able to crawl 50K pages on a single website, but you'll need to uncomment and increase these settings in enterprise-search.yml:
#crawler.crawl.max_unique_url_count.limit: 10000
#crawler.crawl.max_duration.limit: 86400 # seconds
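For example, to give a 50K-page crawl enough headroom, you could uncomment them and raise the values along these lines (the numbers here are illustrative, not recommendations — tune them to your site):

```yaml
# Allow up to 60K unique URLs per crawl (default is 10K)
crawler.crawl.max_unique_url_count.limit: 60000
# Allow the crawl to run for up to 48 hours (default is 24 hours)
crawler.crawl.max_duration.limit: 172800 # seconds
```

Larger crawls can simply take longer, so raising the URL limit without also raising the duration limit may cause the crawl to be cut off before it finishes.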
As for 50K sites... That's an interesting use case! I'd love to learn more about it. I don't know of any reason why it wouldn't work, but I haven't tried anywhere near that many myself.
You mentioned that initiating a crawl started and then promptly died.
Could you share any error messages from the application logs under log/crawler.log?
Have you configured crawl rules for any of the sites? If the crawl rules reject the entry point URL itself, the crawler will start and then stop immediately, because it concludes it has nothing to do.