How large are the documents? How complex are they? Have you defined and optimized the mappings?
What calculation are you referring to? 2 billion events in 10 days works out to just over 2,300 events per second. That sounds quite low unless the documents are quite large and complex.
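For reference, here is the back-of-the-envelope calculation (this assumes the events are spread evenly across the full 10 days):

```python
# Rough sustained indexing rate for 2 billion events over 10 days,
# assuming an even spread with no peaks.
total_events = 2_000_000_000
days = 10
seconds = days * 24 * 60 * 60        # 864,000 seconds in 10 days
rate = total_events / seconds        # events per second

print(f"{rate:,.0f} events/s")       # roughly 2,315 events/s
```

A reasonably sized cluster with well-tuned bulk requests can usually sustain far more than that, which is why the document size and mappings matter here.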
The indexing throughput also depends on the specification and size of your cluster. Do you have any details on this?
I would expect extracting 2 billion events from MySQL would take a reasonably long time.
If extracting the data from MySQL could take a lot longer than a few hours, why does it need to be uploaded to Elasticsearch in a shorter timeframe than that? Can you elaborate a bit more on the use case and requirements?