Problem indexing ES with batches

Hello. I am reindexing in Elasticsearch with the following command:

nohup curl -XPOST http://bla:9200/_reindex -d '{ "source": { "index": "origin_index", "size": 5000 },
"dest": {"index": "dest_index" }}' > reindex_100_21.out 2> reindex_100_21.err

We have 25,853,770 documents in origin_index, but only 25,849,990 in dest_index, and there are no errors in the log... it seems that we are losing the last batch.

Is there any option to control this? Thank you very much.

What is the refresh interval of the destination index? Did you issue a refresh before counting?
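For reference, a quick way to rule out refresh lag before comparing counts is to force a refresh on the destination and then count both indices. A sketch, reusing the index names from the commands above (the host is a placeholder):

```
# Make all indexed documents visible to search before counting
curl -XPOST 'http://bla:9200/dest_index/_refresh'

# Then compare the document counts of both indices
curl -XGET 'http://bla:9200/origin_index/_count'
curl -XGET 'http://bla:9200/dest_index/_count'
```

If the counts match after the refresh, the "missing" documents were simply not yet visible to search.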

I have now relaunched the reindex with a different batch size:

nohup curl -XPOST http://ip:9200/_reindex -d '{ "source": { "index": "origin", "size": 50000 }, "dest": {"index": "dest", "op_type": "create" }}' &> reindex_50000.out

But now I can see this error:

{
  "took": "2h",
  "timed_out": false,
  "total": 25853780,
  "updated": 0,
  "created": 25700000,
  "batches": 514,
  "version_conflicts": 0,
  "noops": 0,
  "retries": 0,
  "failures": [
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [32]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [43]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [42]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [33]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [34]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [44]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [36]"}},
    {"shard": -1, "index": null, "reason": {"type": "search_context_missing_exception", "reason": "No search context found for id [37]"}}
  ]
}

So that is why we are losing batches: the scroll search context seems to be expiring before those batches are processed. Is it possible to configure a longer timeout? Thanks!
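If the search context is indeed expiring, the `_reindex` endpoint accepts a `scroll` query parameter that controls how long each search context for the source index is kept alive between batches (it defaults to a few minutes, which a large batch size like 50000 can easily exceed). A sketch, with host and index names as placeholders:

```
# Keep each scroll search context alive for up to 1 hour between batches
nohup curl -XPOST 'http://ip:9200/_reindex?scroll=1h' -d '{
  "source": { "index": "origin", "size": 5000 },
  "dest":   { "index": "dest", "op_type": "create" }
}' &> reindex_scroll.out &
```

A smaller batch size also reduces the time spent on each batch, making it less likely that any single batch outlives its search context.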