Memory issues when querying (Elasticsearch 1.4.2)

Hello everyone,
I have been using Elasticsearch version 1.4.2 to store logs from my entire application and to retrieve them through my UI application for troubleshooting.

The log volume is huge, and I am storing all logs across 31 indices in total (numbered 1 to 31), one for each day of the month.

Each index contains about 10 million documents (roughly 8-10 GB).

Indexing and storing the data causes no issues, but when I query these huge indices (especially when multiple indices are queried together), Elasticsearch throws out-of-memory exceptions and the process hangs.

The query does not use any aggregations; it is a simple search across different fields using a term filter, sorted by a date field.
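For reference, the query looks roughly like the sketch below (this is a hypothetical reconstruction; the field names "loglevel" and "@timestamp" and the index names are placeholders, not my actual mapping). It uses the ES 1.x filtered-query syntax:

```json
POST /logs-01,logs-02/_search
{
  "query": {
    "filtered": {
      "filter": {
        "term": { "loglevel": "error" }
      }
    }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ],
  "size": 50
}
```

The sort on the date field is applied across every shard of every index hit by the request.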

Are there any guidelines on how much heap should be allocated to the process, or any specific configuration that suits this kind of scenario?

I have mlockall set to true and ES_HEAP_SIZE set to 4GB. All indices (1 to 31) are created with 5 shards per index.
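Concretely, my setup corresponds to something like the following (a sketch of the settings stated above; only the values I mentioned are real, the surrounding layout is illustrative):

```yaml
# elasticsearch.yml
bootstrap.mlockall: true      # lock the heap in RAM, avoid swapping

# index defaults (applied when each daily index is created)
index.number_of_shards: 5     # 5 shards per index, 31 indices = 155 shards

# environment (e.g. /etc/default/elasticsearch)
# ES_HEAP_SIZE=4g             # 4 GB heap for the ES process
```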

This has been a big issue for me.

Please advise.

Regards,
Sagar Shah