Yeah, this is unfortunately a hard limitation on the agg framework right now. All aggs convert the values to a double before operating on them. I'm not sure if that'll change, at least easily... it's a pretty fundamental part of how the agg framework operates.
Before anything else, I think it should be noted that losing precision usually isn't a problem for IDs and the like. 64-bit doubles have 52-bit mantissas, so the largest integer you can store exactly is 2^53. See this SO answer for a breakdown on why. So that gives you values from 0 to roughly 9 quadrillion that can be stored without loss of precision. For most folks that's fine, but if you expect to have a max > 2^53 then it's a concern.
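To make the cutoff concrete, here's a quick Python sketch (Python floats are the same IEEE 754 64-bit doubles) showing that integers round-trip exactly up through 2^53 and start colliding just past it:

```python
# 64-bit doubles have a 53-bit significand (52 stored mantissa bits plus
# an implicit leading bit), so integers up to 2**53 round-trip exactly.
assert float(2**53 - 1) == 2**53 - 1    # exact
assert float(2**53) == 2**53            # still exact

# Beyond 2**53, adjacent integers collapse onto the same double,
# which is exactly the precision loss an agg on a big ID would hit.
assert float(2**53 + 1) == float(2**53)
```

So as long as your IDs stay under ~9 quadrillion, the double conversion is lossless.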
As a workaround, you could do a search with size: 1, sorted by the value descending. That gives you the document with the highest value, and you can extract it from the source. Not a great solution, since it only really applies to min/max and not the other aggs, but it may help.
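For example, assuming an index with a numeric field called `my_id` (hypothetical name), the workaround request body might look something like:

```json
{
  "size": 1,
  "sort": [
    { "my_id": { "order": "desc" } }
  ],
  "_source": [ "my_id" ]
}
```

Sorting operates on the original field values rather than running them through the agg framework's double conversion, so the value you pull out of `_source` keeps its full long precision. Flip `"order"` to `"asc"` for the min equivalent.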
We've stayed away from BigDecimal so far because it's just really, really slow. The overhead of using it is a no-go for performant aggs. If/when we want support for longs (or other non-floats), I think we'd probably just integrate it as a framework feature... defaulting to non-float where applicable (min/max/sum can stay integral, avg never can, etc.).