I've had some success working with the Spark SQL CLI to access our ES data.
The pushdown feature is working great!
However, I'm stuck on how to specify a date range in the WHERE clause so that it gets pushed down to ES. I tried:
SELECT count(member_id), member_id
FROM data
WHERE response_timestamp > CAST('2015-03-01' AS date)
  AND response_timestamp < CAST('2015-03-31' AS date)
GROUP BY member_id;
The query runs, but the range filter is not pushed down to ES.
Is this even possible with the current es-hadoop version?
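For reference, what I'd expect the pushdown to produce on the ES side is an ordinary range query against the timestamp field. A sketch (the field name and date bounds are taken from my query above; this is just the standard ES query DSL, not anything es-hadoop-specific):

```python
import json

# The range query I'd expect es-hadoop to send to ES if the WHERE
# clause were pushed down; "response_timestamp" and the bounds match
# the SQL above.
range_query = {
    "query": {
        "range": {
            "response_timestamp": {
                "gt": "2015-03-01",
                "lt": "2015-03-31",
            }
        }
    }
}

print(json.dumps(range_query, indent=2))
```

If es-hadoop can't generate this from a CAST expression, is there a different way to write the predicate (e.g. comparing against plain date strings) that its filter translation does recognize?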