I am using this with Tableau. I had to flatten my data (remove the nested JSON), then with Spark's HiveThriftServer2 I can connect from Tableau. It's not complicated and fairly easy to set up.
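For the flattening step, a small helper like this is enough (a hypothetical sketch, not part of elasticsearch-hadoop: it collapses nested objects into dotted column names that SQL tools can handle):

```python
import json

def flatten(doc, parent_key="", sep="."):
    """Recursively flatten a nested dict into dotted keys,
    e.g. {"user": {"name": "bob"}} -> {"user.name": "bob"}."""
    items = {}
    for key, value in doc.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, prefixing with the parent key
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

doc = json.loads('{"user": {"name": "bob", "geo": {"lat": 48.8}}, "msg": "hi"}')
print(flatten(doc))  # {'user.name': 'bob', 'user.geo.lat': 48.8, 'msg': 'hi'}
```

Arrays need a separate decision (explode into rows or keep as strings), so this sketch only handles nested objects.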
But it can be a bit slow, because Spark has to fetch all the data from ES over HTTP before processing the SQL query. Make sure the Spark workers are located close to the ES nodes, and also look at "CACHE TABLE" in SparkSQL to avoid re-fetching on every query.
Another way is to export your ES data to a big JSON file and load that file with Spark. This is the nicest way to play with the data: no nested JSON issues, just pleasure.
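One thing to watch with the export approach: Spark's JSON reader expects one JSON document per line (JSON Lines), not one big JSON array. A minimal sketch of writing such a file (the document list and file name here are made-up examples, e.g. hits pulled from ES via the scroll API):

```python
import json

# Hypothetical batch of documents pulled from ES
docs = [
    {"user": "alice", "msg": "hello"},
    {"user": "bob", "msg": "world"},
]

# Write JSON Lines: one document per line, which Spark's
# JSON data source can load directly.
with open("es-export.json", "w") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")
```

Once the file is on disk (or better, on HDFS/S3 so it is visible to all workers), Spark can infer the schema from it and register it as a table for the Thrift server.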
Don't forget to add the elasticsearch-hadoop JAR to the job classpath. Here is how I am running the Hive server:
/usr/local/spark/bin/spark-submit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 \
- elasticsearch-hadoop.jar is located in /opt/jars in my setup.