PySpark es.query not working; only the default "match_all" works


In PySpark, the only way I can get data back from Elasticsearch is by leaving es.query at its default. Why is this?

import json

es_query = {"match": {"key": "value"}}
es_conf = {"es.nodes": "localhost", "es.resource": "index/type", "es.query": json.dumps(es_query)}
rdd = sc.newAPIHadoopRDD(inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
                         keyClass="org.apache.hadoop.io.NullWritable",
                         valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
                         conf=es_conf)

ValueError: RDD is empty

Yet when I use

es_query = {"match_all": {}}

I get rows back, e.g.:

(u'2017-09-01 01:02:03')

*I have tested the queries by querying Elasticsearch directly and they work, so the problem appears to be in Spark/es-hadoop.

(James Baiera) #2

@buster Are you specifying the query as a string or as a map of maps? It looks like you're omitting the quotes needed to make the query a string in your posted example.
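To illustrate, here is a minimal sketch of the string form es-hadoop expects for es.query: a full Query DSL body, i.e. the match clause wrapped in a top-level "query" key and serialized to a JSON string (the field and value are the placeholders from the question):

```python
import json

# es.query must be a string containing a complete Query DSL body,
# so the match clause is nested under a top-level "query" key.
es_query = {"query": {"match": {"key": "value"}}}

es_conf = {
    "es.nodes": "localhost",
    "es.resource": "index/type",
    # json.dumps turns the map of maps into the string es-hadoop expects
    "es.query": json.dumps(es_query),
}
```

Passing the raw dict (a map of maps) instead of a JSON string is a common cause of the query being ignored or the RDD coming back empty.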

(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.