EsHadoopInvalidRequest: Malformed scrollId caused by es.scroll.limit

(geantbrun) #1

The following simple command in Spark 2.2.0 works well:

df = (spark.read.format("org.elasticsearch.spark.sql")
    .option("es.nodes", "{my es nodes}")
    .option("es.port", "9200")
    .option("", "")
    .option("", "")
    .option("", "")
    .option("", "")
    .option("", "")
    .option("", "")
    .option("pushdown", "true")
    .load("{my index/type}"))

If I add the option named in the topic title:

.option("es.scroll.limit", ...)
I get the following error:

{...} Caused by: 
      ElasticsearchIllegalArgumentException[Malformed scrollId []]

Anyone can help?
Thank you
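For reference, a minimal sketch of the reader configuration described above with the scroll option added. The node addresses, index name, and the limit value "1000" are placeholders for illustration, not values from the original post:

```python
# Hypothetical PySpark sketch (placeholders, not the original job).
# es.scroll.limit caps the total number of hits each task reads
# from its scroll; the value below is an arbitrary example.
df = (spark.read.format("org.elasticsearch.spark.sql")
      .option("es.nodes", "es-host-1,es-host-2")  # placeholder nodes
      .option("es.port", "9200")
      .option("es.scroll.limit", "1000")          # option under discussion
      .option("pushdown", "true")
      .load("my-index/my-type"))                  # placeholder index/type
```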

(James Baiera) #2

Could you collect TRACE level logs for the job as well as a stack trace for your error and share them here?

(geantbrun) #3

May I ask how to collect the TRACE level logs? Do I have to update the log4j.properties file located in the spark/conf directory, as seen here?

Stack trace (I had to cut some lines because the body of my answer was too long). Note that the first command worked but the second one caused an error:
In [7]:
(truncated dataframe output with a "date" column; only showing top 20 rows)

In [8]:
17/05/17 15:54:14 WARN TaskSetManager: Lost task 0.0 in stage 7.0 (TID 41, executor 2): ElasticsearchIllegalArgumentException[Malformed scrollId []]


at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)

17/05/17 15:54:14 ERROR TaskSetManager: Task 0 in stage 7.0 failed 4 times; aborting job

Py4JJavaError Traceback (most recent call last)
in ()
----> 1

/usr/local/spark/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
315 """
316 if isinstance(truncate, bool) and truncate:
--> 317 print(self._jdf.showString(n, 20))
318 else:
319 print(self._jdf.showString(n, int(truncate)))

/usr/local/spark/python/lib/ in __call__(self, *args)
1131 answer = self.gateway_client.send_command(command)
1132 return_value = get_return_value(
-> 1133 answer, self.gateway_client, self.target_id,
1135 for temp_arg in temp_args:

/usr/local/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
317 raise Py4JJavaError(
318 "An error occurred while calling {0}{1}{2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(

Py4JJavaError: An error occurred while calling o110.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 45, executor 0): ElasticsearchIllegalArgumentException[Malformed scrollId []]

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1456)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1456)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:803)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:803)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1684)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1628)
at org.apache.spark.util.EventLoop$$anon$
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2015)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2036)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
at org.apache.spark.sql.Dataset$$anonfun$57.apply(Dataset.scala:2769)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2768)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2325)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at py4j.reflection.MethodInvoker.invoke(
at py4j.reflection.ReflectionEngine.invoke(
at py4j.Gateway.invoke(
at py4j.commands.AbstractCommand.invokeMethod(
at py4j.commands.CallCommand.execute(
Caused by: ElasticsearchIllegalArgumentException[Malformed scrollId []]

at java.util.concurrent.ThreadPoolExecutor$
... 1 more

(James Baiera) #4

I believe that logging solution should work. Could you also include which versions of Elasticsearch and ES-Hadoop you are using?

(geantbrun) #5

ES 1.7.1 and elasticsearch-spark-20_2.11-5.1.2.
Sorry for the question, but I don't know which line to update among the following properties of the log4j file:

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hive.ql.exec.FunctionRegistry=ERROR

Should I update the log4j.logger.org.apache.spark.repl.Main line (I'm in the pyspark shell) and set TRACE instead of WARN?
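For reference, a sketch of what enabling connector-level TRACE logging might look like. Rather than raising the shell's root level, one can add loggers scoped to the ES-Hadoop packages; the logger names below follow the connector's package layout and are an assumption, not taken from the original thread:

```properties
# Hypothetical additions to spark/conf/log4j.properties:
# TRACE on the ES-Hadoop REST layer logs every request/response
# exchanged with Elasticsearch (very verbose - use only for debugging).
log4j.logger.org.elasticsearch.hadoop.rest=TRACE
log4j.logger.org.elasticsearch.spark=TRACE
```

Leaving log4j.rootCategory at INFO keeps the rest of the Spark output readable while the connector logs at TRACE.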

(James Baiera) #6

This looks like it might be a backwards compatibility bug with Elasticsearch 1.7. Could you open an issue for this on Github?

(geantbrun) #7

Sure. Could you tell me where please? I don't want to open the issue at the wrong place!

(James Baiera) #8

You can open it on the elasticsearch-hadoop repository on GitHub. Thank you!

(geantbrun) #9

Done. Thanks James.

(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.