yuecong | July 16, 2019, 6:06am
Hmm, now Kibana is back. Could you explain what that means in the Kibana logs?
I am using the Spark ES connector to dump data into the ES cluster, and I got a similar error on the Spark side as in this ticket:
    I am using Spark Streaming to dump data from Kafka to ES, and I get the following errors:
org.apache.spark.sql.streaming.StreamingQueryException: Job aborted due to stage failure: Task 6 in stage 4888.0 failed 4 times, most recent failure: Lost task 6.3 in stage 4888.0 (TID 58565, 10.139.64.27, executor 256): org.apache.spark.util.TaskCompletionListenerException: org.elasticsearch.hadoop.rest.EsHadoopRemoteException: circuit_breaking_exception: [parent] Data too large, data for [<http_request>]…
But for that one, I can easily work around it by restarting the Spark jobs.
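For context, the circuit breaker usually trips because the bulk requests the connector sends are too large for the ES heap. A minimal sketch of capping the bulk size via the connector's documented settings (`es.batch.size.bytes`, `es.batch.size.entries`); the host, index name, and values below are placeholder assumptions, not my actual job config:

```python
# Hypothetical ES-Hadoop connector options that cap the size of each
# bulk request; smaller bulks put less pressure on the ES parent
# circuit breaker. All values are illustrative, not recommendations.
es_options = {
    "es.nodes": "es-host:9200",         # placeholder ES host
    "es.resource": "my-index",          # placeholder index name
    "es.batch.size.bytes": "1mb",       # max bytes per bulk request
    "es.batch.size.entries": "500",     # max documents per bulk request
    "es.batch.write.retry.count": "6",  # retry transient bulk failures
}

# In a Structured Streaming job these would be passed to the writer,
# roughly like:
#   df.writeStream.format("es").options(**es_options) \
#     .option("checkpointLocation", "/tmp/ckpt").start("my-index")
```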