I ran into the same issue today. In my case it was caused by Logstash itself failing to start correctly due to an invalid configuration. However, instead of stopping and exiting, the process stayed alive and kept printing this error to the log.
Make sure your Logstash configuration is correct and that there are no other errors when it starts.
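One way to catch this is to validate the pipeline configuration before starting Logstash for real. Here's a minimal sketch; the config path /etc/logstash/conf.d/pipeline.conf is just an example, so adjust it for your setup:

```sh
# Parse the pipeline config and exit without starting Logstash.
# --config.test_and_exit (short form: -t) reports any config errors.
bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/pipeline.conf
```

If the configuration is valid, Logstash should report that and exit; otherwise it prints the parse error, which is usually more readable than the errors that show up in the log after a bad startup.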
I've had this same issue, but there is nothing wrong with my configuration: nothing has changed, and I was using the same MySQL Java connector I had used on tons of 1 GB+ files before. Now, however, the process stays alive and doesn't exit when complete.
I believe I am seeing the same thing with the CSV imports I have been doing.
Any thoughts on connector issues after this update?