Instead of just bumping this, you could actively provide some more information. I am sure that increasing the number of CPUs is not the core of your problem.
Which logstash version are you using?
Which JVM do you run on?
How did you install logstash? (RPM, DEB, zip, etc)
Which Elasticsearch version are you using?
Which JVM do you run on?
How did you install Elasticsearch? (RPM, DEB, zip, etc)
Did you check the Elasticsearch log files?
What does your Logstash config look like?
What does your Elasticsearch config look like? How does it differ from the standard configuration?
Can you reproduce this with a regular HTTP request?
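For example, something along these lines would hit the same bulk endpoint Logstash uses (host, index and document are placeholders; depending on your Elasticsearch version you may not need the Content-Type header, and older versions may also require a _type in the action line):

```
curl -XPOST 'http://localhost:9200/_bulk' \
     -H 'Content-Type: application/x-ndjson' \
     --data-binary $'{"index":{"_index":"logstash-test"}}\n{"message":"bulk test"}\n'
```

If that plain HTTP request succeeds while the Logstash output fails, that already narrows things down quite a bit.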
So, have you checked all the log files of your nodes for the above error messages? The error has to come from somewhere, and it is likely to be logged. Is there a stack trace? That's one of the more important bits to get.
Is there any special configuration in the elasticsearch.yml file?
Is there any special configuration in the logstash configuration file? Can you show the elasticsearch output configuration section?
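For reference, a basic elasticsearch output section usually looks roughly like the sketch below (hosts and index are placeholders, not your actual values; option names vary a bit between Logstash versions, e.g. older releases used host/protocol instead of hosts):

```
output {
  elasticsearch {
    hosts => ["http://es-node-1:9200"]   # placeholder host
    index => "logstash-%{+YYYY.MM.dd}"   # default-style index pattern
  }
}
```

Anything beyond that in your output section is exactly the part I would like to see.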
Given the error message, I am still pretty sure there must be an exception somewhere in the logs - at least I hope so, in order to move forward. It could be either on the node which received the bulk request or on one of the data nodes that parts of the bulk request were forwarded to.
Can you search the logs on all of your nodes for "Message not fully read"? I am pretty surprised by that message, given that you are using the same version on all of your nodes, so I would like to get my hands on a stack trace. The message indicates that, at the transport protocol level, a message with an unexpected length was sent from one node to another. At the HTTP level, from Logstash to Elasticsearch, this means everything was fine.
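If you used the RPM or DEB packages, the logs are usually under /var/log/elasticsearch/, so something like this on each node should turn the message up together with the surrounding stack trace (adjust the path if you installed from a zip or changed path.logs):

```
# -A 40 keeps enough following lines to capture the stack trace
grep -B 2 -A 40 "Message not fully read" /var/log/elasticsearch/*.log
```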