I'm using Logstash to pull information from Oracle Enterprise Manager, and I keep getting an error relating to multi_receive, with 'block in start_workers' repeating over and over whenever Logstash retries. I have the below conf file, which queries every 2 minutes. I'm exporting this data to Grafana and can see that the query runs twice before I just see the block. Is something in my query causing this? Do I need a different way of running Logstash? I'm running it on Windows via a bash terminal with the command "./logstash -f oem.conf". Thanks for any info that can be given.
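The pipeline follows this general shape — this is a redacted sketch only; the JDBC connection string, credentials, driver path, query, and index name below are placeholders, not my actual values:

```conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@//oemhost:1521/oemdb"  # placeholder
    jdbc_user => "sysman"                                               # placeholder
    jdbc_driver_library => "C:/drivers/ojdbc8.jar"                      # placeholder
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    schedule => "*/2 * * * *"                                           # run every 2 minutes
    statement => "SELECT ... FROM sysman.mgmt$metric_current"           # placeholder query
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "oem-metrics"                                              # placeholder
  }
}
```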
[2020-01-23T08:54:56,318][ERROR][logstash.outputs.elasticsearch][main] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely {:error_message=>"\"\xE5\" from ASCII-8BIT to UTF-8", :error_class=>"LogStash::Json::GeneratorError", :backtrace=>["C:/Users/aiuhmb3/bin/logstash-7.5.1/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:2584:in `map'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'", "org/jruby/RubyArray.java:1800:in `each'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:302:in `safe_bulk'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:205:in `submit'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:173:in `retrying_submit'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'", "C:/Users/aiuhmb3/bin/logstash-7.5.1/logstash-core/lib/logstash/java_pipeline.rb:251:in `block in start_workers'"]}
So based on this, I actually think it's something I configured in Elasticsearch. I know this started as a Logstash question, but is there anything I can do on the Elasticsearch side to have the index handle my database query? The index is properly mapped, and both my Kibana and Grafana instances show documents in it.
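From what I can tell, though, the `"\xE5" from ASCII-8BIT to UTF-8` part of the message means a column value coming back from Oracle contains a byte (0xE5 is 'å' in Latin-1) that isn't valid UTF-8, so the JSON serializer fails before the bulk request ever reaches the index. If that's right, the fix would be on the Logstash side rather than in the mapping — for example, declaring the source character set on the jdbc input. This is just a sketch assuming the database data is Latin-1 (adjust to the DB's actual charset), and `target_name` below is a hypothetical column name:

```conf
input {
  jdbc {
    # ... existing connection/query settings ...
    # Tell the plugin the bytes coming back are Latin-1 so they can be
    # transcoded to UTF-8 (assumption: match this to the DB's real charset)
    charset => "ISO-8859-1"
    # Or per-column, if only some columns are affected:
    # columns_charset => { "target_name" => "ISO-8859-1" }
  }
}
```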