Logstash processing twice, then 'block in start_workers' repeating

I'm using Logstash to pull information from Oracle Enterprise Manager, and I keep getting an error that mentions multi_receive, with 'block in start_workers' repeating over and over each time Logstash retries. My conf file below runs the query every 2 minutes. I'm exporting the data to Grafana and can see the query run twice before I just see the block. Is something in my query causing this, or do I need a different way of running Logstash? I'm running it on Windows from a bash terminal with the command "./logstash -f oem.conf". Thanks for any info that can be given.

Here is my conf file:

    input {
        jdbc {
            jdbc_validate_connection => true
            jdbc_connection_string => "connection"
            jdbc_user => "username"
            jdbc_password => "password"
            jdbc_driver_class => ""
            statement => "SELECT * FROM V$ACTIVE_SESSION_HISTORY"
            schedule => "*/2 * * * *"
        }
    }
    filter {
        if [@timestamp] {
            mutate {
                add_field => { "logstash_timestamp" => "%{@timestamp}" }
            }
        }
        mutate { remove_field => [ "force_matching_signature" ] }
        mutate { convert => [ "sample_time", "string" ] }
        date { match => [ "sample_time", "ISO8601" ] }
    }
    output {
        elasticsearch { hosts => ["localhost:9200"] }
        # stdout { codec => rubydebug }
    }

Can you paste the actual error message (and possibly the surrounding messages)?

    [2020-01-23T08:54:56,318][ERROR][logstash.outputs.elasticsearch][main] An unknown error occurred sending a bulk request to Elasticsearch. We will retry indefinitely
    {:error_message=>"\"\xE5\" from ASCII-8BIT to UTF-8",
     :error_class=>"LogStash::Json::GeneratorError",
     :backtrace=>[
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/logstash-core/lib/logstash/json.rb:27:in `jruby_dump'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'",
      "org/jruby/RubyArray.java:2584:in `map'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:119:in `block in bulk'",
      "org/jruby/RubyArray.java:1800:in `each'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/http_client.rb:117:in `bulk'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:302:in `safe_bulk'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:205:in `submit'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:173:in `retrying_submit'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.2.3-java/lib/logstash/outputs/elasticsearch/common.rb:38:in `multi_receive'",
      "org/logstash/config/ir/compiler/OutputStrategyExt.java:118:in `multi_receive'",
      "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:101:in `multi_receive'",
      "C:/Users/aiuhmb3/bin/logstash-7.5.1/logstash-core/lib/logstash/java_pipeline.rb:251:in `block in start_workers'"]}
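
That backtrace points at JSON serialization rather than the pipeline workers themselves: LogStash::Json::GeneratorError means an event field contained a byte string (the stray "\xE5") that could not be transcoded from ASCII-8BIT to UTF-8 while the elasticsearch output was building its bulk request. The 'block in start_workers' frame is just the bottom of every worker's stack, and it repeats because the output retries indefinitely. V$ACTIVE_SESSION_HISTORY has RAW columns (XID, for example) as well as VARCHAR2 columns that can hold single-byte encoded text, and either can put raw bytes into an event. If a text column is the culprit, the jdbc input's columns_charset option may help. This is only a sketch, and "module" is an illustrative column name, not a diagnosis:

    input {
        jdbc {
            # ... connection settings as in your conf ...
            # Declare the source encoding of a suspected single-byte text
            # column so the plugin transcodes it to UTF-8 before the event
            # is built. Replace "module" with whichever column actually
            # carries the offending byte.
            columns_charset => { "module" => "ISO-8859-1" }
        }
    }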

So based on this, I actually think it's something I configured in Elasticsearch. I know this started as a Logstash question, but is there anything I can do to my Elasticsearch index to have it handle my database query? The index is properly mapped, and both my Kibana and Grafana instances show documents in it.
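
Changing the index probably won't help, because the bulk request fails inside Logstash before anything reaches Elasticsearch; the payload has to be valid UTF-8 first. If the bytes come from a RAW column, one alternative is to enumerate the columns in the statement and render RAW values as hex. This is a sketch under that assumption: the column list is illustrative, not the view's full set, and XID is only a guess at the offender:

    input {
        jdbc {
            jdbc_validate_connection => true
            jdbc_connection_string => "connection"
            jdbc_user => "username"
            jdbc_password => "password"
            jdbc_driver_class => ""
            # Enumerate columns instead of SELECT *; RAWTOHEX renders the
            # RAW column as a plain hex string that serializes cleanly.
            statement => "SELECT sample_id, sample_time, session_id, sql_id, event, RAWTOHEX(xid) AS xid FROM V$ACTIVE_SESSION_HISTORY"
            schedule => "*/2 * * * *"
        }
    }

Re-enabling the commented-out stdout { codec => rubydebug } output is also a quick way to see which field is carrying the raw bytes.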

