Fatal error in "block in setup_after_successful_connection"

Hi there,
I have been trying to solve this problem for a day. I couldn't find any solutions.

    [2020-11-12T20:07:47,943][INFO ][logstash.filters.geoip   ][main] Using geoip database {:path=>"/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"}
    [2020-11-12T20:07:48,183][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
    [2020-11-12T20:07:48,508][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"cowrie-logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"cowrie-logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
    [2020-11-12T20:07:48,704][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/beats-input.conf"], :thread=>"#<Thread:0x700dddcf run>"}
    [2020-11-12T19:53:03,676][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError: LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:80:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:332:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:319:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:414:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:326:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:341:in `exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:359:in `rollover_alias_exists?'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:91:in `maybe_create_rollover_alias'", 
"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/ilm.rb:10:in `setup_ilm'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:50:in `block in setup_after_successful_connection'"]}

I got this error while trying to ship Cowrie honeypot logs.
Here is my conf file:

    output {
        if [type] == "cowrie" {
            elasticsearch {
                hosts => ["127.0.0.1:9200"]
                user => "logstash_internal"
                password => "XXXXXXXX"
                ilm_enabled => auto
                ilm_rollover_alias => "cowrie-logstash"
            }
            #file {
            #    path => "/tmp/cowrie-logstash.log"
            #    codec => json
            #}
            stdout {
                codec => rubydebug
            }
        }
    }
I gave privileges to logstash_internal user.

So Logstash is able to connect to Elasticsearch, but when it tries to set up ILM it sends a request to Elasticsearch to determine whether the rollover alias exists, and that request is getting an error response. The Elasticsearch logs may give a better error message explaining why it is rejecting the request.
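As a quick sketch, you can reproduce that alias-existence check by hand with curl (assuming Elasticsearch on 127.0.0.1:9200 and the credentials from the output config above; the password is the placeholder from the post). The HTTP status code and response body should show why the request is rejected:

```shell
# Ask Elasticsearch whether the rollover alias exists, roughly as the
# ILM setup step does. A 403 points at missing privileges; a 200 returns
# the alias-to-index mapping; a 404 means the alias simply does not exist.
curl -s -w '\nHTTP %{http_code}\n' \
     -u logstash_internal:XXXXXXXX \
     'http://127.0.0.1:9200/_alias/cowrie-logstash?pretty'
```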

One possibility would be a 403 (possibly your user has the wrong role).

Check the Elasticsearch logs, and enable log.level debug in Logstash in case you have a problem with certificates or keys.
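For reference, the log level can be raised either for a one-off run or persistently (paths below assume a package install under /etc/logstash; adjust as needed):

```shell
# Option 1: one-off run with debug logging
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level debug

# Option 2: make it persistent by setting log.level in logstash.yml
echo 'log.level: debug' | sudo tee -a /etc/logstash/logstash.yml
```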


Thank you, Badger.

The logstash_internal user has the logstash_writer role, which has all cluster privileges and, as index privileges, all privileges on the cowrie-logstash-* indices.
I changed the logging level to debug, but I am not sure where to look; there is too much information.
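The role and user definitions can also be inspected directly through the security API, to confirm they match what you expect (a sketch; it assumes the security APIs are enabled and uses a superuser such as the built-in elastic account, whose password below is a placeholder):

```shell
# Show the cluster and index privileges attached to the role.
curl -u elastic:CHANGEME 'http://127.0.0.1:9200/_security/role/logstash_writer?pretty'

# Show the user and confirm the role is actually assigned to it.
curl -u elastic:CHANGEME 'http://127.0.0.1:9200/_security/user/logstash_internal?pretty'
```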

There is a debug log entry just before the fatal error. Could it be related?

    [2020-11-12T21:38:40,692][DEBUG][logstash.outputs.elasticsearch][main] Found existing Elasticsearch template. Skipping template management {:name=>"cowrie-logstash"}

No, I do not think that is an issue.
