Logstash: [ERROR][logstash.javapipeline ][main] Pipeline error | Cannot get new connection from pool

I am trying to start a Logstash pipeline, but after a long search I still do not know what I am doing wrong. I am getting this output:

C:\Users\Name\ElasticStack\logstash-8.0.1>.\bin\logstash.bat -f erste-pipeline.conf
"Using bundled JDK: ."
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to C:/Users/Name/ElasticStack/logstash-8.0.1/logs which is now configured via log4j2.properties
[2022-03-25T13:27:40,256][INFO ][logstash.runner          ] Log4j configuration path used is: C:\Users\Name\ElasticStack\logstash-8.0.1\config\log4j2.properties
[2022-03-25T13:27:40,272][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.0.1", "jruby.version"=>"jruby (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [mswin32-x86_64]"}
[2022-03-25T13:27:40,272][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-03-25T13:27:40,365][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-03-25T13:27:42,138][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-03-25T13:27:43,228][INFO ][org.reflections.Reflections] Reflections took 94 ms to scan 1 urls, producing 120 keys and 417 values
[2022-03-25T13:27:44,231][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-03-25T13:27:44,278][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://localhost:9200"]}
[2022-03-25T13:27:44,559][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_internal:xxxxxx@localhost:9200/]}}
[2022-03-25T13:27:45,137][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@localhost:9200/"}
[2022-03-25T13:27:45,153][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.0.1) {:es_version=>8}
[2022-03-25T13:27:45,153][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-03-25T13:27:45,215][INFO ][logstash.outputs.elasticsearch][main] Config is compliant with data streams. `data_stream => auto` resolved to `true`
[2022-03-25T13:27:45,215][INFO ][logstash.outputs.elasticsearch][main] Config is compliant with data streams. `data_stream => auto` resolved to `true`
[2022-03-25T13:27:45,215][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-03-25T13:27:45,262][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-03-25T13:27:45,340][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["C:/Users/Name/ElasticStack/logstash-8.0.1/erste-pipeline.conf"], :thread=>"#<Thread:0x1a8d77f4 run>"}
[2022-03-25T13:27:46,105][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.7}
[2022-03-25T13:27:46,136][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>""}
[2022-03-25T13:27:47,261][ERROR][logstash.javapipeline    ][main] Pipeline error {:pipeline_id=>"main", :exception=>#<Elasticsearch::Transport::Transport::Error: Cannot get new connection from pool.>, :backtrace=>["C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/transport/base.rb:282:in `perform_request'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/transport/http/manticore.rb:85:in `perform_request'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-7.17.0/lib/elasticsearch/transport/client.rb:197:in `perform_request'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.0/lib/elasticsearch.rb:93:in `elasticsearch_validation_request'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.0/lib/elasticsearch.rb:51:in `verify_elasticsearch'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-7.17.0/lib/elasticsearch.rb:40:in `method_missing'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-7.17.0/lib/elasticsearch/api/actions/ping.rb:38:in `ping'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.12.1/lib/logstash/inputs/elasticsearch.rb:479:in `test_connection!'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/logstash-input-elasticsearch-4.12.1/lib/logstash/inputs/elasticsearch.rb:243:in `register'", "C:/Users/Name/ElasticStack/logstash-8.0.1/vendor/bundle/jruby/2.5.0/gems/logstash-mixin-ecs_compatibility_support-1.3.0-java/lib/logstash/plugin_mixins/ecs_compatibility_support/target_check.rb:48:in `register'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:232:in 
`block in register_plugins'", "org/jruby/RubyArray.java:1821:in `each'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:231:in `register_plugins'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:390:in `start_inputs'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:315:in `start_workers'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:189:in `run'", "C:/Users/Name/ElasticStack/logstash-8.0.1/logstash-core/lib/logstash/java_pipeline.rb:141:in `block in start'"], "pipeline.sources"=>["C:/Users/Name/ElasticStack/logstash-8.0.1/erste-pipeline.conf"], :thread=>"#<Thread:0x1a8d77f4 run>"}
[2022-03-25T13:27:47,261][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2022-03-25T13:27:47,308][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2022-03-25T13:27:47,386][INFO ][logstash.runner          ] Logstash shut down.
[2022-03-25T13:27:47,386][FATAL][org.logstash.Logstash    ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-]
        at C_3a_.Users.Name.ElasticStack.logstash_minus_8_dot_0_dot_1.lib.bootstrap.environment.<main>(C:\Users\Name\ElasticStack\logstash-8.0.1\lib\bootstrap\environment.rb:94) ~[?:?]

My pipeline.conf (if it's even correct):

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
  beats {
    port => "5044"
  }
  elasticsearch {
    user => logstash_internal
    password => logstash
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
# }
output {
  elasticsearch {
    hosts => [ "https://localhost:9200" ]
    ssl => true
    cacert => 'C:\Users\Name\ElasticStack\logstash-8.0.1\config\certs\http_ca.crt'
    user => logstash_internal
    password => logstash
  }
}

Thanks for any help.

What is the reason for this as part of the beats input?

So that Logstash is able to manage index templates, create indices, and write and delete documents in the indices it creates. That's how I understood this: https://www.elastic.co/guide/en/logstash/current/ls-security.html

If you are getting your data from Elasticsearch, then your input {} would just be elasticsearch. If you are getting data from a Beats module, it would look like this:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
What is sending your data to Logstash?

The output part is for sending the data Logstash received somewhere, in your case Elasticsearch, which is where the credentials and roles come into play.


If I use this exact output, Logstash runs but tells me it is trying to connect to a dead ES instance:

[2022-03-25T14:34:30,951][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}

Elasticsearch is refusing the input because it is plaintext HTTP traffic on an HTTPS channel:

[2022-03-25T14:34:51,128][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [SVVBMLOGK01] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/, remoteAddress=/}

I thought I could do it with this:

So if I put this in and try it, I get this response:

[2022-03-25T14:47:31,873][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://localhost:9200/'"}

If I put it in with the user and password, it seems to work: I get the logs now and I see them in Logstash, but I do not get any output in Kibana. It says it cannot write there:

[2022-03-25T14:59:19,473][INFO ][logstash.outputs.elasticsearch][main][Long Number] Retrying failed action {:status=>403, :action=>["index", {:_id=>nil, :_index=>"filebeat-8.0.1", :routing=>nil}, {"ecs"=>{"version"=>"8.0.0"}, "tags"=>["beats_input_codec_plain_applied"], LOGMESSAGE "offset"=>25189}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with roles [logstash_writer] on indices [filebeat-8.0.1], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}

I gave the logstash_writer role the index privileges shown in the message, but it still does not write into Elasticsearch.
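For reference, here is a sketch of how those privileges could be granted via the role API; the filebeat-* index pattern and the exact privilege list are assumptions based on the error message above and on the Elastic security docs linked earlier:

```console
PUT _security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["auto_configure", "write", "create", "create_index", "manage", "manage_ilm"]
    }
  ]
}
```

Note that the index pattern must actually match the indices Logstash writes to (here filebeat-8.0.1), otherwise the 403 persists.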

Firstly, is your Elasticsearch setup correctly configured for HTTPS traffic? If not, you will have problems. As certificates can be difficult to set up or troubleshoot in someone else's environment, I would advise first checking whether everything works without SSL/TLS over HTTPS.

What errors do you get with the suggested setup using HTTP? Also, is the address in your Logstash output {} reachable by Logstash? Your errors show a connectivity error, a security error, and a misconfiguration of HTTPS.

It is easier to start by fixing the connectivity error, using a curl command to see what response you get:

curl -X GET http://<localhost or IP address>:9200/_cluster/health?pretty -u username:password

curl -X GET https://<localhost or IP address>:9200/_cluster/health?pretty -u username:password -k

Isn't it generated automatically? SSL is auto-configured in my elasticsearch.yml.
So I do not know: should I deactivate it all manually?
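For context, the security section that Elasticsearch 8 auto-generates in elasticsearch.yml typically looks roughly like this (exact keys and paths may differ in your install); disabling TLS manually would mean flipping these enabled flags to false:

```yaml
# Security auto configuration (generated by Elasticsearch on first start)
xpack.security.enabled: true

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
```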

C:\Users\Name>curl -X GET curl -X GET http://localhost:9200/_cluster/health?pretty -u elastic:password
curl: (6) Could not resolve host: curl
curl: (52) Empty reply from server
C:\Users\Name>curl -X GET curl -X GET https://localhost:9200/_cluster/health?pretty -u elastic:password -k
curl: (6) Could not resolve host: curl
{
  "cluster_name" : "name",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 13,
  "active_shards" : 13,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 92.85714285714286
}

If I change the user to elastic with its password in the pipeline.conf, I am able to get data in. I am seeing that in Kibana.

That was the 401, the security error. So the user you chose doesn't have the privileges to create the specific indices.

So is there still a connectivity error at some point and a misconfiguration of HTTPS? Or does that resolve this?

Also, which privileges do I need to give the logstash_writer role so it can write like the elastic user? I want to reduce access, as I normally do not want to put elastic, the highest-privileged user, into the pipeline.conf output.

When you changed the user to elastic, that confirmed the connectivity was resolved; the fact that you got data means everything was working. logstash_writer not working is more down to permissions and privileges on which indices it can manage and write to. What roles are assigned to that user?

logstash_writer and logstash_system usually work. Here is my setup using both roles for a user called logstash_internal:
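As a sketch, that user/role combination could be created through the security API like this (the password is a placeholder; the role names follow the Logstash security docs linked earlier):

```console
POST _security/user/logstash_internal
{
  "password" : "CHANGEME",
  "roles" : [ "logstash_writer", "logstash_system" ],
  "full_name" : "Internal Logstash User"
}
```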

Thanks. I did not specify filebeat* in the role's indices; that was the mistake. It is working now with the logstash_writer role. Thank you for your help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.