Logstash is blocked by elasticsearch-setup-passwords

This is a very bad procedure. I used this command to create the password for Kibana:

elasticsearch-setup-passwords

but Logstash does not start.

[screenshot of the systemd service status]

Can you provide more context here?

Logstash has nothing to do with elasticsearch-setup-passwords, and from the screenshot you shared you are shutting down your server.

Depending on the configuration or the number of in-flight events, Logstash can take a long time to shut down.

Now I have booted the Linux machine in safe mode and I see this:

Oct 20 17:01:00 elkserver logstash[1693]: [2022-10-20T17:01:00,759][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/

My config is:

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        user => "elastic"
        password => "password"
        index => "fsbroker"
    }

    stdout { codec => rubydebug }
}

And in logstash.yml, is this necessary?

xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: password

I've tested with curl and it is OK:

curl -u logstash_system http://localhost:9200

Complete log:

● logstash.service - logstash
     Loaded: loaded (/lib/systemd/system/logstash.service; enabled; vendor preset: enabled)
     Active: deactivating (stop-sigterm) since Thu 2022-10-20 16:59:19 UTC; 18min ago
   Main PID: 1693 (java)
      Tasks: 38 (limit: 9286)
     Memory: 842.3M
        CPU: 1min 14.199s
     CGroup: /system.slice/logstash.service
             └─1693 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Dlog4j2.isThreadContextMapInheritable=true -Djruby.regexp.interruptible=true -Djdk.io.File.enableADS=true --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-opens=java.base/java.security=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio.channels=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/sun.management=ALL-UNNAMED -cp /usr/share/logstash/vendor/jruby/lib/jruby.jar:/usr/share/logstash/logstash-core/lib/jars/checker-qual-3.12.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.15.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/commons-logging-1.2.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.11.0.jar:/usr/share/logstash/logstash-core/lib/jars/failureaccess-1.0.1.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.15.0.jar:/usr/share/logstash/logstash-core/lib/jars/guava-31.1-jre.jar:/usr/share/logstash/logstash-core/lib/jars/httpclient-4.5.13.jar:/usr/share/logstash/logstash-core/lib/jars/httpcore-4.4.14.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.3.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.13.3.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.13.3.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.13.3.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.13.3.ja
r:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-yaml-2.13.3.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.1.0.jar:/usr/share/logstash/logstash-core/lib/jars/javassist-3.29.0-GA.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-3.0.2.jar:/usr/share/logstash/logstash-core/lib/jars/jvm-options-parser-8.4.3.jar:/usr/share/logstash/logstash-core/lib/jars/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-1.2-api-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-jcl-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.17.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/reflections-0.10.2.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.32.jar:/usr/share/logstash/logstash-core/lib/jars/snakeyaml-1.30.jar org.logstash.Logstash --path.settings /etc/logstash

Oct 20 17:16:57 elkserver logstash[1693]: [2022-10-20T17:16:57,526][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Oct 20 17:16:57 elkserver logstash[1693]: [2022-10-20T17:16:57,722][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[terza]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}, {"thread_id"=>31, "name"=>"[terza]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}]}}
Oct 20 17:17:02 elkserver logstash[1693]: [2022-10-20T17:17:02,534][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Oct 20 17:17:02 elkserver logstash[1693]: [2022-10-20T17:17:02,758][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[terza]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}, {"thread_id"=>31, "name"=>"[terza]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}]}}
Oct 20 17:17:07 elkserver logstash[1693]: [2022-10-20T17:17:07,543][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Oct 20 17:17:07 elkserver logstash[1693]: [2022-10-20T17:17:07,782][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[terza]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}, {"thread_id"=>31, "name"=>"[terza]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}]}}
Oct 20 17:17:12 elkserver logstash[1693]: [2022-10-20T17:17:12,549][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Oct 20 17:17:12 elkserver logstash[1693]: [2022-10-20T17:17:12,804][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[terza]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}, {"thread_id"=>31, "name"=>"[terza]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}]}}
Oct 20 17:17:17 elkserver logstash[1693]: [2022-10-20T17:17:17,567][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
Oct 20 17:17:17 elkserver logstash[1693]: [2022-10-20T17:17:17,828][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[terza]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}, {"thread_id"=>31, "name"=>"[terza]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.6.0/gems/stud-0.0.23/lib/stud/interval.rb:95:in `sleep'"}]}}

You are getting a 401, which is Unauthorized. This means that the user or password you are using in your output is wrong; you need to double-check it.
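One quick way to check, from the shell, the exact credentials the output block uses is to probe the cluster root with curl and look at the status code. This is only a sketch: `interpret_status` is a hypothetical helper, it assumes Elasticsearch on localhost:9200, and `password` is a placeholder for your real value.

```shell
# Hypothetical helper: map the HTTP status from a credentials probe to a verdict.
interpret_status() {
  case "$1" in
    200) echo "credentials OK" ;;
    401) echo "unauthorized: wrong user or password" ;;
    *)   echo "unexpected status $1" ;;
  esac
}

# Probe with the same user/password as the elasticsearch output block
# (uncomment against a live cluster; 'password' is a placeholder):
# status=$(curl -s -o /dev/null -w '%{http_code}' -u elastic:password http://localhost:9200/)
# interpret_status "$status"
```

A 200 here means the pair would also work in the Logstash output; a 401 means the password in the pipeline config is the thing to fix.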

Oct 20 17:17:17 elkserver logstash[1693]: [2022-10-20T17:17:17,567][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}

Are you monitoring Logstash? If you are not, you can remove those settings. Also, if I'm not wrong, self-monitoring like this is deprecated; if you want to monitor your stack you should use Metricbeat.
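If you go that route, a minimal sketch of the relevant logstash.yml lines would be something like the following (assuming you are not relying on the legacy internal monitoring at all):

```yaml
# logstash.yml: turn legacy self-monitoring off; use Metricbeat instead
xpack.monitoring.enabled: false
# and remove the xpack.monitoring.elasticsearch.username /
# xpack.monitoring.elasticsearch.password lines entirely
```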

OK, but is there another file for setting a user and password?

This section of logstash.yml?

# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: true
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: niculine

Your error is caused by the username or password set in your Logstash output. You need to make sure that you are using the correct username and password in the output; the users and passwords themselves are set in Elasticsearch.

I use pipelines.yml:

- pipeline.id: terza
  path.config: "/logstash_dir/terza.conf"
  queue.type: persisted

*terza.conf*


input {
    beats {
        port => 5044
        include_codec_tag => false
        ssl => true
        ssl_certificate => "/etc/ssl/certs/logstash-forwarder.crt"
        ssl_key => "/etc/ssl/private/logstash-forwarder.key"
    }
}



output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        user => "elastic"
        password => "password"
        index => "fsbroker"
    }

    stdout { codec => rubydebug }
}

Yeah, but the error you got is an Unauthorized error, which means that your user or password is not correct. You need to check that; it has nothing to do with the pipeline, it is an authentication error.
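As an aside, the pipeline config does not have to carry the password in plain text in terza.conf: Logstash resolves `${VAR}` references in the config from its keystore or from the environment. A rough sketch of the environment-variable route follows; `ES_PWD` is an illustrative name and `niculine` a placeholder value, and the `resolve` function is only a local stand-in to show how the substitution behaves.

```shell
# Export the password in the environment Logstash starts with
# (for the systemd unit you would need an override with Environment=ES_PWD=...):
export ES_PWD='niculine'   # placeholder value

# Logstash expands "${ES_PWD}" in the pipeline config at load time, e.g.:
#   password => "${ES_PWD}"
# A tiny local stand-in for that substitution, purely to illustrate:
resolve() { eval "echo \"$1\""; }
resolve 'user=elastic pwd=${ES_PWD}'
```

The more secure variant is the Logstash keystore (`bin/logstash-keystore create` and `bin/logstash-keystore add ES_PWD`), which the same `${ES_PWD}` reference in the config picks up without the value ever appearing in the environment.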

Even if I remove the pipelines I always get the same error, but I don't understand where to put the user and password for Logstash, as I have already done for Kibana.

In Kibana the connection is OK:

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
#server.host : "localhost"
elasticsearch.username: "kibana_system"
elasticsearch.password: "niculine"

The password is correct:


curl -u logstash_system http://localhost:9200
Enter host password for user 'logstash_system':
{
  "name" : "elkserver",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "qXAcYsvlTomODPG5A0r1Fg",
  "version" : {
    "number" : "8.4.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "42f05b9372a9a4a470db3b52817899b99a76ee73",
    "build_date" : "2022-10-04T07:17:24.662462378Z",
    "build_snapshot" : false,
    "lucene_version" : "9.3.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
logstash_system is not the user you are using in your output.

In your output you have this:

elasticsearch {
    hosts => ["localhost:9200"]
    user => "elastic"
    password => "password"
    index => "fsbroker"
}

So your elasticsearch output is using the elastic user to index the data, and since your log shows a 401 error, the password that you used here is wrong.

You need to double-check this password; the error is pretty clear:

[2022-10-20T17:17:17,567][WARN ][logstash.outputs.elasticsearch][terza] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}

It tells you the source of the error (logstash.outputs.elasticsearch), the pipeline id (terza), and the error message: Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'.

401 is the HTTP response code for Unauthorized:

401 UNAUTHORIZED
The request has not been applied because it lacks valid authentication credentials for the target resource.
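For reference, HTTP Basic authentication, which is what both `curl -u` and the Logstash elasticsearch output send, is simply `user:password` base64-encoded into an Authorization header. You can reproduce exactly what the server is accepting or rejecting (here with the placeholder `password` from the config above):

```shell
# Build the Authorization header value that curl -u elastic:password would send
# ('password' is a placeholder, as in the pipeline config):
token=$(printf 'elastic:password' | base64)
echo "Authorization: Basic $token"
```

Decoding the token from a captured request (`base64 -d`) is a quick way to confirm which user and password actually went over the wire.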

I have used different users and all have the same problem; the Logstash process uses 200% of the CPU.

But in the end I installed Logstash on another machine and now it is functional.

I think Logstash has serious problems when installed on the same machine as Elasticsearch.

Maybe it's better not to use it at all and ship Filebeat => Elasticsearch directly.

Something was wrong in your configuration; a 401 response from Elasticsearch means that the user or password is wrong.

If Logstash cannot talk to Elasticsearch it will keep retrying forever, and this can increase the CPU usage of the Logstash process. In these cases you need to stop Logstash, fix the issue, and start it again.

Elastic recommends running Elasticsearch as the only process on the server; this is documented.

You may be able to run both Logstash and Elasticsearch on the same machine, but whether they will run OK depends entirely on the specs of the server. I would not recommend running both on the same server in production.

It depends on your use case. If you just want to ship logs to Elasticsearch without applying any filter to transform your data, then Beats is the way to go, as it needs far fewer resources.

Even if you need to do some transformation of your data, you may be able to do that with Filebeat or with Elasticsearch ingest pipelines.
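For example, a simple rename that would otherwise need a Logstash filter can live in an Elasticsearch ingest pipeline. A minimal sketch, using hypothetical field names and a made-up pipeline id:

```
PUT _ingest/pipeline/fsbroker-pipeline
{
  "description": "example: rename a field before indexing",
  "processors": [
    { "rename": { "field": "msg", "target_field": "message", "ignore_missing": true } }
  ]
}
```

Filebeat can then reference the pipeline in its elasticsearch output, and no Logstash hop is needed for that transformation.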

Logstash is more of an advanced use case. It is pretty powerful and flexible; there are things that you cannot do with Filebeat or Elasticsearch ingest pipelines but can do with Logstash.

Is Logstash developed in Java?

I could look at the sources and start it from an IDE to understand when it starts to occupy 200% CPU.

Yes, Logstash is developed in Java and Ruby; the source code is available on GitHub.

But as I said, this is not a bug. You had an error that made Logstash retry a connection forever, and it is expected that this increases Logstash's CPU usage; you should stop Logstash and fix the error.

Logstash is more CPU-bound than memory-bound, which means its performance relies more on CPU than on memory.

Are you still having issues? What are the specs of your server?

Hello, now it is working, but I noticed that if I restart the service it stays at 200% for about 1 minute and then returns to normal.

The server is Ubuntu Server 22.04 with 2 vCPUs and 8 GB of RAM.

This is pretty small, but it depends on the use case and may work for you.

Again, this is expected: Logstash is a Java application, and it uses a considerable amount of resources when starting the JVM.

If you run into any other issue you should open another topic, but everything you shared is expected behavior.