Winlogbeat unable to communicate with logstash

doudou@elkserver101:~$ sudo /usr/share/logstash/bin/logstash -f /home/doudou/elk/pipeline.conf
Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2022-12-22 18:04:48.169 [main] runner - NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[INFO ] 2022-12-22 18:04:48.218 [main] runner - Starting Logstash {"logstash.version"=>"8.5.3", "jruby.version"=>"jruby 9.3.9.0 (2.6.8) 2022-10-24 537cd1f8bc OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-linux]"}
[INFO ] 2022-12-22 18:04:48.228 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[WARN ] 2022-12-22 18:04:49.214 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2022-12-22 18:04:54.652 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[ERROR] 2022-12-22 18:04:55.713 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [0-9], [ \\t\\r\\n], \"#\", \"}\" at line 9, column 14 (byte 135) after input { \n\tfile {\n\t\t path => \"/var/log/apache2/access.log\"\n\t\t #start_position => \"beginning\"\n\t}\n\t\n\tbeats {\n\t\tport => 5044\n\t\thost => 0.0", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:182:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:911:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
[INFO ] 2022-12-22 18:04:55.948 [LogStash::Runner] runner - Logstash shut down.
doudou@elkserver101:~$



The pipeline.conf
#################


input {
        file {
                 path => "/var/log/apache2/access.log"
                 #start_position => "beginning"
        }

        beats {
                port => 5044
                host => 0.0.0.0
        }
      
}

filter {

        grok {
                match => { "message" => "%{HTTPD_COMBINEDLOG}" }
        }
}

output {
        stdout {
                codec => rubydebug
        }

        elasticearch {
                hosts => ["localhost:9200"]
                index => "%{[@metadata][beat]}-%{[@metadata][version]}"
        }
       
        file {
                path => "/home/doudou/elk/output2.txt"
        }
}

New to ELK, I have it installed on Ubuntu 20.04. ELK version 8.5.0.
I was able to get Kibana running with sample data fine.
I was able to ship logs from the Windows event log directly to Elasticsearch as well, and visualize them in Kibana. Excited by that, I shut down the lab and built another one to ship logs to Logstash instead of Elasticsearch, and this is when the issues started. Logstash won't even start, as you can see from the error and its config file above.

Winlogbeat also fails to connect to Logstash. See the error below, along with the Winlogbeat config file.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

  # Pipeline to route events to security, sysmon, or powershell pipelines.
  pipeline: "winlogbeat-%{[agent.version]}-routing"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.130:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Winlogbeat error:

PS C:\Program Files\winlogbeat> .\winlogbeat.exe test output
logstash: 192.168.0.130:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 192.168.0.130
    dial up... ERROR dial tcp 192.168.0.130:5044: connectex: No connection could be made because the target machine actively refused it.

I have opened ports 9200, 5601 and 5044, among others.
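
In case it helps, a quick way to double-check what is actually listening on the ELK server (plain ss, nothing ELK-specific) would be something like:

sudo ss -tlnp | grep -E '9200|5601|5044'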

Any idea where to look in order to resolve this issue?
Bear with me, I started using ELK 2 weeks ago.

Hello and welcome,

Please do not share screenshots of text; share the text itself using the preformatted button, the </> button.

Sometimes it is pretty hard to read the images, and some people may not even be able to see them.

Just copy the logs and configurations and share them as text.

Thanks @leandrojmp for the quick response. I have updated my original post.

The only way to have Logstash running as of now is to get rid of the beats input plugin and the elasticsearch output plugin.
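
To rule out pure syntax problems without actually starting the pipeline, I believe Logstash can just validate the file, along the lines of:

sudo /usr/share/logstash/bin/logstash -f /home/doudou/elk/pipeline.conf --config.test_and_exit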

Add double quotes around 0.0.0.0
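
i.e. something like

	beats {
		port => 5044
		host => "0.0.0.0"
	}

The config parser otherwise reads the bare 0.0.0.0 as a number and stops at the second dot, which is what the "Expected one of [0-9] ..." error is complaining about.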

Thanks @Badger for jumping in. I have enclosed it in double quotes, but I am getting another error. I figured I needed to start with a clean install of ELK, but I am still getting the error below. It seems that Logstash can't properly talk to Elasticsearch, judging from the following:

[WARN ] 2022-12-22 22:16:33.955 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2022-12-22 22:16:37.566 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2022-12-22 22:16:39.027 [Converge PipelineAction::Create<main>] Reflections - Reflections took 329 ms to scan 1 urls, producing 125 keys and 438 values
[INFO ] 2022-12-22 22:16:41.267 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2022-12-22 22:16:41.501 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2022-12-22 22:16:42.106 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[INFO ] 2022-12-22 22:16:42.482 [[main]-pipeline-manager] elasticsearch - Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[WARN ] 2022-12-22 22:16:42.523 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[INFO ] 2022-12-22 22:16:42.662 [[main]-pipeline-manager] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[WARN ] 2022-12-22 22:16:42.665 [[main]-pipeline-manager] elasticsearch - Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[INFO ] 2022-12-22 22:16:42.864 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/home/doudou/elk/pipeline.conf"], :thread=>"#<Thread:0x14cf96f run>"}
[INFO ] 2022-12-22 22:16:44.036 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.17}
[INFO ] 2022-12-22 22:16:44.120 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2022-12-22 22:16:44.184 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2022-12-22 22:16:44.444 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2022-12-22 22:16:44.556 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2022-12-22 22:16:47.622 [Ruby-0-Thread-9: :1] elasticsearch - Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[WARN ] 2022-12-22 22:16:47.624 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[INFO ] 2022-12-22 22:16:52.635 [Ruby-0-Thread-9: :1] elasticsearch - Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[WARN ] 2022-12-22 22:16:52.636 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[INFO ] 2022-12-22 22:16:57.644 [Ruby-0-Thread-9: :1] elasticsearch - Failed to perform request {:message=>"localhost:9200 failed to respond", :exception=>Manticore::ClientProtocolException, :cause=>#<Java::OrgApacheHttp::NoHttpResponseException: localhost:9200 failed to respond>}
[WARN ] 2022-12-22 22:16:57.646 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://localhost:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
^C[WARN ] 2022-12-22 22:16:59.186 [SIGINT handler] runner - SIGINT received. Shutting down.
^C[FATAL] 2022-12-22 22:17:00.209 [SIGINT handler] runner - SIGINT received. Terminating immediately..
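
A quick sanity check for whether Elasticsearch is actually answering on 9200, and whether it only accepts HTTPS (which would explain a plain-HTTP client seeing "failed to respond"), would be something like:

curl http://localhost:9200
curl -k -u elastic https://localhost:9200

(-k and the elastic user are just for testing here; adjust to whatever security setup the install generated.)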

The current Logstash config file only has an input and an output, as follows:

input {
	beats {
		port => 5044
		host => "0.0.0.0"
	}
}

output {

	elasticsearch { 
		hosts => ["localhost:9200"]
		manage_template => false
		index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
	}

	stdout {
		codec => rubydebug
	}
}

Thank you for your time.

When I comment out the elasticsearch output (and only leave the stdout), Logstash starts fine with the config file, as you can see here:

doudou@elk-01:~$ sudo /usr/share/logstash/bin/logstash -f /home/doudou/elk/pipeline.conf
[sudo] password for doudou:
Using bundled JDK: /usr/share/logstash/jdk
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2022-12-23 11:13:30.159 [main] runner - NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[INFO ] 2022-12-23 11:13:30.262 [main] runner - Starting Logstash {"logstash.version"=>"8.5.3", "jruby.version"=>"jruby 9.3.9.0 (2.6.8) 2022-10-24 537cd1f8bc OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-linux]"}
[INFO ] 2022-12-23 11:13:30.280 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[WARN ] 2022-12-23 11:13:32.442 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2022-12-23 11:13:40.563 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[INFO ] 2022-12-23 11:13:43.854 [Converge PipelineAction::Create<main>] Reflections - Reflections took 795 ms to scan 1 urls, producing 125 keys and 438 values
[INFO ] 2022-12-23 11:13:48.743 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2022-12-23 11:13:49.646 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/home/doudou/elk/pipeline.conf"], :thread=>"#<Thread:0x40ce50f run>"}
[INFO ] 2022-12-23 11:13:53.460 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>3.8}
[INFO ] 2022-12-23 11:13:53.630 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2022-12-23 11:13:53.741 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2022-12-23 11:13:54.271 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2022-12-23 11:13:54.822 [[main]<beats] Server - Starting server on port: 5044

However, the Winlogbeat service on the client machine can't start and stops automatically.

winlogbeat.exe test config ==> Config OK
winlogbeat.exe test output ==> Ok on everything

However, while running winlogbeat.exe test output with the ELK console open, I see the following message:


[INFO ] 2022-12-23 11:24:45.864 [defaultEventExecutorGroup-4-2] BeatsHandler - [local: 192.168.0.130:5044, remote: 192.168.0.65:49763] Handling exception: java.net.SocketException: Connection reset (caused by: java.net.SocketException: Connection reset)
[WARN ] 2022-12-23 11:24:45.865 [nioEventLoopGroup-2-3] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.net.SocketException: Connection reset
        at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
        at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]
[INFO ] 2022-12-23 11:24:45.856 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 192.168.0.130:5044, remote: 192.168.0.65:49762] Handling exception: java.net.SocketException: Connection reset (caused by: java.net.SocketException: Connection reset)
[WARN ] 2022-12-23 11:24:45.879 [nioEventLoopGroup-2-2] DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.net.SocketException: Connection reset
        at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
        at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]


I am just trying to be exhaustive so you can better understand where I am at.

Thanks.

And here is my Winlogbeat error log:

{"log.level":"info","@timestamp":"2022-12-26T13:54:46.770-0500","log.origin":{"file.name":"instance/beat.go","file.line":708},"message":"Home path: [C:\\Program Files\\winlogbeat] Config path: [C:\\Program Files\\winlogbeat] Data path: [C:\\Program Files\\winlogbeat\\data] Logs path: [C:\\Program Files\\winlogbeat\\logs]","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-12-26T13:54:46.812-0500","log.origin":{"file.name":"instance/beat.go","file.line":716},"message":"Beat ID: 5fd9c499-6c20-4b6b-a7ff-9716b7108bd7","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2022-12-26T13:54:49.834-0500","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":81},"message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.834-0500","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1082},"message":"Beat info","service.name":"winlogbeat","system_info":{"beat":{"path":{"config":"C:\\Program Files\\winlogbeat","data":"C:\\Program Files\\winlogbeat\\data","home":"C:\\Program Files\\winlogbeat","logs":"C:\\Program Files\\winlogbeat\\logs"},"type":"winlogbeat","uuid":"5fd9c499-6c20-4b6b-a7ff-9716b7108bd7"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.834-0500","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1091},"message":"Build info","service.name":"winlogbeat","system_info":{"build":{"commit":"6d03209df870c63ef9d59d609268c11dfdc835dd","libbeat":"8.5.3","time":"2022-12-04T04:43:16.000Z","version":"8.5.3"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.834-0500","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1094},"message":"Go runtime info","service.name":"winlogbeat","system_info":{"go":{"os":"windows","arch":"amd64","max_procs":2,"version":"go1.18.7"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.840-0500","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1098},"message":"Host info","service.name":"winlogbeat","system_info":{"host":{"architecture":"x86_64","boot_time":"2022-12-26T13:50:10-05:00","name":"IIS-01","ip":["fe80::5ec:63a0:ecf3:4429/64","192.168.0.160/24","fe80::d134:9697:2dbb:88e0/64","169.254.136.224/16","::1/128","127.0.0.1/8"],"kernel_version":"10.0.17134.1246 (WinBuild.160101.0800)","mac":["00:0c:29:a7:a5:36","f8:34:41:26:5e:7f"],"os":{"type":"windows","family":"windows","platform":"windows","name":"Windows 10 Pro","version":"10.0","major":10,"minor":0,"patch":0,"build":"17134.1246"},"timezone":"EST","timezone_offset_sec":-18000,"id":"a5617ab5-276a-4f90-86de-9b6f4d66aab8"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.840-0500","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1127},"message":"Process info","service.name":"winlogbeat","system_info":{"process":{"cwd":"C:\\Program Files\\winlogbeat","exe":"C:\\Program Files\\winlogbeat\\winlogbeat.exe","name":"winlogbeat.exe","pid":8940,"ppid":7872,"start_time":"2022-12-26T13:54:45.075-0500"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2022-12-26T13:54:49.840-0500","log.origin":{"file.name":"instance/beat.go","file.line":294},"message":"Setup Beat: winlogbeat; Version: 8.5.3","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-12-26T13:54:51.182-0500","log.logger":"publisher","log.origin":{"file.name":"pipeline/module.go","file.line":113},"message":"Beat name: IIS-01","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-12-26T13:54:51.183-0500","log.logger":"winlogbeat","log.origin":{"file.name":"beater/winlogbeat.go","file.line":69},"message":"State will be read from and persisted to C:\\Program Files\\winlogbeat\\data\\.winlogbeat.yml","service.name":"winlogbeat","ecs.version":"1.6.0"}

I have tried everything I know, knowing I don't know much.
Does anybody know why my Winlogbeat cannot connect to the beats input in Logstash? Logstash outputs the log below; I have read through this forum but found nothing.

[2023-01-03T18:42:40,246][INFO ][org.logstash.beats.BeatsHandler][main][a61c49a63f1c97fc05da854a5b582d5ef9c8b5e816092d6523e3bbb366a75338] [local: 192.168.0.185:5044, remote: 192.168.0.160:49731] Handling exception: java.net.SocketException: Connection reset (caused by: java.net.SocketException: Connection reset)
[2023-01-03T18:42:40,249][WARN ][io.netty.channel.DefaultChannelPipeline][main][a61c49a63f1c97fc05da854a5b582d5ef9c8b5e816092d6523e3bbb366a75338] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.net.SocketException: Connection reset
	at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
	at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at java.lang.Thread.run(Thread.java:833) [?:?]
[2023-01-03T18:42:40,258][INFO ][org.logstash.beats.BeatsHandler][main][a61c49a63f1c97fc05da854a5b582d5ef9c8b5e816092d6523e3bbb366a75338] [local: 192.168.0.185:5044, remote: 192.168.0.160:49730] Handling exception: java.net.SocketException: Connection reset (caused by: java.net.SocketException: Connection reset)
[2023-01-03T18:42:40,261][WARN ][io.netty.channel.DefaultChannelPipeline][main][a61c49a63f1c97fc05da854a5b582d5ef9c8b5e816092d6523e3bbb366a75338] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.net.SocketException: Connection reset
	at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:394) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:426) ~[?:?]
	at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:350) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
	at java.lang.Thread.run(Thread.java:833) [?:?]


There is not much in the logs to help troubleshoot the issue. Your Logstash log is showing a Connection Reset error, which just means that the connection between your Winlogbeat and Logstash was closed by something.

Also, do you have any errors in your Winlogbeat logs? The lines you shared before are not error lines, just info or warn lines; nothing useful for troubleshooting the issue.

Do you have anything else in Winlogbeat logs?

Thanks @leandrojmp for replying.
There's really not much in "C:\Program Files\winlogbeat\logs" except this:

{"log.level":"info","@timestamp":"2023-01-04T09:45:42.160-0500","log.origin":{"file.name":"instance/beat.go","file.line":708},"message":"Home path: [C:\\Program Files\\winlogbeat] Config path: [C:\\Program Files\\winlogbeat] Data path: [C:\\Program Files\\winlogbeat\\data] Logs path: [C:\\Program Files\\winlogbeat\\logs]","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-04T09:45:42.187-0500","log.origin":{"file.name":"instance/beat.go","file.line":716},"message":"Beat ID: 5fd9c499-6c20-4b6b-a7ff-9716b7108bd7","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-01-04T09:45:45.227-0500","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":81},"message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.","service.name":"winlogbeat","ecs.version":"1.6.0"}

Event log also only reports that winlogbeat has stopped.

I can also telnet to 5044 successfully.

I installed Java as well.

If the Windows Event Log reports that Winlogbeat has stopped, then you need to investigate why the service was stopped; look in the Event Logs around the time when Winlogbeat stopped.

This aligns with the Connection Reset log you have in Logstash; it looks like something on your system is killing your Winlogbeat process, or it is not working right.
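
Something like this, run on the Windows client shortly after the service dies, should pull the surrounding entries from the Application and System logs (generic PowerShell; adjust the time window as needed):

Get-WinEvent -FilterHashtable @{ LogName = 'Application', 'System'; StartTime = (Get-Date).AddHours(-1) } |
    Where-Object { $_.Message -match 'winlogbeat' } |
    Format-List TimeCreated, ProviderName, Message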

I have installed Wireshark on the client to monitor the exchanges with the ELK server. I see the following:

No       Time           Source          Destination     Protocol        Length  Info
744	15.445804	192.168.0.185	192.168.0.160	HTTP/JSON	660	HTTP/1.1 401 Unauthorized , JavaScript Object Notation (application/json)
745	15.464590	192.168.0.160	192.168.0.185	TCP	54	49706 → 5601 [RST, ACK] Seq=277 Ack=607 Win=0 Len=0

The connection shuts down right after the "HTTP/1.1 401 Unauthorized , JavaScript Object Notation (application/json)" response.

Any idea?

I should also mention that SSL is being used between logstash and elasticsearch, if that makes a difference:

input {
        beats {
                port => 5044
        }
}

output {
        stdout {
                codec => rubydebug
        }
        
        elasticsearch {
                hosts => ["https://192.168.0.185:9200"]
                index => "winevent-%{+yyyy.MM.dd}"
                cacert => '/home/doudou/elk/certs/http_ca.crt'
                user => "elastic"
                password =>"lVJNpEo2TpfUt*AiOux+"
                ssl => true
        }
}

As I said in the previous answer:

If the Windows Event Log reports that Winlogbeat has stopped, then you need to investigate why the service was stopped; look in the Event Logs around the time when Winlogbeat stopped.

You need to check whether your Winlogbeat is running without any issues; if you have an Event Log entry saying that it was stopped, then you need to find the reason.

If your Winlogbeat is running without any issues, you will have a lot more entries in the Winlogbeat log, and any connection between Winlogbeat and Logstash will show up in those logs.

On the Logstash side, the only error you shared is the Connection Reset, which would make sense if your Winlogbeat is being stopped on your server.

Do you have any anti-virus or anything like that?

  • There is no anti-virus on the client.
  • The Ubuntu firewall ufw is inactive (ELK server).
  • Apologies, I realize I didn't mention this: as soon as I start the Winlogbeat service, it stops within 2 seconds. winlogbeat test config and test output are both successful, but the service cannot start. The only way I got it to work was with the elasticsearch output.
  • I have looked at the events surrounding the error in the event log and nothing stands out.

So your Winlogbeat service is not working; this is the main issue.

Check the documentation to see if you missed any steps, but there is not much I can help with without more information.

If a service is not starting, you need to look at the logs of the service and the Windows event logs to try to understand what the issue might be.
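
With Beats on Windows, running the binary in the foreground from the install directory usually surfaces startup errors straight on the console, something like:

cd 'C:\Program Files\winlogbeat'
.\winlogbeat.exe -e -c .\winlogbeat.yml

(-e sends the logs to stderr instead of the log files, so whatever is killing it should be visible immediately.)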

I will revisit this. Thanks @leandrojmp

Looks like we're getting somewhere. There are actually two log locations for Winlogbeat, and the second one gives more information on what's going on. It looks like Winlogbeat cannot retrieve anything from the Kibana API:

{"log.level":"info","@timestamp":"2023-01-04T18:40:44.960-0500","log.logger":"kibana","log.origin":{"file.name":"kibana/client.go","file.line":179},"message":"Kibana url: http://192.168.0.185:5601","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-04T18:40:44.974-0500","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":194},"message":"Total metrics","service.name":"winlogbeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":62,"time":{"ms":62}},"total":{"ticks":77,"time":{"ms":77},"value":77},"user":{"ticks":15,"time":{"ms":15}}},"info":{"ephemeral_id":"11312a4e-b57b-488e-b736-2c3442531864","name":"winlogbeat","uptime":{"ms":7454},"version":"8.5.3"},"memstats":{"gc_next":9823816,"memory_alloc":7647840,"memory_sys":18508392,"memory_total":14267928,"rss":37384192},"runtime":{"goroutines":23}},"libbeat":{"config":{"module":{"running":0,"starts":0,"stops":0},"reloads":0,"scans":0},"output":{"events":{"acked":0,"active":0,"batches":0,"dropped":0,"duplicates":0,"failed":0,"toomany":0,"total":0},"read":{"bytes":0,"errors":0},"type":"logstash","write":{"bytes":0,"errors":0}},"pipeline":{"clients":0,"events":{"active":0,"dropped":0,"failed":0,"filtered":0,"published":0,"retry":0,"total":0},"queue":{"acked":0,"max_events":4096}}},"system":{"cpu":{"cores":2},"handles":{"open":211}}},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-01-04T18:40:44.974-0500","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":195},"message":"Uptime: 7.4551708s","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-04T18:40:44.974-0500","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":162},"message":"Stopping metrics logging.","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-01-04T18:40:44.974-0500","log.origin":{"file.name":"instance/beat.go","file.line":468},"message":"winlogbeat stopped.","service.name":"winlogbeat","ecs.version":"1.6.0"}


{"log.level":"error","@timestamp":"2023-01-04T18:40:44.976-0500","log.origin":{"file.name":"instance/beat.go","file.line":1057},"message":"Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://192.168.0.185:5601/api/status fails: Unauthorized: %!w(<nil>). Response: {\"statusCode\":401,\"error\":\"Unauthorized\",\"message\":\"Unauthorized\"}","service.name":"winlogbeat","ecs.version":"1.6.0"}

Then I tried the API from the browser (http://192.168.0.185:5601/api/status), and it gives the following:

{"name":"elk-001","uuid":"d08531fe-4557-461c-8f21-6cbcaf5e1ab8","version":{"number":"8.5.3","build_hash":"93852c98d9e9902fe166302fae10bc8c5f3502fb","build_number":57217,"build_snapshot":false},"status":{"overall":{"level":"available","summary":"All services are available"},"core":{"elasticsearch":{"level":"available","summary":"Elasticsearch is available","meta":{"warningNodes":[],"incompatibleNodes":[]}},"savedObjects":{"level":"available","summary":"SavedObjects service has completed migrations and is available","meta":{"migratedIndices":{"migrated":0,"skipped":0,"patched":2}}}},"plugins":{"licensing":{"level":"available","summary":"License fetched"},"banners":{"level":"available","summary":"All dependencies are available"},"features":{"level":"available","summary":"All dependencies are available"},"globalSearch":{"level":"available","summary":"All dependencies are available"},"mapsEms":{"level":"available","summary":"All dependencies are available"},"globalSearchProviders":{"level":"available","summary":"All dependencies are available"},"grokdebugger":{"level":"available","summary":"All dependencies are available"},"painlessLab":{"level":"available","summary":"All dependencies are available"},"searchprofiler":{"level":"available","summary":"All dependencies are available"},"uiActionsEnhanced":{"level":"available","summary":"All dependencies are available"},"embeddableEnhanced":{"level":"available","summary":"All dependencies are available"},"spaces":{"level":"available","summary":"All dependencies are available"},"urlDrilldown":{"level":"available","summary":"All dependencies are available"},"eventLog":{"level":"available","summary":"All dependencies are available"},"security":{"level":"available","summary":"All dependencies are available"},"cloud":{"level":"available","summary":"All dependencies are available"},"data":{"level":"available","summary":"All dependencies are available"},"encryptedSavedObjects":{"level":"available","summary":"All dependencies are available"},"files":{"level":"available","summary":"All dependencies are available"},"lists":{"level":"available","summary":"All dependencies are available"},"telemetry":{"level":"available","summary":"All dependencies are available"},"actions":{"level":"available","summary":"All dependencies are available"},"aiops":{"level":"available","summary":"All dependencies are available"},"dataViewEditor":{"level":"available","summary":"All dependencies are available"},"dataViewFieldEditor":{"level":"available","summary":"All dependencies are available"},"eventAnnotation":{"level":"available","summary":"All dependencies are available"},"fileUpload":{"level":"available","summary":"All dependencies are available"},"licenseManagement":{"level":"available","summary":"All dependencies are available"},"savedObjects":{"level":"available","summary":"All dependencies are available"},"savedSearch":{"level":"available","summary":"All dependencies are available"},"screenshotting":{"level":"available","summary":"All dependencies are available"},"snapshotRestore":{"level":"available","summary":"All dependencies are available"},"telemetryManagementSection":{"level":"available","summary":"All dependencies are available"},"unifiedFieldList":{"level":"available","summary":"All dependencies are available"},"unifiedSearch":{"level":"available","summary":"All dependencies are available"},"watcher":{"level":"available","summary":"All dependencies are available"},"ingestPipelines":{"level":"available","summary":"All dependencies are 
available"},"navigation":{"level":"available","summary":"All dependencies are available"},"presentationUtil":{"level":"available","summary":"All dependencies are available"},"reporting":{"level":"available","summary":"All dependencies are available"},"savedObjectsTaggingOss":{"level":"available","summary":"All dependencies are available"},"stackConnectors":{"level":"available","summary":"All dependencies are available"},"controls":{"level":"available","summary":"All dependencies are available"},"expressionError":{"level":"available","summary":"All dependencies are available"},"expressionImage":{"level":"available","summary":"All dependencies are available"},"expressionMetric":{"level":"available","summary":"All dependencies are available"},"expressionRepeatImage":{"level":"available","summary":"All dependencies are available"},"expressionRevealImage":{"level":"available","summary":"All dependencies are available"},"expressionShape":{"level":"available","summary":"All dependencies are available"},"graph":{"level":"available","summary":"All dependencies are available"},"kibanaOverview":{"level":"available","summary":"All dependencies are available"},"savedObjectsManagement":{"level":"available","summary":"All dependencies are available"},"savedObjectsTagging":{"level":"available","summary":"All dependencies are available"},"triggersActionsUi":{"level":"available","summary":"All dependencies are available"},"visualizations":{"level":"available","summary":"All dependencies are available"},"canvas":{"level":"available","summary":"All dependencies are available"},"dashboard":{"level":"available","summary":"All dependencies are available"},"dataViewManagement":{"level":"available","summary":"All dependencies are available"},"discover":{"level":"available","summary":"All dependencies are available"},"expressionGauge":{"level":"available","summary":"All dependencies are available"},"expressionHeatmap":{"level":"available","summary":"All dependencies are available"},"expressionLegacyMetricVis":{"level":"available","summary":"All dependencies are available"},"expressionMetricVis":{"level":"available","summary":"All dependencies are available"},"expressionPartitionVis":{"level":"available","summary":"All dependencies are available"},"expressionTagcloud":{"level":"available","summary":"All dependencies are available"},"expressionXY":{"level":"available","summary":"All dependencies are available"},"globalSearchBar":{"level":"available","summary":"All dependencies are available"},"ruleRegistry":{"level":"available","summary":"All dependencies are available"},"stackAlerts":{"level":"available","summary":"All dependencies are available"},"threatIntelligence":{"level":"available","summary":"All dependencies are available"},"transform":{"level":"available","summary":"All dependencies are available"},"visDefaultEditor":{"level":"available","summary":"All dependencies are available"},"visTypeHeatmap":{"level":"available","summary":"All dependencies are available"},"visTypeMarkdown":{"level":"available","summary":"All dependencies are available"},"visTypeMetric":{"level":"available","summary":"All dependencies are available"},"visTypeTable":{"level":"available","summary":"All dependencies are available"},"visTypeTagcloud":{"level":"available","summary":"All dependencies are available"},"visTypeTimelion":{"level":"available","summary":"All dependencies are available"},"visTypeTimeseries":{"level":"available","summary":"All dependencies are available"},"visTypeVega":{"level":"available","summary":"All 
dependencies are available"},"visTypeVislib":{"level":"available","summary":"All dependencies are available"},"visTypeXy":{"level":"available","summary":"All dependencies are available"},"dashboardEnhanced":{"level":"available","summary":"All dependencies are available"},"discoverEnhanced":{"level":"available","summary":"All dependencies are available"},"inputControlVis":{"level":"available","summary":"All dependencies are available"},"lens":{"level":"available","summary":"All dependencies are available"},"visTypeGauge":{"level":"available","summary":"All dependencies are available"},"visTypePie":{"level":"available","summary":"All dependencies are available"},"cases":{"level":"available","summary":"All dependencies are available"},"cloudSecurityPosture":{"level":"available","summary":"All dependencies are available"},"indexManagement":{"level":"available","summary":"All dependencies are available"},"maps":{"level":"available","summary":"All dependencies are available"},"dataVisualizer":{"level":"available","summary":"All dependencies are available"},"indexLifecycleManagement":{"level":"available","summary":"All dependencies are available"},"osquery":{"level":"available","summary":"All dependencies are available"},"remoteClusters":{"level":"available","summary":"All dependencies are available"},"rollup":{"level":"available","summary":"All dependencies are available"},"timelines":{"level":"available","summary":"All dependencies are available"},"crossClusterReplication":{"level":"available","summary":"All dependencies are available"},"ml":{"level":"available","summary":"All dependencies are available"},"observability":{"level":"available","summary":"All dependencies are available"},"sessionView":{"level":"available","summary":"All dependencies are available"},"infra":{"level":"available","summary":"All dependencies are available"},"kubernetesSecurity":{"level":"available","summary":"All dependencies are available"},"synthetics":{"level":"available","summary":"All dependencies are available"},"apm":{"level":"available","summary":"All dependencies are available"},"enterpriseSearch":{"level":"available","summary":"All dependencies are available"},"monitoring":{"level":"available","summary":"All dependencies are available"},"securitySolution":{"level":"available","summary":"All dependencies are available"},"upgradeAssistant":{"level":"available","summary":"All dependencies are available"},"logstash":{"level":"available","summary":"All dependencies are available"},"ux":{"level":"available","summary":"All dependencies are available"},"alerting":{"level":"available","summary":"Alerting is (probably) ready"},"fleet":{"level":"available","summary":"Fleet is available"},"bfetch":{"level":"available","summary":"All dependencies are available"},"customIntegrations":{"level":"available","summary":"All dependencies are available"},"esUiShared":{"level":"available","summary":"All dependencies are available"},"expressions":{"level":"available","summary":"All dependencies are available"},"fieldFormats":{"level":"available","summary":"All dependencies are available"},"guidedOnboarding":{"level":"available","summary":"All dependencies are available"},"kibanaReact":{"level":"available","summary":"All dependencies are available"},"kibanaUtils":{"level":"available","summary":"All dependencies are available"},"savedObjectsFinder":{"level":"available","summary":"All dependencies are available"},"screenshotMode":{"level":"available","summary":"All dependencies are 
available"},"share":{"level":"available","summary":"All dependencies are available"},"urlForwarding":{"level":"available","summary":"All dependencies are available"},"usageCollection":{"level":"available","summary":"All dependencies are available"},"licenseApiGuard":{"level":"available","summary":"All dependencies are available"},"monitoringCollection":{"level":"available","summary":"All dependencies are available"},"runtimeFields":{"level":"available","summary":"All dependencies are available"},"translations":{"level":"available","summary":"All dependencies are available"},"charts":{"level":"available","summary":"All dependencies are available"},"dataViews":{"level":"available","summary":"All dependencies are available"},"devTools":{"level":"available","summary":"All dependencies are available"},"inspector":{"level":"available","summary":"All dependencies are available"},"kibanaUsageCollection":{"level":"available","summary":"All dependencies are available"},"newsfeed":{"level":"available","summary":"All dependencies are available"},"telemetryCollectionManager":{"level":"available","summary":"All dependencies are available"},"home":{"level":"available","summary":"All dependencies are available"},"telemetryCollectionXpack":{"level":"available","summary":"All dependencies are available"},"uiActions":{"level":"available","summary":"All dependencies are available"},"console":{"level":"available","summary":"All dependencies are available"},"embeddable":{"level":"available","summary":"All dependencies are available"},"management":{"level":"available","summary":"All dependencies are available"},"advancedSettings":{"level":"available","summary":"All dependencies are available"},"taskManager":{"level":"available","summary":"All dependencies are available"}}},"metrics":{"last_updated":"2023-01-04T23:53:51.210Z","collection_interval_in_millis":5000,"os":{"platform":"linux","platformRelease":"linux-5.19.0-26-generic","load":{"1m":0.15,"5m":0.31,"15m":0.44},"memory":{"total_in_bytes":6250381312,"free_in_bytes":307503104,"used_in_bytes":5942878208},"uptime_in_millis":2136120,"distro":"Ubuntu","distroRelease":"Ubuntu-22.10"},"process":{"memory":{"heap":{"total_in_bytes":318672896,"used_in_bytes":250236696,"size_limit":2197815296},"resident_set_size_in_bytes":400228352},"pid":3673,"event_loop_delay":10.603045038297873,"event_loop_delay_histogram":{"min":9.281536,"max":25.591807,"mean":10.603045038297873,"exceeds":0,"stddev":1.1404033078346067,"fromTimestamp":"2023-01-04T23:53:46.204Z","lastUpdatedAt":"2023-01-04T23:53:51.200Z","percentiles":{"50":10.452991,"75":10.706943,"95":11.034623,"99":12.279807}},"uptime_in_millis":1053566.788182},"processes":[{"memory":{"heap":{"total_in_bytes":318672896,"used_in_bytes":250236696,"size_limit":2197815296},"resident_set_size_in_bytes":400228352},"pid":3673,"event_loop_delay":10.603045038297873,"event_loop_delay_histogram":{"min":9.281536,"max":25.591807,"mean":10.603045038297873,"exceeds":0,"stddev":1.1404033078346067,"fromTimestamp":"2023-01-04T23:53:46.204Z","lastUpdatedAt":"2023-01-04T23:53:51.200Z","percentiles":{"50":10.452991,"75":10.706943,"95":11.034623,"99":12.279807}},"uptime_in_millis":1053566.788182}],"response_times":{"avg_in_millis":0,"max_in_millis":0},"concurrent_connections":1,"requests":{"disconnects":0,"total":0,"statusCodes":{},"status_codes":{}}}}

Here's the kibana.yml
(I had server.host: 192.168.0.185, then changed it to "0.0.0.0" with the same result.)

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000


# This section was automatically generated during setup.
elasticsearch.hosts: ['https://192.168.0.185:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2NzI3ODMwMDMyODc6bDNlSVdRWUZSeUdPd3cwakZkRE9idw
elasticsearch.ssl.certificateAuthorities: [/home/doudou/Downloads/kibana-8.5.3/data/ca_1672783004733.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://192.168.0.185:9200'], ca_trusted_fingerprint: d3609f34f2718240c65ef52183efb58e39377e5b4438fcbd7bc69a0834e1872a}]