Logstash cannot connect to Elasticsearch

Logstash version: 8.12.2. Elasticsearch is the same version. JVM: 21.0.2.

When I launch Logstash, I get this message:
Mar 27 16:42:07 elk.kaztoll.kz logstash[9567]: [2024-03-27T17:42:07,211][WARN ][logstash.outputs.elasticsearch][netflow] Attempted to resurrect connection to dead ES instance, but got an error

I have only one config, via the Filebeat netflow module:
cat /etc/logstash/conf.d/netflow.conf

input {
  udp {
    port => 9995
    codec => netflow {
      versions => [5, 9]
    }
    type => netflow
  }
}

filter {
  if [type] == "netflow" {
    # Add any additional filters you need for processing Netflow data
  }
}

output {
  elasticsearch {
    hosts => ["10.0.125.7:9200"]
    user => "elastic"
    password => "mypassword"
    index => "netflow-%{+YYYY.MM.dd}"
  }
}

cat /etc/logstash/pipelines.yml

- pipeline.id: netflow
  path.config: "/etc/logstash/conf.d/netflow.conf"
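
As an aside, a pipeline file can be parse-checked before (re)starting the service. A sketch assuming the standard package install paths used elsewhere in this thread (the existence check only makes the snippet harmless on other layouts):

```shell
# Parse-check the pipeline without starting it (path assumes an RPM/DEB
# install of Logstash; adjust LS if yours lives elsewhere).
LS=/usr/share/logstash/bin/logstash
if [ -x "$LS" ]; then
  "$LS" --path.settings /etc/logstash \
        -f /etc/logstash/conf.d/netflow.conf --config.test_and_exit
else
  echo "logstash not found at $LS"
fi
```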

cat /etc/filebeat/modules.d/netflow.yml

- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 0.0.0.0
      netflow_port: 9995
      # internal_networks specifies which networks are considered internal or private
      # you can specify either a CIDR block or any of the special named ranges listed
      # at: https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html#condition-network
      internal_networks:
        - private
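
Worth noting: the Filebeat netflow module opens its own UDP listener on `netflow_port`, so it and a Logstash `udp` input must not share a port. Filebeat can also self-check its config and its configured output; a sketch assuming the default package paths (the guard keeps it harmless where Filebeat isn't installed):

```shell
# Validate Filebeat's own config and its connection to the configured output.
if command -v filebeat >/dev/null 2>&1 && [ -f /etc/filebeat/filebeat.yml ]; then
  filebeat test config -c /etc/filebeat/filebeat.yml
  filebeat test output -c /etc/filebeat/filebeat.yml
else
  echo "filebeat not installed here"
fi
```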

and finally, my elk conf
cat /etc/elasticsearch/elasticsearch.yml

# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 10.0.125.7
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#


#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 18-03-2024 04:31:30
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["elk.kaztoll.kz"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

What did I do incorrectly?

Welcome to the community!

elasticsearch {
hosts => ["10.0.125.7:9200"]

You hadn't set HTTPS on the LS side. Try with:

output {
  elasticsearch {
    hosts => ["https://10.0.125.7:9200"]
    ssl_enabled => true
    ssl_certificate_authorities => "/etc/logstash/certs/http_ca.crt"
    ssl_verification_mode => full
    user => "elastic"
    password => "mypassword"
    index => "netflow-%{+YYYY.MM.dd}"
  }
}
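
Once the CA file is in place, the endpoint can also be sanity-checked by hand with the same CA. A sketch using the host, user, and cert path from this thread (the file-existence guard is only so the snippet degrades gracefully where the file doesn't exist):

```shell
# Hit the HTTPS API with the CA that Logstash will trust; a JSON cluster
# banner in the response means TLS and credentials are fine.
CA=/etc/logstash/certs/http_ca.crt
if [ -f "$CA" ]; then
  curl --cacert "$CA" -u elastic https://10.0.125.7:9200 || echo "connection failed"
else
  echo "CA file not found at $CA"
fi
```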

Thank you, Rios.
Unfortunately, I don't have the certs in the Logstash directory. How can I get them?

They should be in /etc/elasticsearch/certs/. Copy them, or use ssl_keystore_path as in elasticsearch.yml.
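
A sketch of that copy step, with permissions the Logstash service user can live with. Paths are the ones from this thread; the `logstash` group name is an assumption from the package install, and the existence check just keeps the sketch harmless on other layouts:

```shell
# Copy the ES-generated CA cert where Logstash can read it.
SRC=/etc/elasticsearch/certs/http_ca.crt
DST=/etc/logstash/certs
if [ -f "$SRC" ]; then
  mkdir -p "$DST"
  cp "$SRC" "$DST/"
  chown root:logstash "$DST/http_ca.crt"   # assumes the 'logstash' group
  chmod 640 "$DST/http_ca.crt"             # owner rw, group r
else
  echo "no CA cert at $SRC"
fi
```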

Now I am getting:

[2024-03-27T23:26:33,311][ERROR][logstash.outputs.elasticsearch] Invalid setting for elasticsearch output plugin:
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: output {
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: elasticsearch {
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: # This setting must be a path
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: # ["File does not exist or cannot be opened /etc/logstash/certs/http_ca.crt"]
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: ssl_certificate_authorities => "/etc/logstash/certs/http_ca.crt"
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: ...
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: }
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: }
Mar 27 22:26:33 elk.kaztoll.kz logstash[11438]: [2024-03-27T23:26:33,326][ERROR][logstash.agent ] Failed to execute action

Check the file: is it there, or in /etc/elasticsearch/certs/?

Yes, they are. I copied them from /etc/elasticsearch/certs/ to /etc/logstash/certs/:

-rw-r-----. 1 root root 10029 Mar 27 22:24 http.p12
-rw-r-----. 1 root root 1915 Mar 27 22:24 http_ca.crt
-rw-r-----. 1 root root 5822 Mar 27 22:24 transport.p12

Those are only readable by root, and you should not be running Logstash as root. Try chmod o+r /etc/logstash/certs/http_ca.crt.
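
The underlying point is read access for the service user. A scratch-file sketch of what the mode change does (the mktemp path is arbitrary; in the thread the file is /etc/logstash/certs/http_ca.crt):

```shell
# The files above are root:root mode 640, so the 'logstash' user cannot
# open them. o+r grants read to everyone else, including that user.
f=$(mktemp)
chmod 600 "$f"
ls -l "$f"        # -rw------- : group/other have no access
chmod o+r "$f"
ls -l "$f"        # -rw----r-- : other users can now read it
rm -f "$f"
```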


[netflow][820e969610120e1d735508f223eb6d9d9377dd77799f4dc8100930e799e83a5f] Starting UDP listener {:address=>"0.0.0.0:9995"}

Mar 28 09:25:39 elk.kaztoll.kz logstash[303382]: [2024-03-28T10:25:39,132][ERROR][logstash.inputs.udp ][netflow][820e969610120e1d735508f223eb6d9d9377dd77799f4dc8100930e799e83a5f] UDP listener died {:exception=>#<Errno::EADDRINUSE: Address already in use - bind(2) for "0.0.0.0" port 9995>

Something else is using port 9995. Try changing the port number and see if that allows startup.

Are you running two copies of logstash, one as a service, one on the command line?

Another option would be to change netflow_host: 0.0.0.0 to (for example) netflow_host: 127.34.4.19 (a loopback address that 0.0.0.0 will not cover), which will not get events moving, but may show whether there is an address collision.
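
Before changing ports, it can also help to see what already owns the port. A quick check, assuming iproute2's `ss` is available (add sudo to see the owning process of sockets belonging to other users):

```shell
# List UDP listeners and look for the suspect port (9995 in this thread).
ss -ulnp | grep -w 9995 || echo "nothing bound to UDP 9995"
```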

I tried ports 2055, 2056, and 9995.

When I launch Logstash on the command line, I stop the Logstash service first.

Now I have changed the port and IP address:

- module: netflow
  log:
    enabled: true
    var:
      netflow_host: "127.34.4.19"
      netflow_port: 2055
      netflow_timeout: 300s

but I get the same messages in the logs:

service logstash stop
Redirecting to /bin/systemctl stop logstash.service
[root@elk ~]# sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/netflow.conf 
Using bundled JDK: /usr/share/logstash/jdk
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2024-03-28 10:48:07.095 [main] runner - Starting Logstash {"logstash.version"=>"8.12.2", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-linux]"}
[INFO ] 2024-03-28 10:48:07.100 [main] runner - JVM bootstrap flags: [-XX:+HeapDumpOnOutOfMemoryError, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, -Djruby.regexp.interruptible=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11, -Dlog4j2.isThreadContextMapInheritable=true, -Xms1g, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Djdk.io.File.enableADS=true, -Dfile.encoding=UTF-8, --add-opens=java.base/java.io=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, -Djruby.compile.invokedynamic=true, -Xmx1g, -Djava.security.egd=file:/dev/urandom, -Djava.awt.headless=true, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED]
[INFO ] 2024-03-28 10:48:07.114 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[INFO ] 2024-03-28 10:48:07.114 [main] runner - Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[WARN ] 2024-03-28 10:48:08.135 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2024-03-28 10:48:11.841 [Converge PipelineAction::Create<main>] Reflections - Reflections took 302 ms to scan 1 urls, producing 132 keys and 468 values
[INFO ] 2024-03-28 10:48:11.844 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/bindata-2.4.15/lib/bindata/base.rb:80: warning: previous definition of initialize was here
[INFO ] 2024-03-28 10:48:13.797 [Converge PipelineAction::Create<main>] javapipeline - Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[INFO ] 2024-03-28 10:48:13.804 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.0.125.7:9200"]}
[INFO ] 2024-03-28 10:48:14.654 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@10.0.125.7:9200/]}}
[INFO ] 2024-03-28 10:48:14.825 [[main]-pipeline-manager] elasticsearch - Failed to perform request {:message=>"Connect to 10.0.125.7:9200 [/10.0.125.7] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to 10.0.125.7:9200 [/10.0.125.7] failed: Connection refused>}
[WARN ] 2024-03-28 10:48:14.825 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@10.0.125.7:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://10.0.125.7:9200/][Manticore::SocketException] Connect to 10.0.125.7:9200 [/10.0.125.7] failed: Connection refused"}
[INFO ] 2024-03-28 10:48:14.832 [[main]-pipeline-manager] elasticsearch - Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"netflow-%{+YYYY.MM.dd}"}
[INFO ] 2024-03-28 10:48:14.832 [[main]-pipeline-manager] elasticsearch - Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[INFO ] 2024-03-28 10:48:14.842 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/netflow.conf"], :thread=>"#<Thread:0x34265544 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[INFO ] 2024-03-28 10:48:18.457 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>3.61}
[INFO ] 2024-03-28 10:48:18.460 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2024-03-28 10:48:18.465 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:2055"}
[ERROR] 2024-03-28 10:48:18.489 [[main]<udp] udp - UDP listener died {:exception=>#<Errno::EADDRINUSE: Address already in use - bind(2) for "0.0.0.0" port 2055>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:201:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-udp-3.5.0/lib/logstash/inputs/udp.rb:129:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-udp-3.5.0/lib/logstash/inputs/udp.rb:81:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:414:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:405:in `block in start_input'"]}
[INFO ] 2024-03-28 10:48:18.542 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2024-03-28 10:48:19.832 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-output-elasticsearch-11.22.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:235] elasticsearch - Failed to perform request {:message=>"Connect to 10.0.125.7:9200 [/10.0.125.7] failed: Connection refused", :exception=>Manticore::SocketException, :cause=>#<Java::OrgApacheHttpConn::HttpHostConnectException: Connect to 10.0.125.7:9200 [/10.0.125.7] failed: Connection refused>}

It looks like you are configuring both filebeat and logstash to listen on port 2055. Just change one thing at a time. Change the port that logstash listens on, even if nothing writes to it. If that starts up then you need to think about how to get data from filebeat to logstash.
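
The bind failure itself is easy to reproduce outside Logstash; a self-contained sketch (port 19995 is an arbitrary scratch port, not one from the thread):

```shell
# Demo of EADDRINUSE: only one socket can bind a given UDP host:port.
python3 - <<'EOF'
import errno
import socket

# First bind succeeds (this is the listener Filebeat already started):
first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first.bind(("127.0.0.1", 19995))

# A second bind on the same host:port fails, just like the Logstash UDP input:
second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    second.bind(("127.0.0.1", 19995))
except OSError as e:
    assert e.errno == errno.EADDRINUSE
    print("Address already in use")
EOF
```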


Thank you, Badger.
Now the Logstash logs look like this:

Mar 28 10:43:31 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:31,626][INFO ][logstash.javapipeline    ][netflow] Starting pipeline {:pipeline_id=>"netflow", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/netflow.conf"], :thread=>"#<Thread:0x54fded07 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
Mar 28 10:43:32 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:32,037][INFO ][logstash.javapipeline    ][netflow] Pipeline Java execution initialization time {"seconds"=>0.41}
Mar 28 10:43:32 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:32,041][INFO ][logstash.javapipeline    ][netflow] Pipeline started {"pipeline.id"=>"netflow"}
Mar 28 10:43:32 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:32,047][INFO ][logstash.inputs.udp      ][netflow][7613e419cf891ecc6aba3c536506b5df34b4b816724a57fcb4214032987012b5] Starting UDP listener {:address=>"0.0.0.0:2055"}
Mar 28 10:43:32 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:32,052][INFO ][logstash.inputs.udp      ][netflow][7613e419cf891ecc6aba3c536506b5df34b4b816724a57fcb4214032987012b5] UDP listener started {:address=>"0.0.0.0:2055", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
Mar 28 10:43:32 elk.kaztoll.kz logstash[2537]: [2024-03-28T11:43:32,052][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:netflow], :non_running_pipelines=>[]}

So, my Logstash config is (cat /etc/logstash/conf.d/netflow.conf):

input {
  udp {
    port => 2055
    codec => netflow {
      versions => [5, 9]
    }
    type => netflow
  }
}

filter {
  if [type] == "netflow" {
    # Add any additional filters you need for processing Netflow data
  }
}

output {
  elasticsearch {
    hosts => ["10.0.125.7:9200"]
    user => "elastic"
    password => "mypassword"
    ssl_enabled => true
    ssl_certificate_authorities => "/etc/logstash/certs/http_ca.crt"
    ssl_verification_mode => full
    index => "netflow-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    manage_template => false
  }
}

And my Filebeat config (cat /etc/filebeat/modules.d/netflow.yml):

- module: netflow
  log:
    enabled: true
    var:
      netflow_host: "0.0.0.0"
      netflow_port: 9995
      netflow_timeout: 300s

telnet to 10.0.125.7 9995 and to 10.0.125.7 2055 from the MikroTik doesn't work, even though the firewall ports are open. How can I get data from Filebeat to Logstash?

You cannot use telnet for UDP; use nc, tcpdump, or nmap instead.
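
For example, the round trip can be simulated locally. A sketch where one Python socket stands in for the Filebeat/Logstash UDP input and a second for the MikroTik exporter (port 12055 is arbitrary; against the real host the nc equivalent would be something like `echo test | nc -u 10.0.125.7 2055`, with `tcpdump -i any udp port 2055` on the receiver to watch packets arrive):

```shell
# Self-contained UDP round trip: bind a listener, send it a datagram.
python3 - <<'EOF'
import socket

# Listener standing in for the Logstash/Filebeat UDP input:
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 12055))
recv.settimeout(5)

# Sender standing in for the netflow exporter:
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", 12055))

data, addr = recv.recvfrom(1024)
print("received:", data.decode())  # received: hello
EOF
```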