Logstash cannot reach Elasticsearch / Filebeat security

Hello,
I am encountering some problems while trying to secure Filebeat and Logstash.

Scenario:
I created my own root CA and signed certificates with it. My Kibana web server, for example, uses a certificate signed by this CA, but I also tried signing it with the elastic-stack-ca.p12.

I completed this tutorial: Secure communication with Logstash | Filebeat Reference [7.14] | Elastic

Afterwards, my Logstash service won't start with logstash --setup. If I just start Logstash with systemctl, the port does not open, and I need the port open for Filebeat to work.

Here is the output of logstash --setup:

[INFO ] 2021-09-21 15:24:24.482 [main] runner - Starting Logstash {"logstash.version"=>"7.14.1", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[INFO ] 2021-09-21 15:24:25.551 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-09-21 15:24:25.828 [Converge PipelineAction::Create<main>] Reflections - Reflections took 47 ms to scan 1 urls, producing 120 keys and 417 values 
[WARN ] 2021-09-21 15:24:26.399 [Converge PipelineAction::Create<main>] beats - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2021-09-21 15:24:26.447 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2021-09-21 15:24:26.487 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://IP:9200"]}
[INFO ] 2021-09-21 15:24:26.672 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://IP:9200/]}}
[WARN ] 2021-09-21 15:24:26.826 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://IP:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://IP:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
[INFO ] 2021-09-21 15:24:26.879 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/sample.conf"], :thread=>"#<Thread:0x3318e003 run>"}
[INFO ] 2021-09-21 15:24:27.396 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.52}
[INFO ] 2021-09-21 15:24:27.422 [[main]-pipeline-manager] beats - Starting input listener {:address=>"IP:5044"}
[INFO ] 2021-09-21 15:24:27.635 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2021-09-21 15:24:27.733 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2021-09-21 15:24:27.747 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[WARN ] 2021-09-21 15:24:31.869 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://IP:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://IP:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}

logstash.yml is empty (no port or host defined), but here is the logstash.conf:

input {
  beats {
    host => "IP"
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["<path to ca1>","<path to ca2>","<path to ca3>"]
    ssl_certificate => "logstash.crt"
    ssl_key => "logstash.key"
    ssl_verify_mode => "peer"
  }
}



output {
  elasticsearch {
    hosts => ["https://IP:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "kibana"
    #password => "pass"
  }
}

Please let me know if you need more information.

For all your file paths, have you tried using the /full/path/to/the/file.crt?

The error you are getting is "unable to find valid certification path to requested target", which could indicate it's looking for files in the wrong directory.
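To rule out the chain itself, you could reproduce the check that PKIX performs with openssl. The sketch below uses a throwaway CA purely for illustration; in your case you would substitute your real rootCA.crt and the certificate your Elasticsearch node actually serves:

```shell
# Illustration with a throwaway CA -- substitute your real rootCA.crt
# and the certificate Elasticsearch serves in your setup.
cd "$(mktemp -d)"

# 1. Create a root CA (stand-in for your own root CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout rootCA.key -out rootCA.crt -subj "/CN=TestRootCA"

# 2. Create a server key + CSR and sign the CSR with that CA
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=elasticsearch"
openssl x509 -req -days 1 -in server.csr \
  -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out server.crt

# 3. Verify the chain -- essentially what "PKIX path building" does
openssl verify -CAfile rootCA.crt server.crt   # prints "server.crt: OK"
```

If the same verify against your real CA file and server certificate fails, the JVM will fail for the same reason, no matter which paths you configure.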


Yes, I actually have the full paths written in the .conf; I just redacted them for this post. I also ran:

ls <path>

to make sure I do not have any typos.
Do I just have to add the path to the .conf, or could it be that Filebeat or another service check fails at this point?

I should probably note that these are self-signed certificates.

@smam Have you tested it without the certificates first? If that works, you can proceed with troubleshooting the certificate path issue: validating the directory, permissions, etc.

Yes, but 1) it shows the same error, and 2) after finishing the HTTPS setup it was noted that you cannot communicate with Elasticsearch over HTTP anymore. So I think it cannot work, or am I wrong?

I am confused that Logstash prints the same error after I comment out the certificate paths. Does it expect paths in logstash.yml? That file is almost empty, as mentioned above.

(It only contains path.data and path.logs.)

I have created one certificate per application (Elasticsearch/Kibana/Logstash).
Is this necessary, or could I just use the same certificate for all of these services?

Example input for Logstash using SSL:

input {
    beats {
        port => "5044"
        ssl  => true
        ssl_certificate => "/etc/ssl/certs/certificate.crt"
        ssl_key => "/etc/ssl/certs/server.key"
        ssl_certificate_authorities => "/etc/ssl/certs/CA.pem"
    }
}

The same certificates should be referenced under the Logstash output section of whatever Beat you are using to send data to Logstash:

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.7:5044","192.168.0.8:5044"]
  loadbalance: true

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/ssl/certs/CA.pem"]

  # Certificate for SSL client authentication
  ssl.certificate: "/etc/ssl/certs/certificate.crt"

  # Client Certificate Key
  ssl.key: "/etc/ssl/certs/server.key"

You don't need separate certificates for each application. All of mine share the same one.
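One more thing: the PKIX error in your log comes from the Logstash → Elasticsearch leg, not from the Beats input. The elasticsearch output plugin has a cacert option that points it at your root CA so the JVM can build the trust path. A sketch, with placeholder paths and credentials:

```
output {
  elasticsearch {
    hosts => ["https://IP:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    cacert => "/etc/ssl/certs/rootCA.crt"   # your root CA file (placeholder path)
    user => "elastic"                        # placeholder credentials
    password => "pass"
  }
}
```

Without a cacert (or a CA imported into the JVM truststore), a self-signed chain cannot be validated and you get exactly the "unable to find valid certification path" error.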

OK, if I don't put an output section into the .conf file I don't get errors, BUT the startup takes forever, which should not be right. Should it?

If I add an output, I come back to the same error.
I tested every certificate path and verified all certificates. Is there a smart way to make the output work?

Elasticsearch runs on port 9200 and has two certificates in its .yml: one for inter-node communication and one for browser/client verification. Kibana has one certificate.

Logstash has one certificate, which is used in both filebeat.yml and logstash.yml.

I read that the error occurs for self-signed certificates, meaning I can only access it if I add my root CA to the CA bundle.

Is Kibana able to access Elasticsearch using the provided certificates? (Kibana -> Elasticsearch)

Yes, at least I do not get errors. The logs are fine.