The error occurs when Logstash attempts to start the monitoring webserver, but unfortunately the message doesn't indicate which interface and/or IP/port it is trying to bind to, only "name or service not known".
In your settings file, are http.port or http.host set, and if so, what are their values?
I first set the http.host field to the value for the Elasticsearch node shown for the cluster I created in Elastic Cloud. I've replaced part of the whole address with hash2 and ipaddress; the rest is ip.es.io, which comes right before the port number.
But I am starting Logstash from my local machine, so I changed the value of http.host to localhost. I think that makes more sense. Am I right?
BUT I get another problem:
In Logstash, the http.host setting determines where the Logstash monitoring API is bound; it defaults to 127.0.0.1 (the local loopback interface, intentionally not exposed to the network for security reasons, since the endpoints neither authenticate the calling user nor perform authorization checks), and you shouldn't need to change it.
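For reference, a minimal sketch of what these settings look like in logstash.yml (the values below are just the defaults made explicit; normally you can leave both unset):

    # logstash.yml -- monitoring API bind settings
    http.host: "127.0.0.1"   # loopback only; not reachable from other machines
    http.port: 9600          # pins the port; by default Logstash tries 9600-9700 and binds the first free one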
Does your local Logstash have a route to your Elastic Cloud Elasticsearch, or are you attempting to use an address that isn't exposed to the wider network? Can you use the curl command-line utility to connect to the healthcheck endpoint?
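For example, something along these lines (the hostname is a placeholder for your actual Cloud endpoint; Elastic Cloud/ECE clusters typically listen on HTTPS port 9243):

    # hypothetical endpoint and user -- substitute your own
    curl -v https://<cluster-id>.ip.es.io:9243/ -u elastic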
The error indicates that it could not build a certificate chain to verify the certificate being returned by Elasticsearch; by default, Logstash will refuse to connect to an Elasticsearch that it cannot validate.
Do you have the appropriate certificates available on your Logstash host and referenced in your pipeline configuration for the Elasticsearch Output Plugin's cacert directive?
You can temporarily disable validation of the certificate chain by specifying ssl_certificate_verification => false (WARNING: connecting to hosts without verifying their identity is a security risk).
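To illustrate, a sketch of how this is usually wired up in the pipeline's elasticsearch output (host, credentials, and certificate path are placeholders, not values from your setup):

    output {
      elasticsearch {
        hosts    => ["https://<cluster-id>.ip.es.io:9243"]  # hypothetical endpoint
        user     => "elastic"
        password => "<password>"
        ssl      => true
        cacert   => "/path/to/ca.crt"                       # CA that issued the cluster's proxy certificate
        # ssl_certificate_verification => false             # temporary workaround only; security risk
      }
    }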
I've used the certificates we got for Elastic Cloud Enterprise. We got two .crt certificates: cluidui.crt and proxy.crt. I've tried both in the cacert field and neither of them worked.
I checked again how to connect my Logstash to ECE and I found the following page: https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-id.html#_before_you_begin_9
which says:
To use the Cloud ID, you need:
Beats or Logstash version 6.0 or later, installed locally wherever you want to send data from.
An Elasticsearch cluster on version 5.x or later to send data to.
...
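For reference, with Logstash 6.x the Cloud ID is normally supplied through logstash.yml (as far as I can tell it feeds the X-Pack monitoring and management connections, while the elasticsearch output in a pipeline still takes explicit hosts). Placeholder values only:

    # logstash.yml -- hypothetical values copied from the Cloud/ECE UI
    cloud.id: "<cluster-label>:<base64-encoded-endpoints>"
    cloud.auth: "elastic:<password>"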
I was using the Logstash 5.6.8 image and Elasticsearch 5.x. I then changed Logstash from 5.6.8 to 6.1.4 and Elasticsearch to 6.1.3 (within the ECE cluster).
And I get the following error now: