How to create a logstash instance in Elastic Cloud

I am a bit lost on how to create a Logstash instance in Elastic Cloud (EC). I've changed the following fields within the logstash.yml file:

cloud.id: <clustername>:<hash1>
cloud.auth: <elastic>:<password>
http.host: "<hash2.ipaddress.ip.es.io>"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["https://hash2.ipaddress.ip.es.io:port"]
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: password
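As an aside, the part of the Cloud ID after the colon is just base64: it decodes to `host$elasticsearch-uuid$kibana-uuid`, which is how Beats and Logstash derive the Elasticsearch endpoint from it. A quick sketch with a fabricated Cloud ID (every name and the port below are made up for illustration):

```shell
# Fabricated Cloud ID of the form "<deployment-name>:<base64 payload>"
PAYLOAD='eu-west-1.aws.example.io$abc123$def456'
CLOUD_ID="mycluster:$(printf '%s' "$PAYLOAD" | base64)"

# Decode the part after the colon: host$es-uuid$kibana-uuid
DECODED=$(printf '%s' "${CLOUD_ID#*:}" | base64 -d)
HOST=${DECODED%%\$*}
REST=${DECODED#*\$}
ES_UUID=${REST%%\$*}

# The Elasticsearch endpoint is then <es-uuid>.<host> (port 9243 is the ECE default)
echo "https://${ES_UUID}.${HOST}:9243"
```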

and I've typed the following into the elasticsearch output plugin of the Logstash config file:

elasticsearch {
   codec => json
   manage_template => false
   hosts => [ "https://hash2.ipaddress.ip.es.io:port" ]
   user => "elastic"
   password => "password"
   index => "%{[@metadata][endpoint]}"
   document_type => "ObjectEvent"
   document_id => "%{epc}"	  
}

But I've gotten the following error:

Can anyone help me with the logstash configuration in the EC?

The error occurs when attempting to start the monitoring webserver, but unfortunately the error message doesn't indicate which interface and/or IP/port it is attempting to bind to, only the message "name or service not known".

  • In your settings file, are http.port or http.host set, and if so, what are their values?

I first set the http.host field to the value for the Elasticsearch node shown inside my created cluster in Elastic Cloud. I've replaced part of the whole address with hash2 and ipaddress; the rest is ip.es.io, which comes before the port number.
But I am starting Logstash from my local machine, so I've since changed the value of http.host to localhost. I think that makes more sense. Am I right?
BUT I get another problem:

I am thinking that maybe the https is the issue here... do you have any idea to solve the problem?

In Logstash, the http.host variable determines where the Logstash monitoring API is mounted; it is 127.0.0.1 by default (the local loopback interface, intentionally not exposed to the network for security reasons: the endpoints do not authenticate the calling user or perform authorization validation), and you shouldn't need to change it.
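So for a local Logstash shipping to Elastic Cloud, the settings file can leave the API binding alone. Something like the following (a sketch only; the cluster hash and port are placeholders carried over from above):

    # logstash.yml (sketch; hash2, ipaddress and port are placeholders)
    # http.host is left at its default of 127.0.0.1 -- no need to set it
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.url: ["https://hash2.ipaddress.ip.es.io:port"]
    xpack.monitoring.elasticsearch.username: elastic
    xpack.monitoring.elasticsearch.password: password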

Does your local Logstash have a route to your Elastic Cloud Elasticsearch, or are you attempting to use an address that isn't exposed to the wider network? Can you use the curl command-line utility to connect to the healthcheck endpoint?

curl --user cardoso_fep_user "https://ADDRESS:PORT/_cat/health?v"

I am trying to connect my local Logstash with the Elastic Cloud Elasticsearch.

I've logged in with the user cardoso_fep_user and typed the following line into the browser:

https://e95444bf74974e44b8c8011d48c96f25.rb-elasticsearch.de.bosch.com:9243/

and it connects:

{
  "name" : "instance-0000000001",
  "cluster_name" : "<clustername>",
  "cluster_uuid" : "<clusteruuid>",
  "version" : {
    "number" : "6.1.3",
    "build_hash" : "<...>",
    "build_date" : "2018-01-26T18:22:55.523Z",
    "build_snapshot" : false,
    "lucene_version" : "7.1.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

But my local Logstash does not connect to it...

I am getting this now:

The error indicates that it could not build a certificate chain to verify the certificate being returned by Elasticsearch; by default, Logstash will refuse to connect to an Elasticsearch that it cannot validate.

Do you have the appropriate certificates available on your Logstash host and referenced in your pipeline configuration for the Elasticsearch Output Plugin's cacert directive?

You can temporarily disable validation of the certificate chain by specifying ssl_certificate_verification => false (WARNING: connecting to hosts without verifying their identity is a security risk).
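For example (a sketch only; the certificate path is a placeholder, and the commented-out verification override should only ever be used while debugging):

    elasticsearch {
      hosts    => [ "https://hash2.ipaddress.ip.es.io:port" ]
      user     => "elastic"
      password => "password"
      ssl      => true
      cacert   => "/usr/share/logstash/config/certs/ca.crt"   # placeholder path
      # ssl_certificate_verification => false                 # debugging only!
    }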

I've used the certificates we got for Elastic Cloud Enterprise. We got two crt certificates: cluidui.crt and proxy.crt. I've tried both in the cacert field and neither of them worked.
I checked again how to connect my Logstash with ECE and I found the following page:
https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-id.html#_before_you_begin_9
which says:

To use the Cloud ID, you need:

  • Beats or Logstash version 6.0 or later, installed locally wherever you want to send data from.
  • An Elasticsearch cluster on version 5.x or later to send data to.
  • ...

I was using the Logstash 5.6.8 image and Elasticsearch 5.x. I then upgraded Logstash from 5.6.8 to 6.1.4 and Elasticsearch to 6.1.3 (within the ECE cluster).
And I get the following error now:

I don't know if I am using the wrong certificate. Do you know which one I should use...

How do I know which certificate should be set to the cacert field?
Is this from ECE or from Logstash host?

My Logstash host has no certificate.
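If you don't have the CA certificate that signed the cluster proxy's TLS certificate, one way to see what the server actually presents is openssl's s_client (a sketch; ADDRESS:PORT stands for your cluster endpoint). The chain it prints should tell you which CA certificate cacert needs to point at:

```shell
# Show the certificate chain the Elasticsearch proxy presents (placeholder endpoint)
openssl s_client -connect ADDRESS:PORT -servername ADDRESS -showcerts </dev/null
```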

Am I mounting and starting a new logstash container wrong?

docker run -d \
  -v /logstash/pipeline_cloudui/fep/:/usr/share/logstash/pipeline/ \
  -v /logstash/config_cloudui/:/usr/share/logstash/config/ \
  -v /logstash/config_cloudui/certs/:/usr/share/logstash/config/certs/ \
  --name cardoso_cloudui \
  -p 9600:9600 \
  --net host \
  rb-dtr.de.bosch.com/elastic.co/logstash:6.1.4

(Note that -p 9600:9600 has no effect when --net host is used; with host networking the container shares the host's network stack and port mappings are ignored.)

Can anyone help me please? I really don't know what I am doing wrong...

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.