Problems connecting an EC2 instance running Logstash to Elastic Cloud

Hello there,

In need of some help; I have recently come around to the realm of AWS and Elastic Cloud setups.
I have recently set up and configured a Logstash agent on an EC2 instance. I would like its output to go to the Elastic Cloud deployment, so the data is stored in the Elasticsearch database and can therefore be seen in the Kibana console.

I can curl the Elastic Cloud endpoint from the EC2 instance within AWS, but I cannot output to it using Logstash on that same instance.

The Elasticsearch deployment in Elastic Cloud does come with X-Pack on deployment, as the security features, centralized pipeline management, etc. are visible under the Management area within Kibana.

For the config of the Logstash agent built on the EC2 instance, I have tried both the hostname, user, and password settings, and have also attempted the config using cloud.id and cloud.auth.
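For reference, the two styles attempted look roughly like the below (the endpoint, label, and credentials are placeholders; the `cloud_id`/`cloud_auth` options assume a recent logstash-output-elasticsearch plugin). Note that the dotted forms `cloud.id`/`cloud.auth` live in logstash.yml and configure X-Pack monitoring/management, while the underscore forms are options on the output plugin itself:

```conf
# Style 1: explicit endpoint, user, and password on the elasticsearch output
output {
  elasticsearch {
    hosts    => ["https://<deployment>.eu-west-2.aws.cloud.es.io:9243"]
    user     => "<username>"
    password => "<password>"
  }
}

# Style 2: cloud_id / cloud_auth on the same output
output {
  elasticsearch {
    cloud_id   => "<label>:<base64-payload>"
    cloud_auth => "<username>:<password>"
  }
}
```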

As I can curl from the CLI on the Logstash host to Elastic Cloud and get the Elasticsearch response back, would this probably not be a port allocation issue? Or could it become one when going via Logstash? When looking around, I have seen this comment being made:

if it is using cloud.id for monitoring then that's the problem because it gives the wrong URL in ECE (it's designed for ESS) as you can see by base64 decoding it .. it uses :443 as the port not :9243 (which is the ECE default). So To make applications play nice with cloud.id in ECE you have to use (eg) iptables to map 443 to 9243
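For illustration, the port baked into a cloud.id can be inspected by base64-decoding the part after the label. A sketch with a made-up deployment (the host, port, and UUIDs here are invented):

```shell
# Build a made-up cloud.id: "<label>:<base64 of host$es-uuid$kibana-uuid>"
CLOUD_ID="my-deployment:$(printf 'eu-west-2.aws.cloud.es.io:9243$abc123$def456' | base64)"

# Decode everything after the first colon to reveal the host and port
echo "${CLOUD_ID#*:}" | base64 -d
# -> eu-west-2.aws.cloud.es.io:9243$abc123$def456
```

If no `:port` appears in the decoded host, clients fall back to a default (443), which is the mismatch the quoted comment describes for ECE.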

Within logstash.yml, for the X-Pack settings, will the needed certs have to be added when/if ssl is set to true?

When hitting the error:

LogStash::LicenseChecker::LicenseError: Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach live Elasticsearch cluster

Does this mean that X-Pack needs to be set up manually on Elastic Cloud? Or would this be on the Logstash side? If on Logstash, how would you obtain the certs from the Elastic Cloud Elasticsearch and configure this?

Or, in turn, would the need be to create a Logstash instance within the Elastic Cloud environment, and then have the EC2 instance's Logstash output to that Elastic Cloud Logstash, which would then push it to Elasticsearch internally?

Thank you!

As you point out, the port in the cloud.id defaults to 443. It can be changed - see this thread: "ECE + AWS load balancer: port configuration? - #2 by Alex_Piggott"

LogStash::LicenseChecker::LicenseError: Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach live Elasticsearch cluster

This is almost always nothing to do with X-Pack or licenses! It just means that whatever comms configuration you have is failing to reach a live Elasticsearch service.

If it's still failing once you've fixed the port and switched back to cloud.id, then post snippets of your Logstash config and we'll take a look.

Alex

Hey @Alex_Piggott,
Thank you very much for your help in this.

Having looked at and tried the referenced post regarding the API call, I am still unable to connect via Logstash.

When running the referenced API call, the below error is displayed:

{"error":"no handler found for uri [/api/v1/platform/configuration/store/https_port] and method [POST]"}

Does this mean that this needs to be created? If so, how? Or am I just having a slow moment?

On the separate EC2 instance, the security groups included in the build allow ports such as 9243 and 443.

The Logstash output conf on the remote EC2 instance is set up like the below (I have tried this with both ports, 9243 and 443, to test and settle my own mind).

output {
  elasticsearch {
    hosts => "https://XXXXXXXXXXXXXXXXXXX.eu-west-2.aws.cloud.es.io:9243"
    user => "username is here"
    password => "the password goes here"
    index => "my_index"
  }
}

I am still able to run curl commands from the EC2 instance to read index listings, see policies, etc.

I am probably missing something mundane right? :upside_down_face:

Thank you!

Oh I'm sorry, I misunderstood .... I thought you were trying to connect to a cluster hosted in ECE (which is our virtual cloud infrastructure that customers download and use to build their own clouds) ... but actually you're trying to connect to ESS (our hosted cloud)

(This forum is technically for ECE-not-ESS, though obviously there's often a lot of overlap since they are powered by the same software)

So ignore the 443/9243 stuff - ESS is open on both ports, and cloud.id is correct for ESS

OK so coming back to what's going on ... you're saying that if XXX is your cluster id, then

curl -u "$USER:$PASS" 'https://XXX.eu-west-2.aws.cloud.es.io:9243'

works, but (from the same box)

output {
  elasticsearch {
    hosts => "https://XXX.eu-west-2.aws.cloud.es.io:9243"
    user => $USER
    password => $PASS
    index => "my_index"
  }
}

fails with LogStash::LicenseChecker::LicenseError: Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach live Elasticsearch cluster?

Is this the problem? Enabling X-Pack in Logstash 7 gives me this error

Do you have a field called xpack.monitoring.elasticsearch.hosts enabled anywhere? (or something like that?)

Hey @Alex_Piggott,
Yeah, I noticed when I posted this that I had put the ECE tag on it, and now I can't edit or change it without deleting the post :see_no_evil:

Yep, this is what I have done so far, but I keep coming back to error logs like the below when the referenced settings have been added to logstash.yml.

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: username is here
xpack.monitoring.elasticsearch.password: password is here
xpack.monitoring.elasticsearch.hosts: ["https://XXXXXX.eu-west-2.aws.cloud.es.io:9243"]

With the output conf also being the same as before.

The below messages are seen when I look through the Logstash journal/log.

[INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://username is here:xxxxxx@XXXXXX.eu-west-2.aws.cloud.es.io:9243/]}}
[DEBUG][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://username is here:xxxxxx@XXXXXX.eu-west-2.aws.cloud.es.io:9243/, :path=>"/"}
[WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://username is here:xxxxxx@XXXXXX.eu-west-2.aws.cloud.es.io:9243/"}
[INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["https://XXXXXX.eu-west-2.aws.cloud.es.io:9243"]}

[ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

Then the below message pops up. I am unsure where it is even coming from, as I believe it is created/generated via the loading of the Elasticsearch output plugin?

----------------------P[input-metrics{"collection_interval"=>10, "collection_timeout_interval"=>600, "extended_performance_collection"=>"true", "config_collection"=>"true"}|[x-pack-metrics]internal_pipeline_source:6:3:```
metrics {
collection_interval => 10
collection_timeout_interval => 600
extended_performance_collection => true
config_collection => true
}
] -> __QUEUE__ __QUEUE__ -> P[output-elasticsearch{"hosts"=>["https://XXX.eu-west-2.aws.cloud.es.io:9243"], "bulk_path"=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", "manage_template"=>"false", "document_type"=>"%{[@metadata][document_type]}", "index"=>"", "sniffing"=>"false", "user"=>"username is here", "password"=>"password is here"}|[x-pack-metrics]internal_pipeline_source:14:3:
elasticsearch {
hosts => ["https://XXX.eu-west-2.aws.cloud.es.io:9243"]
bulk_path => "/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s"
manage_template => false
document_type => "%{[@metadata][document_type]}"
index => ""
sniffing => false
user => "username is here"
password => "password is here"
# In the case where the user does not want SSL we don't set ssl => false
# the reason being that the user can still turn ssl on by using https in their URL
# This causes the ES output to throw an error due to conflicting messages
}
``]

Would there be a setting needed, added, or changed within the elasticsearch.yml of the Elastic Cloud Elasticsearch build, which would then need to be redeployed?

And to check, if you set xpack.monitoring.enabled: false does it then all work?
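A minimal logstash.yml fragment for that test might be (assuming the other monitoring settings are left commented out):

```yaml
# Temporarily disable X-Pack monitoring while testing the main output
xpack.monitoring.enabled: false
```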

(I don't believe there is anything that needs to be done to ES config to get xpack monitoring to work, and there is no elastic cloud logstash ... you should be hitting the cluster directly so I'm surprised it doesn't work, unless there's a tricky mistake hidden in the config somewhere)

Hey @Alex_Piggott,

I believe it is working now and am able to see events coming in via Kibana!

In logstash.yml I had commented out all the references to X-Pack (i.e. monitoring and management).

The things that have changed within the Logstash output .conf file are below:

output {
  elasticsearch {
    hosts => ["https://XXXXXXX.eu-west-2.aws.cloud.es.io:9243"]
    user => "user goes here"
    password => "password goes here"
    action => "create"
    index => "testing-output"
    document_type => "tesinglogs"
  }
}

With this it seems to work, and I am now able to see data flood into Elastic Cloud.

I am going to keep testing this, including the monitoring and management X-Pack features in logstash.yml, at a later date. I will also see what happens when using cloud_id and cloud_auth in the Logstash conf.

Thank you very much for your help @Alex_Piggott!!

