Logstash Monitoring and AWS PrivateLink (Elastic Cloud)

I have an Elastic Cloud deployment running on AWS (v7.13.x). I also have Logstash (v7.13.4) running in an AWS account in the same region as the Elastic Cloud deployment.

To get Logstash to send data to Elastic Cloud privately, I have set up AWS PrivateLink. I know this link works because I can curl it from the AWS VPC where Logstash runs and get the / endpoint response back (you know, for search!). I can also run a dummy pipeline from Logstash to Elasticsearch and the data arrives in Elasticsearch.

My problem is that I cannot get Logstash to show up on the Kibana Stack Monitoring page.
I can at least confirm the following:

  • xpack.monitoring.collection.enabled is set on Elasticsearch as a persistent cluster setting, and the Stack Monitoring page has now activated and reports Elasticsearch, Kibana and APM+Fleet
  • No matter what I do, Logstash seems to never appear on the monitoring page
  • Logstash can write to Elasticsearch data indices, which I have tested with a dummy pipeline
  • There are no monitoring indices for Logstash in my Elasticsearch instance
    • I can see .monitoring-beats-7-<date>, .monitoring-es-7-<date> and .monitoring-kibana-7-<date> indices, but no Logstash equivalent
    • I presumed this was due to a lack of permissions, but I have since given the user Logstash runs as the superuser role, and the indices still do not appear (the settings I am trying to apply are sketched after this list)
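
For context, here is a minimal sketch of the legacy monitoring settings involved, shown as they would appear in logstash.yml (the endpoint, port, username and password are placeholders; the exact values are not the issue here):

    # logstash.yml - legacy (internal) monitoring collection, Logstash 7.x
    xpack.monitoring.enabled: true
    # Elasticsearch reached via the AWS PrivateLink endpoint
    xpack.monitoring.elasticsearch.hosts: ["https://<privatelink-endpoint>:<port>"]
    xpack.monitoring.elasticsearch.username: "<monitoring-user>"
    xpack.monitoring.elasticsearch.password: "<password>"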

I am stumped. Can anyone please advise? I know that the AWS PrivateLink connection to Elastic Cloud can only be initiated in one direction over TCP, and I was wondering whether the monitoring connection actually runs Elasticsearch->Logstash and is therefore being blocked?

I do not see anything obvious in my logs (which I can post, but I am not sure what I am looking for). Can anyone advise?

Posting some logs might prove useful 🙂

So, I have (accidentally) just found the issue. Ultimately it's user error on my part, but I'll write up what happened anyway for future readers:

Essentially, I was setting xpack.monitoring.elasticsearch.hosts (and the related properties, along with all my other logstash.yml properties) via environment variables named literally after the properties. Since I run Logstash in Docker, this seemed fine: the docs say that under Docker you can set logstash.yml properties via environment variables. I took this to mean I should set environment variables whose names literally match the Logstash property names (i.e. an environment variable named xpack.monitoring.elasticsearch.hosts), but that is not the case.
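
For illustration, what I had looked roughly like this (a docker-compose sketch; the endpoint and port are placeholders), with the dotted property names used directly as environment variable names:

    # docker-compose.yml - what I had (does NOT work)
    services:
      logstash:
        image: docker.elastic.co/logstash/logstash:7.13.4
        environment:
          xpack.monitoring.enabled: "true"
          xpack.monitoring.elasticsearch.hosts: "https://<privatelink-endpoint>:<port>"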

The Logstash Docker configuration page states:

Under Docker, Logstash settings can be configured via environment variables. When the container starts, a helper process checks the environment for variables that can be mapped to Logstash settings. Settings that are found in the environment are merged into logstash.yml as the container starts up.

For compatibility with container orchestration systems, these environment variables are written in all capitals, with underscores as word separators

Just as it says: a property like pipeline.workers should be set in the environment as PIPELINE_WORKERS. Somehow I had misread this, and conflated it with different text on another page. The corrected setup is sketched below.
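
In other words, the corrected environment looks roughly like this (again a docker-compose sketch; endpoint, port and credentials are placeholders):

    # docker-compose.yml - corrected: upper-case names with underscores
    services:
      logstash:
        image: docker.elastic.co/logstash/logstash:7.13.4
        environment:
          PIPELINE_WORKERS: "2"
          XPACK_MONITORING_ENABLED: "true"
          XPACK_MONITORING_ELASTICSEARCH_HOSTS: "https://<privatelink-endpoint>:<port>"
          XPACK_MONITORING_ELASTICSEARCH_USERNAME: "<monitoring-user>"
          XPACK_MONITORING_ELASTICSEARCH_PASSWORD: "<password>"

With the names in this form, the helper process described in the quoted docs merges them into logstash.yml when the container starts.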


Thanks for sharing your solution!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.