Endpoint Security without using Fleet

Hi,

Is it possible to use the Elastic Endpoint Security integration without utilising Fleet?

We have a use case to protect endpoints with Endpoint Security in an environment that has restricted access to the Elastic Stack.
Ideally, we'd deploy a Logstash server that collects all logs within the environment and ships these via middleware to Elasticsearch.
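For context, the sort of Logstash pipeline we have in mind would look something like this (hostnames and ports are placeholders, not our actual config):

```
input {
  beats {
    port => 5044      # Elastic Agent / Beats traffic from the endpoints
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch.internal:9200"]
    data_stream => true   # write into the same data streams the integrations use
  }
}
```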

I was hoping to use the Logstash output feature within the Elastic Agent configuration using the standalone agent option.
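Roughly, the relevant snippet of the standalone elastic-agent.yml I was attempting looks like this (the hostname is a placeholder):

```
outputs:
  default:
    type: logstash
    hosts: ["logstash.internal:5044"]
```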

When I start the elastic-agent, I am able to see filebeat and metricbeat logs being ingested into the data streams, but nothing for the Elastic Endpoint.

Alternatively - if the above is not possible - can you set up Fleet on the endpoint and then reconfigure it to use a Logstash output (as per the above link)? This would avoid us having to expose Elasticsearch (tcp/9200) to all agents and only require the Fleet port (tcp/8220) to be exposed.

Any assistance would be appreciated.

Cheers,
Kev

Hi, that's a great question. I don't know the answer off-hand. In fact I thought that Fleet was meant to be the only option going forward.

However, to give you a quick hint: your observation is correct, as Elastic Endpoint doesn't send data through Elastic Agent. Elastic Endpoint runs as a standalone service talking directly to Elasticsearch.

Oh, I'm actually learning with you :slightly_smiling_face: The Logstash output is a feature unique to Elastic Agent. It's not present in Elastic Endpoint, and there's no quick toggle to make Endpoint send data to Logstash.

Cheers,

Endpoint Security must be run using Fleet. You can create a new policy in Fleet that only has the Endpoint Security integration; that way you do not get all the logs and metrics from the System integration, if that is your preferred setup.

As for communication with Logstash: at the moment we only support Elasticsearch, and Endpoint Security needs to be able to communicate directly with Elasticsearch. With full TLS enabled and Fleet, each Endpoint Security instance gets a unique API key that is scoped to only the permissions it needs in Elasticsearch to write its documents for that Agent ID. This makes it very secure to expose Elasticsearch directly to the agents.
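To give a rough idea of what that scoping looks like, a hand-rolled API key restricted to writing endpoint documents might resemble the following (the index patterns and privileges here are illustrative; Fleet generates the real role descriptors itself):

```
POST /_security/api_key
{
  "name": "endpoint-security-example",
  "role_descriptors": {
    "endpoint-writer": {
      "index": [
        {
          "names": ["logs-endpoint.*", "metrics-endpoint.*"],
          "privileges": ["auto_configure", "create_doc"]
        }
      ]
    }
  }
}
```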

@blaker Thanks for confirming.

Is there a way to configure the elastic-agent (and/or the Elastic Endpoint service) with some proxy configuration?

Some agents have one-to-one NATs, and I'd much rather proxy all requests to Elasticsearch (and Fleet) via a single IP address, allowing us to restrict access and avoid opening Elasticsearch to multiple IPs.

I did come across this issue, but I'm not sure if it's fully implemented as I can't find much documentation on it.

Any assistance would be appreciated.

Cheers,
Kev

@KevSex All communication between Elastic Agent and Fleet Server, and the data connections between Elastic Agent (and Endpoint Security) and Elasticsearch, go over HTTP. So you could set up an HTTP reverse proxy and point Elastic Agent at the proxy instead of directly at Fleet Server and Elasticsearch.
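For example, a minimal nginx sketch using the stream module to pass the TLS connections through untouched (hostnames are placeholders; an HTTP-level proxy would also work if you terminate TLS at the proxy with its own certificates):

```
stream {
  # Fleet Server check-ins from the agents
  server {
    listen 8220;
    proxy_pass fleet-server.internal:8220;
  }
  # Data connections from Elastic Agent / Endpoint Security
  server {
    listen 9200;
    proxy_pass elasticsearch.internal:9200;
  }
}
```

Passing the TCP stream through means the agents still see Fleet Server's and Elasticsearch's own certificates, so TLS verification on the agent side is unaffected.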

Hey Blake,

Thanks for your advice. I've tested this further using some reverse proxy config with nginx, but I'm wondering if this will still work as intended.
If I point the Elastic Agent at the proxy instead of directly at Fleet Server and Elasticsearch, this is fine from an enrollment point of view and I can see the agent show up under the Fleet > Agents tab.

However, as the 'Fleet settings' (Fleet Server hosts and Elasticsearch hosts) are sent to the device to configure Elastic Endpoint, they then overwrite the proxy configuration, and the agent attempts to communicate directly with Elasticsearch/Fleet. This causes the agents to have a status of Unhealthy and leaves them unable to send their logs.

Is there a way to work around this, or can this in fact not be achieved using a reverse proxy?

Thanks,
Kev

It is possible to use a reverse proxy, as long as the Elastic Agent itself can also communicate through it.

@blaker

How would one achieve this?

My current setup is as follows:

  • Self-hosted
  • 3-node Elasticsearch cluster (7.15.0)
  • Kibana (7.15.0)

Fleet server URL: https://kibana-dev.mydomain.com:8220
Elasticsearch URL: https://kibana-dev.mydomain.com:9220

A separate hosted environment consisting of two servers:

  • Server-A (Static NAT with ACLs permitting tcp/9200 and tcp/8220 to above URLs)
  • Server-B (Dynamic PAT with no access to above URLs)

A Demo policy has been configured with integration for:

  • Elastic Endpoint Security
  • System

Server-A has been enrolled successfully to the above policy, and I can see this under both Fleet > Agents and Security > Endpoints. The agent statuses show as 'Healthy'.
Server-A also has reverse proxy configuration in place for the above-mentioned ports, and I am able to hit these from Server-B.

When I attempt to enroll Server-B using the install command below, the enrollment is successful:

sudo ./elastic-agent install -f --url=https://Server-A-internal-IP:8220 --enrollment-token=xxxxxxx

2021-09-24T07:28:53.704+0100    INFO    cmd/enroll_cmd.go:414   Starting enrollment to URL: https://192.168.15.1:8220/
2021-09-24T07:28:55.227+0100    INFO    cmd/enroll_cmd.go:252   Successfully triggered restart on running Elastic Agent.
Successfully enrolled the Elastic Agent.
Elastic Agent has been successfully installed.

After a couple of minutes, the status of Server-B changes from Healthy to Unhealthy, and looking at the logs, I can see it attempting to connect to the non-proxy URL, which it cannot access (hence the reverse proxy config).

{
  "log.level": "error",
  "@timestamp": "2021-09-24T06:31:42.029Z",
  "log.origin": {
    "file.name": "fleet/fleet_gateway.go",
    "file.line": 180
  },
  "message": "failed to dispatch actions, error: fail to communicate with updated API client hosts: Get \"https://kibana-dev.mydomain.com:8220/api/status?\": context deadline exceeded",
  "ecs.version": "1.6.0"
}
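In case it helps anyone debugging the same thing, here's a small hypothetical helper to pull the hosts an Unhealthy agent is actually dialing out of its JSON logs (written against the log line above):

```python
import json
import re

# Sample error line from the agent's JSON log (abridged from the post above)
line = (
    '{"log.level":"error","@timestamp":"2021-09-24T06:31:42.029Z",'
    '"message":"failed to dispatch actions, error: fail to communicate with '
    'updated API client hosts: Get \\"https://kibana-dev.mydomain.com:8220/api/status?\\": '
    'context deadline exceeded","ecs.version":"1.6.0"}'
)

def failing_hosts(json_line: str) -> list[str]:
    """Return the URLs an agent error line says it tried to reach."""
    record = json.loads(json_line)
    if record.get("log.level") != "error":
        return []
    return re.findall(r'https?://[^"\s]+', record.get("message", ""))

print(failing_hosts(line))  # shows the non-proxy Fleet Server URL the agent is stuck on
```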

Am I doing something wrong or is this not achievable?

Appreciate your input.

Kev