Elastic Endpoint Security missing host

Hi,

I am running ELK v7.9.2 standalone

I just started to test Elastic Endpoint Security (from now on "EES"). I managed to enable all the necessary settings on the ELK host side and deployed EES on two different hosts: one running Windows 10, and the other on the same server where ELK is running, which runs Debian 10.

Apparently both machines were successfully enrolled.

  • Under Ingest Manager/Fleet, I am able to see both machines online, using the same agent config, and same agent version.
  • Under Observability/Metrics, I am able to see data from both machines.
  • According to the agent logs, there are no communication issues, errors, or warnings

Questions:

  • Under Security/Administration/Hosts, I am able to see only the Windows machine. Is it expected that machines running Linux/Debian are not shown here, only Windows or Mac? If Debian hosts should appear here too, can you please help me figure out what I am missing or doing wrong?
  • Under Observability/Uptime, I am able to see only the Windows machine, not the Debian one. I previously installed Heartbeat on Windows. Does EES not include Heartbeat under the hood, like it does with Metricbeat and Filebeat? If not, are there plans to include it? It would be great to install just one agent containing all the Beats (EES) on each machine instead of installing the Beats separately. If EES does include Heartbeat, can you please help me figure out what I am missing or doing wrong?

Thank you in advance

Hi @ManuelF thanks for trying out Endpoint Security.

I'll focus on your first question regarding the Security tab.

  • Under Security/Administration/Hosts, I am able to see only the Windows machine. Is it expected that machines running Linux/Debian are not shown here, only Windows or Mac? If Debian hosts should appear here too, can you please help me figure out what I am missing or doing wrong?

First, you should check if the Endpoint is successfully connecting to Elasticsearch. Can you take a look at the Endpoint logs on your Linux machine? They should be located here: /opt/Elastic/Endpoint/state/log/

In the logs, if you repeatedly see the message "Elasticsearch connection is down", then the Endpoint isn't streaming to ES any of the data that the Security tab uses.

If you do not see the message that ES connection is down, refer to this post that troubleshoots some other connection issues: Endpoint 7.9 "Degraded and dashboards" - #18 by ferullo

If it looks like you're connected to ES and streaming data, let me know what else you see in the logs, or feel free to share them directly if you're comfortable with that, and we can dive deeper.

I will pull in someone else who is more familiar with Observability to help with your second question.

Hi @Kevin_Logan,

Please forgive me for taking so long to respond. This seems to be a very useful tool and I want to try it as much as I can.

I just realized that Elastic Agent and Elastic Endpoint are two different apps that run independently, although both are installed by the Elastic Agent installer.

I tried what you said, and it looks like Endpoint is not reaching the Elasticsearch node:
$ sudo tail -f /opt/Elastic/Endpoint/state/log/endpoint-000000.log | grep "Elasticsearch connection"

Log output

{"@timestamp":"2020-10-02T20:14:19.183060099Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:24.212688642Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:29.236989575Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:34.266900353Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:39.295759086Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:44.317784004Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
{"@timestamp":"2020-10-02T20:14:49.339259232Z","agent":{"id":"89902d38-fad2-4f85-a12e-c626c7adf4c8","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":493,"thread":{"id":629}}}
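
Since each line of that log is a standalone JSON document (NDJSON), the "connection is down" events are easy to count from the shell. A minimal sketch, using two sample lines written to /tmp instead of the real endpoint-000000.log:

```shell
# Sketch: count "Elasticsearch connection is down" events in an Endpoint log.
# Two sample NDJSON lines stand in for /opt/Elastic/Endpoint/state/log/endpoint-000000.log.
cat > /tmp/endpoint-sample.log <<'EOF'
{"@timestamp":"2020-10-02T20:14:19.183060099Z","message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down"}
{"@timestamp":"2020-10-02T20:14:24.212688642Z","message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down"}
EOF
DOWN_COUNT=$(grep -c 'Elasticsearch connection is down' /tmp/endpoint-sample.log)
echo "connection-down events: $DOWN_COUNT"   # prints 2 for the sample file
```

Point the same `grep -c` at the real log path to see how frequently the Endpoint loses the connection.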

What should I try next for troubleshooting? I was unable to find an elastic-endpoint service.

Is there any solution for this? Anything I can try for troubleshooting?

The logs indicate that elastic-endpoint is not connecting to Elasticsearch (both are running on the same machine). I also checked the elastic-endpoint.yaml file, and the configuration seems correct:

output:
  elasticsearch:
    api_key: 7Y-023QB7fEBSdH0q-_-:el4rkrpdSemAOMlRgu1VTg
    hosts:
    - https://127.0.0.1:9200
revision: 2
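
As a sanity check, the same output settings can be exercised with curl outside of Endpoint. Elasticsearch expects the API key sent as base64 of the "id:api_key" pair in an Authorization header; the CA path below is an assumption, so substitute your own:

```shell
# Build the Authorization header the way Elasticsearch expects:
# "ApiKey " followed by base64 of the "id:api_key" value from elastic-endpoint.yaml.
API_KEY='7Y-023QB7fEBSdH0q-_-:el4rkrpdSemAOMlRgu1VTg'
AUTH="ApiKey $(printf '%s' "$API_KEY" | base64 | tr -d '\n')"
# The --cacert path is an assumption; point it at your own CA file.
curl -s --cacert /etc/elasticsearch/certs/ca.pem \
     -H "Authorization: $AUTH" https://127.0.0.1:9200/ \
  || echo "Elasticsearch not reachable from here"
```

If this curl fails with a certificate error while a plain `curl -k` succeeds, the problem is trust in the self-signed cert rather than the key or the network.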

Hi @Kevin_Logan,

Anything we can do about this issue? I really wish to have Elastic Endpoint fully running on both Windows and Linux machines.

Thank you in advance

Are you running the agent on the same host as Elasticsearch? Is that why you are using https://127.0.0.1:9200?
Can you share your elasticsearch.yml config?

@ManuelF Apologies for the late response.

One of the usual problems when the Endpoint is not connecting to ES is users running a self-signed cert. Are you using a self-signed certificate on your Elasticsearch host?
https://127.0.0.1:9200

If you are using a self-signed cert, for the time being, follow this thread, which has a workaround and more debugging options: Missing Elastic Security and endpoint integration data

This is a known issue with passing the cert down to the Endpoint which will be fixed in an upcoming release.

Let me know if the above is the issue. If not, we can dive further.


You are right. I am running elastic-agent on the same machine that runs Elasticsearch; that's why I use 127.0.0.1. This is only a dev server, though, not intended for production.

elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.0.37,127.0.0.1,localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
discovery.type: single-node
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: "/etc/elasticsearch/certs/elastic-certificates.p12"
xpack.security.transport.ssl.truststore.path: "/etc/elasticsearch/certs/elastic-certificates.p12"

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: "/etc/elasticsearch/certs/http/elasticsearch/http.p12"
xpack.security.http.ssl.truststore.path: "/etc/elasticsearch/certs/http/elasticsearch/http.p12"
xpack.security.http.ssl.client_authentication: optional
xpack.security.http.ssl.verification_mode: certificate

xpack.security.authc.api_key.enabled: true

Hi Kevin,

No worries about the delay. I appreciate your help. Indeed, I am using a self-signed certificate. So far I have been able to use it to configure Elasticsearch, Kibana, and Logstash, but I was not expecting it to be an issue for Elastic Endpoint. I understand that this app is still under development, so I'm confident the Elastic team will find a reliable solution. In the meantime I'll try the workaround and let you know how it goes.

Thank you

It's an issue with the certificates getting passed down to Endpoint and Endpoint not trusting them properly. Currently the fix is to add the CA for your certs to the endpoint machine's local trusted CA certificate store.

Hi @Kevin_Logan and @ylasri,

When I was setting up the elastic-agent, I got a trust error related to the certificate. After some research I resolved that issue by adding the CA cert to /etc/ssl/certs/, and then I was able to set up the elastic-agent, but for some reason elastic-endpoint did not pick it up the same way. Now, after reading this thread, I have added the same CA cert to /usr/local/share/ca-certificates/ and restarted the elastic-agent, although I have to say that restarting the elastic-agent does not seem to restart the elastic-endpoint process. Even with the agent service stopped, the endpoint process keeps running.

Am I missing steps?

Then all you need to do is follow the workaround provided by @Kevin_Logan in this link: Missing Elastic Security and endpoint integration data

Can you tell me how I can restart the elastic-endpoint process? Restarting elastic-agent does not seem to restart the elastic-endpoint process.

Thanks

I didn't try it on Linux, but on Windows it is a service installed by the agent, called Elastic Endpoint...

In the meantime, in order to update the Elasticsearch address the Endpoint is using, please try the following workaround:

  • In Ingest Manager, under the main Settings menu, you can update, add, or change the Kibana and Elasticsearch URLs. Click Save.
  • Afterward, under the Configurations tab of Ingest Manager, click on the Configuration assigned to the endpoint you want to update.
  • On the Configuration page, in the Integrations tab, click the actions "..." for the Elastic Endpoint Security integration and select "Edit integration"
  • On the next page, click "Save integration" in the bottom right (you do not need to make any changes).

This should trigger an update to the configuration for the agent, which will propagate down with the new global settings applied.

I am trying elastic-endpoint on Windows 10 and Debian 10. My issue is with Debian; on Windows it runs smoothly and reports fine to Security.

The issue here seems to be a known one where you are unable to set multiple Kibana addresses in the endpoint config. You can set more than one URL for Elasticsearch, but this is not the case for Kibana. The agent running on Windows is using the external IP address of the server, but the agent running on Debian has to use 127.0.0.1.

I appreciate your help and I will try the workaround.

Thank you

Hi @Kevin_Logan,

I am trying the workaround. In the meantime, could you tell me how to start/stop (or restart) the elastic-endpoint process? Restarting the elastic-agent does not restart elastic-endpoint.

Thank you

Hi @ManuelF, since Agent worked but Endpoint didn't, this appears to be a bug in Endpoint. Can you share the instructions or steps you initially took to get Agent working, so we can follow them and fix this in Endpoint? Thanks!

Hi @ManuelF

In order to restart the elastic-endpoint, you should be able to run: sudo systemctl restart ElasticEndpoint on Linux.

EDIT: in addition, you should run: update-ca-certificates to ensure that the Endpoint will recognize the new certs.

If this is still not working, try following the commands from this comment:

As @ferullo has said, this could be a bug in the Endpoint restarting upon restart of the elastic-agent, so the steps you took will be helpful.

Hi @ferullo and @Kevin_Logan,

Thank you for assisting me on this issue. I believe the Elastic Agent and all the Integrations that come with it are a great tool. I will try to provide all the details I can so you can use it to improve the app.

Some notes:

  • The Elastic Agent + Elastic Endpoint works great on a Windows 10 machine

  • The AV previously installed on this machine detected the Endpoint as malware, added it to quarantine, and then scheduled a deletion on the next system reboot.
    Fix: Add the Endpoint folder under Program Files to the exceptions list

  • I had an initial issue when I attempted to register the agent with Elasticsearch, due to the self-signed certificate, on both OSes (Windows 10 and Debian 10).
    Fix: Manually add the self-signed cert to Trusted Root Certification Authorities/Certificates (Windows) and ssl/certs/ (Debian)

    Steps-Windows:
    • Enter Start | Run | MMC.
    • Click File | Add/Remove Snap-in .
    • In the Add or Remove Snap-ins window, select Certificates and click Add.
    • Select the Computer account radio button when prompted and click Next.
    • Select Local computer (selected by default) and click Finish.
    • Back in the Add or Remove Snap-ins window, click OK.
    • In the MMC main console, click on the plus (+) symbol to expand the Certificate snap-in.
    • Navigate to Personal | Certificates pane.
    • Right-click within the Certificates panel and click All Tasks | Import to start the Certificate Import Wizard.
    • Follow the wizard to import the signed certificate along with the private key. The certificate file must be in a container format having both the end user certificate and its private key.
    • Now move the self-signed certificate from the Personal/Certificates folder to Trusted Root Certification Authorities/Certificates

    Steps-Debian:
    $ sudo cp /etc/elasticsearch/certs/http/kibana/elasticsearch-ca.pem /etc/ssl/certs/
    $ sudo update-ca-certificates
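
To confirm a CA file is well-formed before trusting it system-wide, openssl can verify it. A minimal sketch using a throwaway self-signed CA, since the real elasticsearch-ca.pem path varies per install:

```shell
# Sketch: verify that a CA certificate validates against a trust anchor.
# A throwaway self-signed CA stands in for elasticsearch-ca.pem here.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
        -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
# A self-signed CA should verify against itself. After update-ca-certificates,
# the same check can be run with -CApath /etc/ssl/certs against the real cert.
RESULT=$(openssl verify -CAfile /tmp/demo-ca.pem /tmp/demo-ca.pem)
echo "$RESULT"
```

If `openssl verify` reports `OK` against /etc/ssl/certs but Endpoint still fails, the problem is on the Endpoint side rather than in the system trust store.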

Besides the above steps to fix the initial issues I found, I just followed the official steps to install the Elastic Agent:

  • Opened Ingest Manager/Settings and set values for “Global output”.
  • Under Ingest Manager/Configurations, set one or more configuration profiles as needed to include different integrations
  • Went to Ingest Manager/Overview and clicked “Add agent” (used Fleet for deploying)
  • Assigned the configuration profile to be deployed for each agent
  • Downloaded and installed the agent on each OS
  • Enrolled the agents using the URL and token automatically generated in Kibana
  • Enabled/started the service

Notes:

  • Initially the agent was not checking in with Elasticsearch on Debian, so I edited the fleet config (/etc/elastic-agent/fleet.yml) to set host: 127.0.0.1:5601 for the Kibana connection. Note that I installed the agent on the same host that runs ELK.
  • I also had to edit the action store config (/usr/share/elastic-agent/action_store.yml) to set - https://127.0.0.1:9200 for the Elasticsearch connection (under the outputs: block). I noticed later that you can set multiple addresses for Elasticsearch, so I did that this morning.
  • Once the connection to Kibana and Elasticsearch was fixed, I restarted the elastic-agent service and the Debian agent started checking in.
  • As for the endpoint-agent, it has been working fine on Windows, but not on Debian (not connecting to Elasticsearch). I found the issue by checking the logs in /opt/Elastic/Endpoint/state/log/endpoint-000000.log. I was able to fix it just yesterday, after trying the workaround found in another post:
    • In Ingest Manager , under the main Settings menu, you can update/add/change the Kibana and Elasticsearch URLs. Click save.
    • Afterward, under the Configurations tab of Ingest Manager , click on the Configuration assigned to the Endpoint you want to update.
    • On the Configuration page, in the Integrations tab, click the actions "..." for the Elastic Endpoint Security integration and select "Edit integration"
    • On the next page, click "Save integration" in the bottom right (you do not need to make any changes).
  • The above workaround made it possible for the endpoint-agent installed on Debian to finally start sending data to Elasticsearch, but Kibana became unstable after only a couple of minutes. I was monitoring the system resources: CPU load (4 cores) was below 20% and RAM usage was 9.5 of 16 GB total. Every action I tried to execute in Kibana returned an error. Elasticsearch seemed to be too busy processing its own data plus all the data it started receiving from the elastic-agent and the endpoint agent. Other systems running on the same host kept working fine; only Kibana became unusable. This behavior was not resolved by restarting ELK, nor by rebooting the system. I had to stop elastic-agent and kill the endpoint process, then restart Kibana, and it became stable again. This morning I tried a different configuration profile, including only the system integration, hoping the load would be less aggressive, but I got the same result, so I had to stop the elastic-agent service on the Debian machine.

@ManuelF

Just my 2 cents, but remove the private key from the Windows cert store. You only want to import the public certificate from the remote device into another machine's store. It might be a workaround, but it's an insecure method: you're essentially not protecting anything, as all an adversary has to do is obtain that single cert and key to decrypt all traffic to Kibana for every endpoint you use this method on.

You might know that already, but I'm mentioning it in case someone else reads the thread and thinks it's OK.