So I upgraded my stack to 7.9 last night to try out Elastic Security. The thing is, I am able to enroll the agent using Fleet, update the configurations and so on, but my agent is not able to send any data to Elasticsearch. I have tried multiple things but it's not working. My Elasticsearch has HTTPS enabled, but there is no configuration on the Elastic Agent to disable SSL verification.
Same here: after the 7.9.0 update it seems all my data is gone; nothing is showing in Kibana.
Trying elastic-agent instead enrolls OK but doesn't create any data.
Going to restore my 7.8.1 VM snapshot.
Hi all, is it possible that you had Ingest Manager enabled in 7.8 on the cluster in question that you have now upgraded to 7.9? If so, there are some Kibana 'Saved Objects' that will need to be deleted in order for the system to work fully as intended. This is because 7.8 to 7.9 was not a supported upgrade migration path for Ingest Manager (which is now Beta in 7.9).
The Ingest Manager Saved Object types that would need to be deleted (followed by a cluster restart) are:
epm-packages
fleet-agents
fleet-agent-events
fleet-agent-actions
fleet-enrollment-api-keys
ingest-agent-configs
ingest-datasources
ingest-outputs
ingest_manager_settings
Can you try that and see?
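If it helps, here is a rough sketch of how you could enumerate those objects through the Kibana Saved Objects API before deleting them. The Kibana URL and the $KBN_AUTH credentials variable are assumptions for your setup; the script only prints the curl commands so you can review them first:

```shell
# Print one Saved Objects _find call per Ingest Manager type; review the
# output, then run the calls (and corresponding DELETEs) against your Kibana.
KIBANA_URL="${KIBANA_URL:-https://localhost:5601}"   # assumption: adjust to your Kibana
TYPES="epm-packages fleet-agents fleet-agent-events fleet-agent-actions \
fleet-enrollment-api-keys ingest-agent-configs ingest-datasources \
ingest-outputs ingest_manager_settings"
for t in $TYPES; do
  echo "curl -k -u \"\$KBN_AUTH\" \"$KIBANA_URL/api/saved_objects/_find?type=$t&per_page=1000\""
done
```

Each returned object can then be removed with a DELETE to /api/saved_objects/&lt;type&gt;/&lt;id&gt;.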
To be clear, if you started fresh on 7.9, then it would be something else, and I can guess at least a few things. Please check the guide we wrote that covers a lot of common cases!
https://www.elastic.co/guide/en/ingest-management/7.9/ingest-management-troubleshooting.html
Apart from what is there, it could be a few other things; check the product docs, but I'll pass along a few things:
- Can you confirm what you see in the top right corner of the Ingest Manager 'Settings' view in Kibana? These are the main settings. If they look sane, that's good; if not (for example, blank fields or the wrong port), that can easily be fixed by correcting them there.
- If the Elastic Agent doesn't show up at all, there was likely a communication problem between the host and Kibana. Check the Agent logs on the host for error messages, verify network connectivity from the host to Kibana with ping or a similar command, and correct whatever the test turns up.
- If the Agent shows up in Fleet but no documents show up in ES, did you remember to run it?
That's about all I can think of generally, without more info. Hope it helps!
Write back and we can keep poking at it. Thanks so much for trying it out; we look forward to all feedback!
Hi,
Thanks for your suggestions. I have checked them thoroughly and everything seems fine, and Ingest Manager was not enabled in 7.8 for us. Everything seems to be working apart from the agent sending documents to Elasticsearch. Which logs should I check to find the issue?
Do you have a self-signed certificate, or are you running on Elastic Cloud? For the self-signed certificate, this issue might be useful: https://github.com/elastic/kibana/issues/73483#issuecomment-676419501
For the logs, you should see them under ./data/logs/default. In there you have the logs for each running process, like Filebeat. I assume you should see some errors around publish in there. Could you share those here?
Thank you very much. I was using a self-signed certificate and was searching for where I could disable SSL verification. Now it is sending Metricbeat and Filebeat data, but still no Elastic Endpoint data. I can't find the Elastic Endpoint logs in the mentioned folder.
Can you verify the endpoint is running? The process should be named 'elastic-endpoint'. What OS are you running? I can give you the path to the logs. In general, if you find the directory the endpoint is running from (/Library/Elastic/Endpoint on macOS, for instance), the logs are in the state/log directory (so /Library/Elastic/Endpoint/state/log on macOS).
Hi, sorry for the confusion; the Elastic Agent is also sending data. I am not seeing any data from Endpoint Security in ES. I can see it's running, but I don't know why no data is being sent to ES.
I am running Windows 10.
I think we are also not getting any of the elastic endpoint logs. I posted what we are seeing this morning here: Security -> Administration Page not getting past Enrollment
The Elastic Stack is on Windows (Server 2016) and so is the agent (Windows 10).
We are using self-signed certs; after applying the workaround we started to see the data streams.
It does require admin rights to see the logs on the endpoint, and I found some logs here:
C:\Program Files\Elastic\Endpoint\state\log\endpoint-000001.log
Here is a snippet from the end of the logs, which indicates an issue connecting to Elasticsearch:
{"line":1392,"name":"HttpLib.cpp"}}},"message":"HttpLib.cpp:1392 Establishing GET connection to [https://192.168.5.25:9200/_cluster/health]","process":{"pid":5672,"thread":{"id":4008}}}
{"@timestamp":"2020-08-24T13:46:10.63687900Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":5672,"thread":{"id":4008}}}
{"@timestamp":"2020-08-24T13:46:10.74635300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.81954600Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.81954600Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.81954600Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.82692600Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.82864800Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.82864800Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:10.82864800Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:11.1229300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:11.1229300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5158]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:11.1229300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [4689]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:11.1229300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:11.1229300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [4689]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:13.15113300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":461,"name":"ProcessCache.cpp"}}},"message":"ProcessCache.cpp:461 Failed to remove item with pid [1976] from retired cache","process":{"pid":5672,"thread":{"id":6656}}}
{"@timestamp":"2020-08-24T13:46:13.15113300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":461,"name":"ProcessCache.cpp"}}},"message":"ProcessCache.cpp:461 Failed to remove item with pid [9816] from retired cache","process":{"pid":5672,"thread":{"id":6656}}}
{"@timestamp":"2020-08-24T13:46:15.49389900Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1051,"name":"EventUpdater.cpp"}}},"message":"EventUpdater.cpp:1051 Unsupported Security Event type: [5156]","process":{"pid":5672,"thread":{"id":10212}}}
{"@timestamp":"2020-08-24T13:46:15.65066300Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":1392,"name":"HttpLib.cpp"}}},"message":"HttpLib.cpp:1392 Establishing GET connection to [https://192.168.5.25:9200/_cluster/health]","process":{"pid":5672,"thread":{"id":4008}}}
{"@timestamp":"2020-08-24T13:46:15.68399000Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"notice","origin":{"file":{"line":65,"name":"BulkQueueConsumer.cpp"}}},"message":"BulkQueueConsumer.cpp:65 Elasticsearch connection is down","process":{"pid":5672,"thread":{"id":4008}}}
{"@timestamp":"2020-08-24T13:46:15.22806100Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":661,"name":"File.cpp"}}},"message":"File.cpp:661 ioStatusBlock.Status=0, status=0x0","process":{"pid":5672,"thread":{"id":10380}}}
{"@timestamp":"2020-08-24T13:46:15.24800600Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":661,"name":"File.cpp"}}},"message":"File.cpp:661 ioStatusBlock.Status=0, status=0x0","process":{"pid":5672,"thread":{"id":10504}}}
{"@timestamp":"2020-08-24T13:46:15.26795200Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":661,"name":"File.cpp"}}},"message":"File.cpp:661 ioStatusBlock.Status=0, status=0x0","process":{"pid":5672,"thread":{"id":10508}}}
{"@timestamp":"2020-08-24T13:46:15.15960700Z","agent":{"id":"8203e9d6-b0dc-49d8-a579-b105a67bacad","type":"endpoint"},"ecs":{"version":"1.5.0"},"log":{"level":"info","origin":{"file":{"line":661,"name":"File.cpp"}}},"message":"File.cpp:661 ioStatusBlock.Status=0, status=0x0","process":{"pid":5672,"thread":{"id":4244}}}
It almost seems like the elastic-endpoint.exe agent is trying to ship logs directly to Elasticsearch. Is that how Filebeat and Metricbeat work as well? If so, then maybe the Elastic Endpoint binary is not taking the self-signed certificate into consideration. We do see the bad cert in the Elasticsearch logs:
[2020-08-24T08:57:39,849][WARN ][o.e.h.AbstractHttpServerTransport] [node-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/192.168.5.25:9200, remoteAddress=192.168.5.71:55441}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
...trimmed
I'm also having the same problem. It looks like the Elastic Endpoint is trying to send directly to Elasticsearch without the certificate, resulting in a bad certificate error. So far I have not had any response.
You are correct: elastic-endpoint.exe ships logs directly to Elasticsearch. Filebeat and Metricbeat do the same when run via the agent.
I agree that it doesn't seem like the Elastic Endpoint binary is using the self-signed cert. Can you please provide the Elastic Endpoint configuration?
It should be located here: C:\Program Files\Elastic\Endpoint\elastic-endpoint.yaml
Please only provide what's at these YAML paths:
output.elasticsearch.ssl.verification_mode
output.elasticsearch.ssl.certificate_authorities
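For reference, if those settings were present, they would sit in the config roughly like this (a sketch only, using the field names above; the CA path is a placeholder):

```yaml
output:
  elasticsearch:
    ssl:
      verification_mode: certificate   # or "full"; "none" skips verification
      certificate_authorities:
        - C:\path\to\ca.crt            # placeholder path to your CA cert
```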
Hey Nick!
Those two settings don't appear in the config.
output:
  elasticsearch:
    api_key: redacted
    hosts:
      - https://192.168.4.79:9200
      - https://192.168.4.67:9200
      - https://192.168.4.114:9200
revision: 5
Just to clarify, you used --insecure when enrolling the agent, correct?
Yes sir.
Thanks for bringing this to our attention, we are actively chasing down the issue.
To move forward, I wonder if you would be willing to run an experiment.
Can you add the self-signed certificate to the certificates trusted by the Windows endpoint?
Also, the hosts would likely need to be hostnames instead of IP addresses.
I was able to work around the issue with self-signed certs by importing the local CA cert into the trusted root store for the computer account.
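For anyone else hitting this, the commands were along these lines, run from an elevated prompt on the affected Windows host (shown as comments since they are Windows-only; the certificate path is a placeholder):

```shell
# Import the local CA cert into the machine's Trusted Root store:
#   certutil -addstore -f Root C:\path\to\local-ca.crt
# Or the PowerShell equivalent:
#   Import-Certificate -FilePath C:\path\to\local-ca.crt -CertStoreLocation Cert:\LocalMachine\Root
```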
Hi there,
I'm having a similar issue, but I get an error in the elastic-agent logs: "x509: certificate signed by unknown authority", despite adding the ES node certificate and the ES self-signed CA certificate to the Windows certificate store. Fleet enrollment went fine, but the agent can't check in.
I posted a topic here: Fleet enrolment okay, but checkin fails
I'm happy to help solve this issue if there are things you would like me to try.
Kind regards,
John.
This worked for me to get data sets into Ingest Manager, and I can see the host in Security Administration, but no other data is coming in: no index and no host in Security. I am assuming the missing endgame index is the issue. What do I need to change to get that data flowing?
Thanks
Setting the index patterns in Advanced Settings for Security to logs-* and metrics-* resolved this for us.