Get agent event logs via Elastic Agent Integration

Since the 8.15.0 update, which moved the Filebeat and Metricbeat event logs into separate files, we can no longer monitor the agent's event logs in Kibana through the Elastic Agent integration. All we see is a message saying "Cannot index event (status=400): dropping event! Look at the event log to view the event and cause."

Is there a way to see the root cause of these errors (the event logs) in Kibana without logging in to the host the agent runs on? As far as I understand, the separated event logs are currently not indexed into Elasticsearch by this integration. Is there a workaround, or is an update planned for the near future that will let us see all of the agent's logs through this integration?

Hi @edemir, Welcome to the community!

Yes, this has changed due to security concerns.

The design for the long-term fix is still being worked on.

So, unfortunately, logging in to the agent host remains the temporary solution for now.

EDIT: See below for another option.
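
If you do end up on the host, a minimal sketch like the one below can filter the dropped-event entries out of the separate event log files. The log directory and the `elastic-agent-event-log-*.ndjson` file pattern are assumptions from my own install and may differ per version, platform, and install method, so adjust `LOG_DIR` and `PATTERN` to your deployment.

```python
# Minimal sketch: scan the agent's separate event log files for dropped events.
# ASSUMPTIONS: LOG_DIR (tar.gz install on Linux) and the file name pattern are
# taken from my own setup; point them at wherever your agent writes event logs.
import glob
import json
import os

LOG_DIR = "/opt/Elastic/Agent/data"                  # assumption: adjust per install
PATTERN = "**/elastic-agent-event-log-*.ndjson"      # assumption: 8.15+ event log files

for path in glob.glob(os.path.join(LOG_DIR, PATTERN), recursive=True):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or rotated lines
            msg = entry.get("message", "")
            if "Cannot index event" in msg or "dropping event" in msg:
                print(path, entry.get("@timestamp"), msg)
```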


@edemir, the event logs are included in the diagnostics bundle: you can request it via Kibana, download it onto your machine, and look at the logs there. If you prefer, you can even upload them to Kibana to analyse.
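
If you'd rather script that than click through Fleet, something like the sketch below can request the bundle through Kibana's Fleet API. Treat the `request_diagnostics` and `uploads` endpoint paths as assumptions based on my reading of the Fleet API, and `KIBANA_URL`, `API_KEY`, and `AGENT_ID` as placeholders; verify the paths against the Fleet API docs for your Kibana version.

```python
# Sketch: ask Fleet to collect a diagnostics bundle for one agent, then list
# the resulting uploads. Endpoint paths are assumptions to verify against the
# Fleet API docs; the URL, API key, and agent id are placeholders.
import requests

KIBANA_URL = "https://kibana.example.com:5601"   # placeholder
API_KEY = "CHANGEME"                             # placeholder API key
AGENT_ID = "CHANGEME"                            # placeholder Fleet agent id

headers = {
    "Authorization": f"ApiKey {API_KEY}",
    "kbn-xsrf": "true",  # required by Kibana for write requests
}

# Same as clicking "Request diagnostics .zip" for the agent in Fleet.
resp = requests.post(
    f"{KIBANA_URL}/api/fleet/agents/{AGENT_ID}/request_diagnostics",
    headers=headers,
    timeout=30,
)
resp.raise_for_status()

# Later, list the uploaded bundles for this agent; each entry should give you a
# file you can download and unzip to read the event logs.
uploads = requests.get(
    f"{KIBANA_URL}/api/fleet/agents/{AGENT_ID}/uploads",
    headers=headers,
    timeout=30,
)
print(uploads.json())
```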


@stephenb @TiagoQueiroz are there any updates in this department? Is there an easy way to get the dropped events automatically now, or still not? This manual diagnostics download is tedious.

Not yet @Daantie.

I understand it can be tedious to request and download the diagnostics to get access to the event log; however, due to the security concerns, that's the best option at the moment.


@TiagoQueiroz thank you for the update. Could you elaborate more on the security concerns? How does this impact security?

We have a similar flow with a Logstash DLQ pipeline that sends DLQ messages to an index, so we can easily reprocess those messages once the problems have been fixed. Sure, we could output the agent to Logstash, but we prefer the direct connection. It makes me wonder why there isn't some DLQ option in the agent.

Events can contain sensitive/private data, in some cases it is not desirable for this data to end up in a monitoring cluster.

I understand that, but why take the possibility away entirely instead of making it optional?
Some suggestions for your team (or the people who work on this):

  • Make sending the event logs opt-in. That way users have to deliberately choose to enable it and are made aware of the consequences.
  • Introduce a DLQ for agents with the option to reprocess dropped events. Probably more work for you, but a clean solution.

I hope something like this is already in the works. Thanks anyway for all your answers!

Oh, I just noticed the new “failure store” feature for data streams, which addresses this problem: What’s new in 8.19 | Elasticsearch Guide [8.19] | Elastic
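
For what it's worth, below is a rough sketch of how I'd expect that to be used: enable the failure store for matching data streams, then search the stored failures instead of digging through the agent's local event log. The `data_streams.failure_store.enabled` cluster setting and the `::failures` search selector are my reading of the 8.19 failure store docs, so treat both as assumptions and check the linked guide; the URL, API key, and data stream names are placeholders.

```python
# Rough sketch, based on my reading of the 8.19 failure store docs.
# ASSUMPTIONS: the cluster setting name and the "::failures" selector should be
# verified against the linked guide; ES_URL / API key / pattern are placeholders.
import requests

ES_URL = "https://elasticsearch.example.com:9200"   # placeholder
HEADERS = {"Authorization": "ApiKey CHANGEME", "Content-Type": "application/json"}

# Assumption: the failure store is switched on per data-stream pattern via a cluster setting.
requests.put(
    f"{ES_URL}/_cluster/settings",
    headers=HEADERS,
    json={"persistent": {"data_streams.failure_store.enabled": "logs-*"}},
    timeout=30,
).raise_for_status()

# Assumption: documents that failed to be indexed can then be searched with the
# ::failures selector on the data stream name.
failures = requests.post(
    f"{ES_URL}/logs-system.syslog-default::failures/_search",
    headers=HEADERS,
    json={"size": 10, "sort": [{"@timestamp": "desc"}]},
    timeout=30,
)
print(failures.json())
```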
