Let's say I want to monitor the system logs of a cluster of servers with Filebeat. Because using one data stream per host doesn't scale well, and the cluster is logically part of the same application, all hosts write to the same data stream with minimal write-only privileges.
I'm struggling to figure out how to avoid two potential obstacles during post-mortem analysis in the case of a compromised host:
The lack of a server-side timestamp (a "processed_at" in addition to the normal timestamp) allows a malicious client to date logs to an arbitrary time. While this doesn't allow overwriting existing documents, it makes it difficult to simply discard all events after a known-bad date.
A malicious client could impersonate other clients by adding documents with spoofed identifier fields, making it more difficult to reconstruct the actual course of events.
The ingest pipeline includes a set_security_user processor that would solve the attribution problem, but to my understanding, applying the pipeline cannot be enforced on the server side.
Hi @nf4ray, welcome to the community! Good questions and good concerns...
That is not exactly accurate: either a default or a final pipeline can be set on the server side in an index template to enforce behavior without any client-side settings.
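For example, a minimal sketch of an index template that forces every document written to a matching data stream through a server-side pipeline via the `index.final_pipeline` setting (the template name, pattern, and pipeline name here are placeholders, not anything from your setup):

```json
PUT _index_template/logs-myapp
{
  "index_patterns": ["logs-myapp-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.final_pipeline": "server-side-audit"
    }
  }
}
```

A final pipeline runs after any request-level or default pipeline and cannot be skipped or overridden by the client, which is what makes it suitable for this kind of enforcement.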
So you can absolutely enforce this server side. Most of the security modules store an event.ingested field for exactly that purpose: it records the time the event was ingested into Elasticsearch.
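A rough sketch of what such an enforced pipeline could look like, setting a server-side ingest timestamp and stamping the authenticated writer onto each document with set_security_user (the pipeline name and target field names are illustrative assumptions):

```json
PUT _ingest/pipeline/server-side-audit
{
  "description": "Server-controlled ingest timestamp and client attribution",
  "processors": [
    {
      "set": {
        "field": "event.ingested",
        "value": "{{{_ingest.timestamp}}}"
      }
    },
    {
      "set_security_user": {
        "field": "user",
        "properties": ["username", "realm"]
      }
    }
  ]
}
```

Because both fields are written by the pipeline on the server, a compromised host can neither backdate event.ingested nor spoof the authenticated user recorded for its documents, as long as the pipeline is enforced as a final pipeline in the index template.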