If several systems (customers) are shipping logs via a Filebeat-Logstash combination to a common Elasticsearch instance, can Elastic Security:
1. ensure customers are authenticated when the log documents are indexed, and
2. ensure that documents created by a customer in an index are tagged with a customer identifier?
For (1), I am considering representing customers as 'users' in Elastic Security, so that the Beats transfer (logs and metrics) takes place as an authenticated user (one unique user per customer).
For (2), log and metric data are persisted with the customer identifier as a field in the document.
You cannot currently pass authentication from Beats through Logstash into Elasticsearch.
What you can do is use TLS certificate-based authentication from Beats to Logstash, which ensures that you are only processing data from authorized Filebeat processes, and then pass credentials from Logstash to Elasticsearch to handle role-based security.
Depending on your exact security requirements, there are three basic options:
1. Have a single Beats input pipeline in Logstash (see the first sketch after this list) that:
a. forces client certificates using ssl_verify_mode: force_peer
b. validates those against your certificate-issuing CA; see ssl_certificate_authorities
c. adds the certificate metadata to the input event using ssl_peer_metadata: true (see the source code for the details on the metadata)
d. uses that metadata to add a field in the Elasticsearch output that holds the subject of the TLS certificate
e. has Logstash authenticate to Elasticsearch as a single "logstash_writer" user, and trusts that Logstash has done all the validation that is needed.
2. Have each different Beats user connect to a different Logstash pipeline (see the second sketch below).
a. You'd probably need a CA per Beats user so that Logstash could automatically handle the authentication for you, but you could also do it by collecting ssl_peer_metadata and then having a filter that drops any messages that are from the wrong user.
b. Each pipeline could then connect to Elasticsearch as its own user (e.g. logstash_beats_user1), and you can apply additional security there.
c. You could add the customer identifier either in the Logstash pipeline or in an Elasticsearch ingest pipeline.
3. Don't use Logstash at all, and connect Beats directly to Elasticsearch with an ingest node pipeline (see the third sketch below).
a. You lose some of the power of Logstash, but you get to remove an extra component, so it's a trade-off.
b. Each Filebeat process can connect to Elasticsearch using a different user ID.
c. The ingest node can add the user ID to the incoming document.
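To make option 1 concrete, here is a minimal sketch of such a pipeline. The hosts, paths, index name and password variable are placeholders, and the exact layout of the tls_peer metadata depends on your plugin version (check the source, as noted above):

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"
    # Require and validate a client certificate from every Filebeat
    ssl_verify_mode => "force_peer"
    ssl_certificate_authorities => ["/etc/logstash/certs/beats-ca.crt"]
    # Expose the peer certificate details under [@metadata][tls_peer]
    ssl_peer_metadata => true
  }
}

filter {
  # Tag the event with the certificate subject as the customer identifier
  mutate {
    add_field => { "[customer][id]" => "%{[@metadata][tls_peer][subject]}" }
  }
}

output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    user => "logstash_writer"
    password => "${LOGSTASH_WRITER_PWD}"
    index => "customer-logs-%{+YYYY.MM.dd}"
  }
}
```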
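For option 2a, the variant without a per-user CA could be a filter at the top of each customer's pipeline that rejects events presented with any other certificate; the subject string, user name and index name here are hypothetical:

```
filter {
  # This pipeline only accepts events from customer1's certificate
  if [@metadata][tls_peer][subject] != "CN=customer1" {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    user => "logstash_beats_user1"
    password => "${BEATS_USER1_PWD}"
    index => "customer1-logs-%{+YYYY.MM.dd}"
  }
}
```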
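For option 3, the Filebeat side is just per-customer credentials in filebeat.yml plus an opt-in ingest pipeline. All names here are examples, and note that (as mentioned further down) the pipeline is chosen by the Beat rather than enforced server-side:

```
output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  username: "filebeat_customer1"
  password: "${FILEBEAT_CUSTOMER1_PWD}"
  # Route documents through an ingest pipeline that tags them
  pipeline: "add-customer-id"
```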
There is another similar scenario where the Metricbeat --> Elasticsearch connection requires authentication and segregation. In this case, are there options similar to the ones you mentioned above for the Filebeat-Logstash-Elasticsearch combination, with and without using certificates? Moreover, in the case of a Metricbeat connection to Elasticsearch using a different user ID per customer and without any certificate, how would the ingest node pipeline configuration be able to add the user ID to the incoming document? Is the value of the 'username' field specified in the 'output.elasticsearch:' section of the Metricbeat configuration available to a pipeline in the ingest node?
Metricbeat direct to Elasticsearch is pretty simple.
Metricbeat can authenticate via username+password or via TLS certificates.
You can assign roles to those users and restrict which indices they can write to.
Unfortunately at this point in time, you cannot force them to use a particular ingest pipeline.
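For example, a role that limits a customer's Metricbeat user to its own indices could be created roughly like this (role and index names are illustrative; on 6.x the endpoint is /_xpack/security/role/... instead):

```
POST /_security/role/metricbeat_customer1
{
  "indices": [
    {
      "names": [ "metricbeat-customer1-*" ],
      "privileges": [ "create_index", "index" ]
    }
  ]
}
```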
Yes, the username is available (or the principal from the certificate if you use that for authentication).
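One way to get at it is the set_security_user ingest processor, which copies details of the authenticated user onto the incoming document. A minimal sketch, with hypothetical pipeline and field names:

```
PUT _ingest/pipeline/add-customer-id
{
  "description": "Tag incoming documents with the authenticated user",
  "processors": [
    {
      "set_security_user": {
        "field": "customer",
        "properties": [ "username" ]
      }
    }
  ]
}
```

Metricbeat would then reference this pipeline via the pipeline setting under output.elasticsearch, as in the Filebeat sketch above.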
Hi: I'm trying to access the value of the client certificate subject from an ingest node pipeline. Filebeat and Logstash are set up for mutual auth. I'm able to get the certificate details within the Logstash configuration if I use "%{[@metadata][tls_peer]}", but I am unable to get the same values in the ingest node pipeline. Is there a specific syntax to retrieve metadata within the ingest node pipeline? I tried: