Kibana Insertion of Sensitive Information into Log File (ESA-2023-25)
An issue was discovered by Elastic whereby sensitive information may be recorded in Kibana logs in the event of an error. Elastic has released Kibana 8.11.1 which resolves this issue. The error message recorded in the log may contain account credentials for the kibana_system user, API Keys, and credentials of Kibana end-users.
The issue occurs infrequently, only when an error is returned from an Elasticsearch cluster in cases where there is user interaction and an unhealthy cluster (for example, when returning circuit breaker or no shard exceptions). In Elastic Cloud environments, fewer than 5% of clusters have been identified as affected.
Updates
Nov 15, 2023 - After additional investigation by Elastic Engineering, it has been determined that Kibana versions 7.x are not affected by this issue.
Nov 15, 2023 - Additional details are added for identification and remediation of credentials in logs.
Nov 16, 2023 - Added details about the conditions that cause the issue and likelihood of occurrence. Added details on Preventing Ingest of sensitive information from logs and Redacting sensitive information from logs. The CVSS severity rating is revised to 8.0 (High) CVSS:3.1/AV:A/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H.
Dec 1, 2023 - Added suggestion for customers with self-managed monitoring clusters on Elastic Cloud.
Dec 8, 2023 - Renamed kibana_system to found-internal-kibana4-server credentials in the Elastic Cloud row of the Remediating Sensitive Information table.
Affected Versions
Kibana versions on or after 8.0.0 and before 8.11.1.
Solutions and Mitigations:
The issue is resolved in Kibana 8.11.1
Elastic Cloud
The following mitigations have been implemented in Elastic Cloud:
- We have purged sensitive data that was logged from our monitoring environment.
- We have deployed and are currently fortifying a redaction solution so that no new instances of sensitive information are logged in our monitoring environment and in customers' monitoring clusters.
For Elastic Cloud customers with self-managed monitoring clusters, affected logs should be reviewed for any potentially sensitive data; if deemed necessary, follow-up actions such as purging sensitive data from logs and rotating any potentially exposed credentials should be performed. These customers are also strongly encouraged to restart all of their monitored Kibana instances in order to apply additional preventive measures.
As additional mitigation, Elastic Cloud customers on affected versions of Kibana are advised to upgrade to 8.11.1.
Self-Managed
Users on affected versions of Kibana in self-managed environments, ECE, or ECK should upgrade to Kibana 8.11.1.
Affected logs should be reviewed for any potentially sensitive data and, if deemed necessary, follow-up actions such as purging sensitive data from logs and rotating any potentially exposed credentials should be performed.
For users that cannot upgrade, see the section "Preventing Ingest of Sensitive Information in Logs" for mitigation actions that can be applied.
Reviewing Logs for Sensitive Information
This section describes how to review logs to identify instances of potentially sensitive information in your logs.
Elastic Cloud customers with self-managed monitoring clusters
Affected log lines can be identified by running the following query against the data views that contain Kibana logs on the monitoring cluster:
message: ("headers" AND "x-elastic-product-origin" AND "authorization")
Self-Managed with Elastic Stack Monitoring
If you are using the Elastic Stack to ingest Kibana logs, affected log lines can be identified by running the following query against the data views that contain Kibana logs on the monitoring cluster:
message: ("headers" AND "x-elastic-product-origin" AND "authorization")
- If Elastic Agent collection is configured: logs-kibana.*-default
- If Filebeat collection is configured: filebeat-{version}, where {version} is the installed Kibana version.
Self-Managed without Elastic Stack Monitoring
If you are not ingesting Kibana logs into the Elastic Stack, you can search the log files on disk for occurrences where the terms "headers", "x-elastic-product-origin", and "authorization" all appear in the same log line.
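For example, the per-line check can be done with chained case-insensitive greps. The log path in the comment is a placeholder (substitute your actual Kibana log location), and the sample log line below is fabricated for illustration:

```shell
# Chained case-insensitive greps keep only lines containing all three terms.
# Against a real log file this would be:
#   grep -i 'headers' /path/to/kibana.log | grep -i 'x-elastic-product-origin' | grep -i 'authorization'
# Demonstrated here on a fabricated sample log line:
sample='{"message":"Unhandled error ... headers {\"x-elastic-product-origin\":\"kibana\",\"authorization\":\"ApiKey xxx\"}"}'
printf '%s\n' "$sample" \
  | grep -i 'headers' \
  | grep -i 'x-elastic-product-origin' \
  | grep -i 'authorization'
```

Any line that survives all three filters is a candidate for remediation.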
ECE
On ECE, affected log lines can be identified by running the following query against the cluster-logs-* indices in the logging and metrics cluster:
message: ("headers" AND "x-elastic-product-origin" AND "authorization")
ECK
If Stack Monitoring is enabled, affected log lines can be identified by running the following query against the data views that contain Kibana logs on the monitoring cluster:
message: ("headers" AND "x-elastic-product-origin" AND "authorization")
- If Elastic Agent collection is configured: logs-kibana.*-default
- If Filebeat collection is configured: filebeat-{version}, where {version} is the installed Kibana version.
Remediating Sensitive Information
If the review reveals that credentials have been included in the logs, the following remediation actions are recommended.
Installation Type | Remediation Actions
---|---
Elastic Cloud | found-internal-kibana4-server credentials are being rotated by Elastic. End-user credentials can be changed using the Management > Users UI in Kibana or the Change Password API.
Archive / Docker / RPM/DEB | kibana_system credentials can be rotated using one of the available methods if a native user is used. If service account tokens are used, delete the logged token and create a new token. End-user credentials can be changed using the Management > Users UI in Kibana or the Change Password API.
ECE | found-internal-kibana4-server credentials can be rotated using the advanced editor: 1. Go to the advanced editor. 2. Find ".resources.kibana[0].plan.cluster_topology[0].kibana.system_settings". 3. Set "elasticsearch_password" to a random string. 4. Submit. End-user credentials can be changed using the Management > Users UI in Kibana or the Change Password API.
ECK | Kibana system user credentials can be rotated by either of the following options: delete the $KIBANA_NAME-kibana-user secret in the Kibana K8s namespace and restart the ECK operator, or delete both the $KIBANA_NAME-kibana-user and $NAMESPACE-$KIBANA_NAME-kibana-user secrets in the Kibana K8s namespace. End-user credentials can be changed using the Management > Users UI in Kibana or the Change Password API.
Preventing Ingest of Sensitive Information in Logs
For users that cannot upgrade, the following mitigation actions can be applied to prevent ingestion of sensitive information into an Elasticsearch logging cluster.
The following mitigations are for customers who have enabled the monitoring features for their clusters as outlined in this guide.
If you are using the Elastic Stack to ingest Kibana logs, use ECE, or use ECK with Stack Monitoring enabled, you can use an ingest pipeline to redact sensitive information before it is ingested into the logging cluster.
- Create the remediation pipeline:
PUT _ingest/pipeline/redact-esa-2023-25
{
"description": "Prevent Insertion of Sensitive Information into Log File (ESA-2023-25)",
"processors": [
{
"dot_expander": {
"description": "Expand 'event.original'",
"field": "event.original"
}
},
{
"set": {
"field": "message",
"value": "[redacted]",
"if": "ctx.message != null && !ctx.message.toLowerCase().contains('[redacted]') && ctx.message.toLowerCase().contains('headers') && ctx.message.toLowerCase().contains('x-elastic-product-origin') && ctx.message.toLowerCase().contains('authorization')"
}
},
{
"set": {
"field": "event.original",
"value": "[redacted]",
"if": "ctx.event?.original != null && !ctx.event.original.toLowerCase().contains('[redacted]') && ctx.event.original.toLowerCase().contains('headers') && ctx.event.original.toLowerCase().contains('x-elastic-product-origin') && ctx.event.original.toLowerCase().contains('authorization')"
}
}
]
}
- Identify the indices or data streams that your Kibana logs are being ingested into.
- Identify whether these indices have a default pipeline configured by checking for the index.default_pipeline setting on the indices themselves and on the templates.

  a. If the indices DO NOT configure a default pipeline:
  - Configure the template to add the index.default_pipeline setting pointing to the example redact-esa-2023-25 pipeline.
  - Update the settings of any existing indices to use index.default_pipeline: redact-esa-2023-25.

  b. If the indices DO configure a default pipeline:
  - Identify the name of the pipeline from the setting and retrieve it: GET /_ingest/pipeline/{pipeline_name}
  - Add redact-esa-2023-25 as an additional pipeline processor to the body of the default pipeline:

    {
      "pipeline": {
        "name": "redact-esa-2023-25"
      }
    }

  - Use the PUT _ingest/pipeline API to update the default pipeline to the new configuration.

The instructions above give API usage examples. Custom pipelines can also be edited using the Ingest Pipelines UI. See Tutorial: Transform data with custom ingest pipelines.
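The Painless condition used by the set processors in the pipeline above can be sketched in Python to show which log messages would be redacted. The sample messages here are fabricated for illustration:

```python
def should_redact(message):
    """Mirror the pipeline's Painless condition: redact only when the line
    contains all three markers (case-insensitive) and is not already redacted."""
    if message is None:
        return False
    m = message.lower()
    return ("[redacted]" not in m
            and "headers" in m
            and "x-elastic-product-origin" in m
            and "authorization" in m)

# Fabricated sample messages:
print(should_redact('error ... "headers":{"x-elastic-product-origin":"kibana","authorization":"Basic xyz"}'))  # True
print(should_redact("routine log line with no credentials"))  # False
print(should_redact("[redacted]"))  # False
```

The `"[redacted]"` guard makes the pipeline idempotent, so re-running it (for example via Update By Query) does not touch lines that were already sanitized.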
Self-Managed without Elastic Stack Monitoring
If you are not ingesting Kibana logs into the Elastic Stack, you should limit access to the directory where Kibana log files are stored.
ECE logging cluster instructions
This assumes the redact-esa-2023-25 ingest pipeline has been created in the logging cluster as described above.
By default, the ECE logging cluster ships with a single template for the cluster-logs-* indices, and:
- it does not have a settings.default_pipeline
- there are no other templates with overlapping index_patterns
You should confirm this remains the case for you (GET _template/cluster-logs-* and look for default_pipeline; GET _template/ and compare the index_patterns fields, respectively), and contact support if it is not.
You can then create an overlapping template that specifies just the ingest pipeline:
PUT _template/cluster-logs-esa-2023-25-redaction
{
"index_patterns" : ["cluster-logs-*"],
"order" : 99,
"settings": {"default_pipeline": "redact-esa-2023-25"}
}
The next time the index rolls over, the message field will be replaced by "[redacted]" for offending messages. You should then redact sensitive information from already ingested logs as described below.
Redacting Sensitive Information already ingested in logs
The same ingest pipeline can be used in conjunction with the Update By Query API to redact sensitive information already recorded in logs. This is applicable to all deployment types: Elastic Cloud customers with self-managed monitoring clusters, self-managed, ECE, and ECK.
Repeat for all Kibana logging indices that contain sensitive information.
POST {environment-specific-logging-index}/_update_by_query?pipeline=redact-esa-2023-25&allow_no_indices=false&wait_for_completion=false&expand_wildcards=all&conflicts=proceed
{
"sort": [
{
"@timestamp": {
"order": "desc",
"unmapped_type": "boolean"
}
}
],
"query": {
"bool": {
"filter": [
{
"bool": {
"should": [
{
"match_phrase": {
"message": "headers"
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"should": [
{
"match_phrase": {
"message": "x-elastic-product-origin"
}
}
],
"minimum_should_match": 1
}
},
{
"bool": {
"should": [
{
"match_phrase": {
"message": "authorization"
}
}
],
"minimum_should_match": 1
}
}
]
}
}
}
To check that the update_by_query task completes successfully:
GET /_tasks/{taskId from above request}
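Because the request uses wait_for_completion=false, the task status can also be checked programmatically. A minimal sketch in Python, using a fabricated response body whose field names follow the Elasticsearch Tasks API shape (the counts are illustrative only):

```python
import json

# Fabricated response body from GET /_tasks/{taskId}; field names follow the
# Elasticsearch Tasks API, but the values here are illustrative assumptions.
raw = """
{
  "completed": true,
  "task": {"description": "update-by-query", "status": {"updated": 120, "version_conflicts": 0}},
  "response": {"failures": [], "updated": 120}
}
"""

status = json.loads(raw)

# Treat the redaction pass as done only when "completed" is true and
# the response reports no failures.
done = status["completed"] and not status["response"]["failures"]
print(done)
```

Because conflicts=proceed was set on the request, version conflicts are skipped rather than failing the task, so it is worth re-running the search query afterwards to confirm no sensitive lines remain.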
Severity:
CVSSv3.1, 8.0 (High) AV:A/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H
CVE ID: CVE-2023-46671