Hi,
We are trying to fetch an environment variable and add it as a field on every log message coming from the server where the Elastic Agent is deployed, similar to this issue link.
The Elastic Agent deployments are Fleet-enrolled.
Setup details:
Cluster 1 - Version details:
-- Elastic Agent: 8.16.6 and 8.12.2
-- ES and Kibana: 8.18.1
-- Fleet Server: 8.18.1
-- Elastic Defend Integration: v8.18.1-prerelease.0
Cluster 2 - Version details:
-- Elastic Agent: 8.16.6 and 8.12.2
-- ES and Kibana: 8.17.0
-- Fleet Server: 8.17.0
-- Elastic Defend Integration: v8.17.1
Tried the custom field options below - no luck: there are no error messages, and the data stream from that agent stops entirely.
-- ${env.VARIABLE_NAME}
-- {env.VARIABLE_NAME}
-- ${VARIABLE_NAME}
Tried adding process.env.vars, which did bring in some events with the value VARIABLE_NAME=variable_value.
Tried the enrichment value as well; no new field is added, but the data stream runs fine.
Objective:
Have this field (application ID) available in every log message so that it can be queried by application owners from a common ECE instance with the required access permissions.
Tag alerts and enrich data against other tools that also have the same VARIABLE_NAME and value.
Alright, so there are a few things I had to configure to achieve this (not completely clean):
Update the elastic-agent service to pick up an environment variable, in my case DEPLOYMENT_NAME=edragent1; ideally via an override.conf.
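As a sketch of that step, assuming the agent runs as a systemd service named elastic-agent (the variable name and value here are just this thread's example), the override could look like:

```ini
# /etc/systemd/system/elastic-agent.service.d/override.conf
# Create with: sudo systemctl edit elastic-agent
[Service]
Environment="DEPLOYMENT_NAME=edragent1"
```

After saving, run `sudo systemctl daemon-reload && sudo systemctl restart elastic-agent` so the agent process actually inherits the variable.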
Since this policy has only a Defend integration, set the advanced setting linux.advanced.document_enrichment.fields to Custom.app_id=${env.DEPLOYMENT_NAME}.
You should then be able to get that field; however, it will not be mapped, and that has to be handled on the template side.
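One way to handle the mapping side, assuming the events land in an endpoint data stream such as logs-endpoint.events.process (the data stream name and the keyword type here are assumptions; repeat for whichever endpoint data streams you care about), is to put the mapping in the @custom component template that Fleet-managed index templates pick up:

```json
PUT _component_template/logs-endpoint.events.process@custom
{
  "template": {
    "mappings": {
      "properties": {
        "Custom": {
          "properties": {
            "app_id": { "type": "keyword" }
          }
        }
      }
    }
  }
}
```

The mapping only applies to new backing indices, so a rollover of the data stream is needed before the field becomes queryable.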
Conclusion - the custom field option in the policy settings will not work with the Elastic Defend integration unless someone comes up with a WOW solution!! Please help.
I think that might be the case. Do we know if there is a possibility to have it?
Also, we could add a warning when adding custom fields at the policy level for this integration type, similar to what happens when you select the Fleet Server integration.
We are now going ahead with a pipeline processor to map and rename the unmapped field.
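A minimal sketch of that pipeline, assuming the same logs-endpoint.events.process data stream and an assumed target field name app.id (Fleet calls a @custom ingest pipeline for the data stream if one exists), could be:

```json
PUT _ingest/pipeline/logs-endpoint.events.process@custom
{
  "processors": [
    {
      "rename": {
        "field": "Custom.app_id",
        "target_field": "app.id",
        "ignore_missing": true
      }
    }
  ]
}
```

Setting ignore_missing: true means events from agents that were started without the environment variable still pass through instead of failing the pipeline.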
I think one of the issues is that this is not clear on the configuration page for the policy settings; it should have a warning saying that custom fields do not work for the Endpoint integration, in the same way that we get a warning saying that the Logstash output does not work for Fleet Server.
Sure, I will put in the request, but if there is an option to provide enrichment data/fields, they should at least be mapped, and should have ignore_missing by default. If for some reason a server does not run the service with that environment variable, it shouldn't stop sending all data streams.
We will first ensure that all the Elastic Agents running EDR are restarted so they capture that environment variable, and then we will move ahead with the enrichment fields.
We have a working solution now, after setting that in the index template and adding a custom pipeline with an ingest processor.