We currently use standalone filebeat running as a systemd service to ship custom application logs to elasticsearch. On each application server, the main application creates/updates a file at boot time and populates it with some environment variables describing the application context and some other metadata (let's call this file /appname/context/env).
These environment variables provide context to logs coming from different servers and help with aggregation and reporting. To pass these environment variables to the standalone filebeat, after installing the filebeat service, we run: sudo systemctl edit filebeat.service
and add an entry that looks like this:
[Service]
EnvironmentFile=/appname/context/env
This way, filebeat loads these environment variables every time it starts, and we leverage them in the configuration file to enrich the logs by populating some fields prior to shipping to elasticsearch. This works well.
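For illustration, a minimal sketch of the kind of enrichment described above (the variable names APP_ENV and APP_REGION are hypothetical; standalone filebeat expands ${VAR} from the process environment, which systemd populates from the EnvironmentFile):

```yaml
# filebeat.yml (excerpt) -- hypothetical variable names for illustration.
# Standalone filebeat resolves ${VAR} from the process environment,
# which systemd fills from EnvironmentFile=/appname/context/env.
processors:
  - add_fields:
      target: app
      fields:
        environment: "${APP_ENV}"
        region: "${APP_REGION}"
```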
We are testing a move to the elastic agent to ship those same logs using elastic-agent instead (via the Custom Logs integration). While the integration works in terms of fetching and shipping the application logs to elasticsearch, we have lost the ability to provide those environment variables to the log shipper and can no longer leverage them to provide context.
Is there a way to set these variables so they can be retrieved by the elastic-agent managed filebeat?
Hi @Bryan_Hamilton,
Thanks for your post, which helped me start working on your question.
I managed to pass environment variables to the filebeat run by a centrally managed elastic-agent.
My test:
Stack 8.9.0 with 3 nodes on Linux Mint + one elastic-agent on Windows server.
On one of the 3 nodes, Logstash+fleet server
Elastic-agent deployed on all servers with a "Custom Logs" integration in the policy and tags=test to quickly find the results.
Inside the filebeat policy I set up the following code:
- add_fields:
    fields:
      test: "${env.MYVAR}" # or sometimes "${env.MYVAR}."
For Linux, the environment variable is set in a local file, /etc/sysconfig/elastic-agent, which you can see referenced when you run: sudo systemctl edit elastic-agent.service
In that file I set the variable: MYVAR=This is a good test
Restart elastic-agent to take the variable into account.
For Windows, add the variable in the system environment variables (System Properties) and restart the elastic-agent service afterwards.
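The Linux steps above, sketched as shell commands (run as root; the file path is the one the elastic-agent unit already references on my system):

```shell
# Append the variable to the file the elastic-agent systemd unit
# reads as its environment file.
echo 'MYVAR=This is a good test' >> /etc/sysconfig/elastic-agent

# Restart the agent so it picks up the new environment.
systemctl restart elastic-agent
```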
After triggering the filebeat custom log check, I found in my .log datastream record the value:
"fields": {
"test": "This is a good test"
}
Warning: it seems that sometimes I need to put a character like '.' or a space after ${env.MYVAR}, otherwise the agent becomes Unhealthy (see the example above).
I need to run more tests, but it seems to work, and I hope I will be able to use it inside an MSSQL integration to target ${host.name} and ${mssqlinst}, which will be set in the environment variables by my DBA.
Thank you for your input and for your time testing this out. Your input was very informative and helped me understand what I was missing. In my case my applications are all running on Ubuntu Linux. I had previously tried the following:
Set up a cron job to copy /appname/context/env to /etc/sysconfig/elastic-agent at boot time
Call the variables from the config file:
- add_fields:
    fields:
      test: "${MYVAR}"
I was using the standalone filebeat way of calling variables and was not using the env. prefix in the variable call. After adding env., everything is working as expected. I haven't had to add a . at the end so far, but I will run some more tests.
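For reference, the boot-time copy I described can be done with an @reboot crontab entry (a sketch; it assumes root's crontab and that the application has already written /appname/context/env by the time it runs):

```shell
# root's crontab: copy the app context file to the location the
# elastic-agent systemd unit reads as an environment file, then
# restart the agent so it loads the fresh variables.
@reboot cp /appname/context/env /etc/sysconfig/elastic-agent && systemctl restart elastic-agent
```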