Tagging incoming data per environment

Hi.
We have a number of environments accessible via jumpboxes. Each environment uses NAT, and internally they all contain the same servers on the same internal IP addresses, e.g.:
EnvironmentA contains server1 on 192.168.0.1, server2 on 192.168.0.2, ...
EnvironmentB also contains server1 on 192.168.0.1, server2 on 192.168.0.2, ...
We wish to ingest Windows event logs from servers in each environment and use Fleet to manage the agents (via the custom Windows log integration).
Each environment contains nginx as a reverse proxy in front of both the Fleet Server and the Elasticsearch nodes. I have configured nginx to add a custom "Server_Environment" header (e.g. set to EnvironmentA).
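For reference, the header injection described above would look roughly like this in the nginx config (the upstream address and port here are placeholders, and the header name is the one from my setup):

```nginx
server {
    listen 8220 ssl;

    location / {
        # Tag every proxied request with the environment name.
        proxy_set_header Server_Environment "EnvironmentA";
        proxy_pass https://fleet-server:8220;
    }
}
```

Note that `proxy_set_header` only adds the header to the HTTP request in transit; nothing copies it into the log documents themselves, which may be why the ingest pipeline can't see it.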

I can successfully install the agent (Fleet-managed) on servers in the various environments, and I tag each agent with its environment name.
That allows me to filter the agents in Fleet so I can view one environment at a time, but I have not found any way to use this tag to filter the captured log data.
I have also been trying, and failing, to set up an ingest pipeline to access the header added by nginx.
Lastly, I have attempted to set up a processor on the custom Windows log integration to add a field from an environment variable (which I could then set to the environment name on each agent). I can't get this to work, and I'm not sure it's even supported.
At the moment the best I can do is have an agent policy per environment, which allows me to use a literal value (the environment name) in the processor for the integration. That is obviously not very scalable, but it is currently the best I can do.
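To illustrate the workaround: in each per-environment policy, the integration's processor section looks something like this (the field name `environment` is just what I chose, and the value is hard-coded per policy):

```yaml
processors:
  - add_fields:
      target: ""
      fields:
        # Literal value; has to differ in every agent policy.
        environment: "EnvironmentA"
```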

Any thoughts & advice would be most welcome.

Thanks

Steve

Did you try to add a field while ingesting the data? (Elastic Agent)

Hi. I did attempt this but couldn't get it to work. I couldn't find any examples of reading in x-header data - do you know of any?
What I'm doing now is an agent policy per environment, and then an add_fields processor per integration which stamps a field with the environment name. I was hoping for something more global - I couldn't even get it to use an environment variable (set on the agent) as the value; I could only make it work with literal values.
Thanks

Hi

Currently my Filebeat (a plain install, not via Elastic Agent) adds my field "env", which I can then use in my Logstash pipeline or in Kibana dashboards. So every server knows which environment it is in and adds that info straight away.
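In a plain Filebeat install that is just a couple of lines in filebeat.yml - something like this (field name "env" as in my setup, value per host):

```yaml
# filebeat.yml
fields:
  env: "EnvironmentA"
fields_under_root: true   # put "env" at the top level instead of under "fields."
```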

So which part of the "add field" processor is not working? What do the logs say?

To make it easier & avoid having a policy per environment, you could use environment variables:
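In Filebeat, config values can reference environment variables with `${VAR}` syntax, so the same config file works everywhere and only the variable differs per host. A sketch (ENV_NAME is an assumed variable you would set on each server):

```yaml
# filebeat.yml - identical on every host; set ENV_NAME=EnvironmentA etc. per server.
fields:
  env: "${ENV_NAME:unknown}"   # falls back to "unknown" if the variable is unset
fields_under_root: true
```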

I used it myself in the context of Filebeat, Logstash, Elasticsearch & Kibana, but not yet Elastic Agent, so I can't help you with the specifics there.

A nastier solution could be to add a translation based on hostnames in your Logstash pipeline and simply deduce the env from the hostname. But my preference would be the env variables :wink:
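That hostname translation could look something like this in a Logstash filter (the hostname prefixes here are pure assumptions - adapt to whatever naming convention your servers actually follow):

```
filter {
  # Deduce the environment from the hostname prefix (assumed convention).
  if [host][name] =~ /^enva-/ {
    mutate { add_field => { "env" => "EnvironmentA" } }
  } else if [host][name] =~ /^envb-/ {
    mutate { add_field => { "env" => "EnvironmentB" } }
  }
}
```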

Good luck!
Christof

PS: I don't know if it is possible to map HTTP headers added via nginx. I would simply add the field at the source (not along the way). If you insist on this method, check the HTTP plugin: