I've set up an Elastic Agent and I am using the Microsoft DHCP integration. Everything is working great, except there is a slight issue with the "Advanced Options" inside of the integration. I'm attempting to follow this guide (Configure network map data | Elastic Security Solution [8.5] | Elastic) to help map our internal network.
This is an example of my config in Advanced Options:
So, the conditional runs OK if I use the "contains" option but not "network". I don't know if this is a limitation of using Elastic Agent, a bug, or if I'm doing something wrong.
I predict that if you change the Agent logging level to Debug, you'll see a message logged saying that the network condition did not match, because the value is an array and the network condition only matches against scalar values.
The contains condition, by contrast, is set up to check whether any value in an array contains the given string.
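To make the difference concrete, here is a minimal sketch of the two conditions in Agent processor syntax. The field names and values are illustrative only, not taken from your config:

```yaml
# A "network" condition does real CIDR math, but expects a scalar IP
# value in the field it tests:
- add_fields:
    when.network.source.ip: "10.0.0.0/8"
    fields:
      network_zone: internal

# A "contains" condition is only a substring check, but it matches if
# ANY element of an array field contains the string:
- add_fields:
    when.contains.host.ip: "10.1."
    fields:
      network_zone: internal
```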
You could do something similar using Elasticsearch Ingest Node and Painless scripting. Painless has a CIDR type with a contains function, so you could loop over the values and add the geo fields if any one IP matches.
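As a rough sketch of that approach, this simulate request loops over `host.ip` and sets a geo name when any element falls in a CIDR range. The CIDR range and the `internal-network` name are placeholders you'd replace with your own values:

```json
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "script": {
          "description": "Tag internal IPs (example CIDR, adjust to your network)",
          "source": "if (ctx.host?.ip instanceof List) { CIDR internal = new CIDR('10.0.0.0/8'); for (def ip : ctx.host.ip) { if (internal.contains(ip)) { ctx.host.geo = ['name': 'internal-network']; break; } } }"
        }
      }
    ]
  },
  "docs": [
    { "_source": { "host": { "ip": ["127.0.0.1", "10.1.2.3"] } } }
  ]
}
```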
Or we could potentially enhance the network condition to support arrays.
I have debug turned on and I don't see any errors about conditions not matching. From what I can tell, host.ip appears to be a string? This was a section of an ingested document.
This is an implication of what's mentioned in the description for the processors in the UI: Agent processors run before the integration's pipeline parses the event, so the parsed host.ip field isn't available to them yet. You would need to move this over to ingest node to have access to the parsed-out host.ip field.
Thanks for the replies! So, just to reiterate: this needs to be set in an ingest node pipeline as opposed to the processors, and the best way to go would probably be a Painless script?
Yes, a script is the only way that I know of in Ingest Node to get access to something that can do CIDR matching. This tutorial explains that if you create your own pipeline following the naming <type>-<dataset>@custom, then Fleet will invoke that pipeline after it has parsed the data with the integration's main pipeline. (Be aware that there is a bug fix related to @custom pipelines that has not been released yet: [Fleet] Add the @custom pipeline only to the main datastream ingest pipeline by nchaulet · Pull Request #144150 · elastic/kibana · GitHub. It works, but with some unintended behavior.)
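Putting that together, a @custom pipeline for this integration might look like the sketch below. I'm assuming the dataset is named `microsoft_dhcp.log` (check the data stream name on your index to confirm), and the CIDR range and geo name are placeholders:

```json
PUT _ingest/pipeline/logs-microsoft_dhcp.log@custom
{
  "description": "Tag internal IPs with geo data after the integration pipeline runs",
  "processors": [
    {
      "script": {
        "source": "if (ctx.host?.ip instanceof List) { CIDR internal = new CIDR('10.0.0.0/8'); for (def ip : ctx.host.ip) { if (internal.contains(ip)) { ctx.host.geo = ['name': 'internal-network']; break; } } }"
      }
    }
  ]
}
```

Fleet will pick this pipeline up automatically because of the naming convention, with no change needed in the integration policy.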
Below is kind of a hack since it duplicates parsing, but it could be done via the processors in Agent. It extracts the IP to a temporary field so that you can use it in the add_fields condition.
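A sketch of that hack, for the Agent "Processors" box, might look like the following. The dissect tokenizer assumes the Microsoft DHCP audit log's comma-separated layout with the IP in the fifth column, and the CIDR range and geo name are examples; verify all of these against your own logs:

```yaml
# Re-parse the raw message to pull out the IP before the
# integration's pipeline has run (duplicates the parsing work).
- dissect:
    tokenizer: "%{?id},%{?date},%{?time},%{?description},%{_temp.ip},%{?rest}"
    field: message
    trim_values: all
# Now _temp.ip is a scalar, so the network condition can match it.
- add_fields:
    when:
      network:
        _temp.ip: "10.0.0.0/8"
    target: host.geo
    fields:
      name: internal-network
# Clean up the temporary field.
- drop_fields:
    fields: ["_temp"]
```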