We have 50+ sources with Auditbeat version 7.8.0 installed. The OS on these sources is predominantly Windows.
These 50+ sources send their logs directly to a 2-node Elasticsearch cluster (there is no Kafka/Logstash pipeline between the Beats and the ES cluster).
Now I have the following setting configured in my auditbeat.yml: `output.elasticsearch.pipeline: geoip-info`
The ingest pipeline named geoip-info also exists in the cluster.
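(For context, here is a minimal sketch of what such a pipeline might look like; the exact fields and options are assumptions, not my actual definition. Note that `host.ip` is an array field, so the standard geoip setup usually targets single-valued fields like `source.ip` and `destination.ip` instead:)

```
PUT _ingest/pipeline/geoip-info
{
  "description": "Add geo information for observed IP addresses",
  "processors": [
    { "geoip": { "field": "source.ip",      "target_field": "source.geo",      "ignore_missing": true } },
    { "geoip": { "field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true } }
  ]
}
```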
Now my issue: the host.ip field is populated for all the machines in most of the documents, but it contains only private IPs, not public ones.
To make things more complex, for some hosts Auditbeat does capture the public IPs.
My requirement is to derive the geographic location from the IP addresses.
Is this the expected behaviour? If so, this method of getting geo information is inconsistent, and I would like to know what alternatives I can employ here.
Can anyone comment on this?
Btw, I tried all the other Beats too, not just Auditbeat, to make sure this is not an Auditbeat-specific issue.
@adrisr
I don't think it's shown.
This is the output of `ipconfig /all` for one of the machines that is not showing its public IP in the Auditbeat data (host.ip should ideally contain it, but to be sure I also checked `source.ip` and `destination.ip`):
@curiousmind Beats only adds to host.ip the addresses that are assigned to your interfaces, i.e. the same addresses that show up in `ipconfig /all`. In the above case you'll get:
"host.ip": [
"192.168.56.1",
"192.168.1.4"
]
(Maybe also fe80::307b:d6b1:9f06:cebe%16 and fe80::1%7; I'm not sure whether link-local IPv6 addresses are added.)
If by public IPs you mean an address applied by NAT at some upstream router, then Beats cannot possibly know about that address.
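This also explains why the geoip lookup appears inconsistent: it only yields results for hosts whose interfaces happen to carry a globally routable address, while RFC 1918 and link-local addresses carry no geo information at all. A small sketch using Python's `ipaddress` module (the helper name and sample addresses are just for illustration) shows which entries in a host.ip-style list could ever be geolocated:

```python
import ipaddress

def geolocatable(addrs):
    """Keep only globally routable addresses; private (RFC 1918) and
    link-local IPs carry no geo information, so a geoip lookup on them
    returns nothing."""
    result = []
    for a in addrs:
        # Strip a Windows-style IPv6 zone index such as "%16" before parsing.
        ip = ipaddress.ip_address(a.split("%")[0])
        if ip.is_global:
            result.append(a)
    return result

# Only the non-private address survives the filter.
print(geolocatable(["192.168.56.1", "192.168.1.4", "8.8.8.8", "fe80::1%7"]))
```

In other words, for machines behind NAT you would have to capture the public IP somewhere that actually sees it (for example, at a perimeter device or from the `source.ip` of inbound connections) rather than from the host's own interface list.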