Hosts duplicated with and without FQDN

Hi

I have Beats 7.8 installed on Windows servers (Auditbeat, Metricbeat, Filebeat, and Winlogbeat).

They all push events directly to ES.

In the SIEM, I see my hosts duplicated: once with the simple name, once with the FQDN.

It seems Winlogbeat uses the FQDN but the others do not.

Is there a way (with a sample) to align all my Beats to use either the simple name or the FQDN?

Thanks


Hello Christian,

If I am not mistaken, the field "host.hostname" contains the short name for Winlogbeat.

Hello

You are right: host.hostname is the short name on every Beat.

But Winlogbeat is the only one that puts the FQDN in the field host.name, and it seems host.name is what the SIEM Hosts table uses.

Is this something I can solve by configuring Kibana? Elasticsearch? Or is it something I need to change in my Beat .yml files?

I just need something unified, so hosts are not displayed twice.

Thanks & Regards

Hello Christian,

There is a way to do this. However, it would probably force you to reindex your Windows index in order to have the same consistent name across your whole data set; otherwise you'll still see your host twice.

Here is a small write-up on how I achieved the short name instead of the FQDN for my Windows host:

What you need:

  • Winlogbeat
  • Logstash
  • Elasticsearch

winlogbeat.yml
Configure your Winlogbeat to send to a Logstash instance.
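For reference, the corresponding output section in winlogbeat.yml could look like this (host and port are placeholders for your own Logstash endpoint):

output.logstash:
  hosts: ["YOUR_LOGSTASH_HOST:5044"]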

Add a tag to identify this specific host's logs:

processors:
  - add_tags:
      tags: ["WindowsHost"]

Logstash
You need a Logstash instance to make use of the mutate filter, more specifically its update option.

Define a pipeline that looks for beat inputs. Mine looks like this:

pipe-beats.conf

input {
  beats {
    port => 5044
  }
}

filter {
  if "WindowsHost" in [tags] {
    # Overwrite the FQDN in host.name with the short name
    mutate {
      update => { "[host][name]" => "YOUR_SHORTNAME" }
    }
    # Optional: remove the routing tag again
    mutate {
      remove_tag => ["WindowsHost"]
    }
  }
}


output {
  elasticsearch {
    hosts => ["${ES_1_PROT}://${ES_1_IP}:${ES_1_PORT_REST}", "${ES_2_PROT}://${ES_2_IP}:${ES_2_PORT_REST}"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"

    user => "redacted"
    password => "redacted"
  }
}

Disclaimer: you will need to create a pipelines.yml file in order to tell Logstash where your pipeline is located.
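For example, a minimal pipelines.yml entry for it could look like this (the path is illustrative):

- pipeline.id: pipe-beats
  path.config: "/etc/logstash/conf.d/pipe-beats.conf"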

The input part is rather self-explanatory: it listens for any Beats input coming in on port 5044 (the default Beats port).
In the filter section we first check whether the event carries the tag "WindowsHost"; if that is the case, we apply two different mutate filters.
First we update the field host.name (it is imperative that you write it as [host][name]; it will NOT work with host.name) to "YOUR_SHORTNAME".
Second, and this is optional, we remove the WindowsHost tag.

The output part just points the transformed events to your Elasticsearch instances and the index I set up for them.
Beware that I am using environment variables here.
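If you go this route, the variables just have to be present in the environment Logstash runs in, e.g. (values illustrative, shown for the first host only):

export ES_1_PROT=https
export ES_1_IP=10.0.0.1
export ES_1_PORT_REST=9200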

I hope this helps

Hi Madduck

Thanks for the answer and proposal.

Unfortunately, Logstash cannot be involved in my landscape.

I saw that there is an open issue on that topic for Winlogbeat on GitHub (https://github.com/elastic/beats/issues/18056).

In parallel, I tried to play with Winlogbeat's processors to copy the field host.hostname into host.name, but it did not work.

I was wondering if it is possible to change the SIEM settings to use host.hostname instead of host.name.

My last chance would be to add an ingest processor in Elasticsearch for every incoming Winlogbeat event.

Any other guesses?

Again, thanks for your support

Best regards

Hello again

I declared an ingest pipeline in my ES instance.
Configured my winlogbeat.yml to use the pipeline.
Works like a charm.
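For reference, a minimal sketch of what this can look like (the pipeline name is illustrative, the hosts value is a placeholder):

PUT _ingest/pipeline/winlogbeat_shortname
{
  "description": "Overwrite the FQDN in host.name with the short hostname",
  "processors": [
    {
      "set": {
        "field": "host.name",
        "value": "{{host.hostname}}"
      }
    }
  ]
}

And in winlogbeat.yml:

output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: winlogbeat_shortname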

The ticket can be closed for now, even though this should eventually be fixed via the GitHub issue.

Thanks again

Hi Christian,

Glad you found something that works for you.
To tie into your earlier question: doing it with Winlogbeat processors is also possible.

processors:
  # Remove the FQDN value first so copy_fields can write the field
  - drop_fields:
      fields: ["host.name"]
  # Then copy the short name into host.name
  - copy_fields:
      fields:
        - from: host.hostname
          to: host.name

You have to drop the field first because copy_fields can't write into already existing fields.
