Missing beat.hostname from Filebeat Index Pattern

I'm using Filebeat to collect Windows Firewall logs.

Everything is going well, except my index pattern does not include beat.hostname.

Filebeat used to report the host field, but since updating to 6.3.0 it has been removed. I was hoping to rely on beat.hostname instead, but that field is missing as well.

Can you share a bit more about your setup? Are you using Logstash?

Please share your Filebeat config file.

Yup, I'm using Logstash. Here's my logstash.conf for Windows Firewall logging.

#Windows Firewall Input - Logstash Configuration
input {
  beats {
    port => 5175
  }
}

filter {
  if [fields][logtype] == "windowsfirewall" {
    grok {
      match => { "message" => "%{GREEDYDATA:Date} %{GREEDYDATA:Time} %{WORD:action} %{WORD:protocol} %{IP:source_ip} %{IP:destination_ip} %{INT:SrcPort} %{INT:DstPort} %{INT:Size} %{GREEDYDATA:Size}" }
    }
  }
}

output {
  if [fields][logtype] == "windowsfirewall" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-windowsfirewall-%{+YYYY.MM.dd}"
    }
  }
}


And here's my client-side filebeat.yml:

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   - C:\Windows\System32\LogFiles\Firewall\*.log

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    logtype: windowsfirewall

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  _source.enabled: true

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
setup.dashboards.enabled: true

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "erased for security concerns"
  
#----------------------------- Logstash output ---------------------------------
output.logstash:
  # Boolean flag to enable or disable the output module.

  # The Logstash hosts
  hosts: ["erased for security concerns"]

Do you see the field host.name?

Nope.

What version of Logstash do you use?

What do you get if you select the file output instead of the Logstash output? Do you see beat.hostname in this case? I'm trying to find out whether we need to look closer at Beats or LS.
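For a quick test, a minimal file output in filebeat.yml could look something like this (the path is just a placeholder; also make sure output.logstash is commented out while testing, since Filebeat only allows one output at a time):

#------------------------------- File output ------------------------------------
output.file:
  # Directory and base filename for the dumped events (placeholder values)
  path: "C:/filebeat-debug"
  filename: filebeat-events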

The ELK stack, including Beats, is all on 6.3.0.

Here are the results from the Filebeat file output:

{"@timestamp":"2018-07-17T13:17:05.067Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"message":"2018-07-17 08:16:51 DROP TCP 192.168.100.10 40.97.30.130 51340 443 0 - 0 0 0 - - - SEND","input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"prospector":{"type":"log"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":879696}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867286,"prospector":{"type":"log"},"input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"beat":{"version":"6.3.0","name":"IT-TEST-01","hostname":"IT-TEST-01"},"host":{"name":"IT-TEST-01"},"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.96.32.34 50325 443 0 - 0 0 0 - - - SEND"}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867372,"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.30.130 51190 443 0 - 0 0 0 - - - SEND","fields":{"logtype":"windowsfirewall"},"prospector":{"type":"log"},"input":{"type":"log"}}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"beat":{"version":"6.3.0","name":"IT-TEST-01","hostname":"IT-TEST-01"},"host":{"name":"IT-TEST-01"},"offset":867457,"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.31.50 51191 443 0 - 0 0 0 - - - SEND","source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","prospector":{"type":"log"}}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"prospector":{"type":"log"},"input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"host":{"name":"IT-TEST-01"},"beat":{"hostname":"IT-TEST-01","version":"6.3.0","name":"IT-TEST-01"},"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.124.194 51192 443 0 - 0 0 0 - - - SEND","source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867544}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"fields":{"logtype":"windowsfirewall"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867630,"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.154.66 51193 443 0 - 0 0 0 - - - SEND","prospector":{"type":"log"},"input":{"type":"log"}}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"prospector":{"type":"log"},"input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867716,"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.30.130 51194 443 0 - 0 0 0 - - - SEND"}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"},"source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867801,"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.31.50 51195 443 0 - 0 0 0 - - - SEND","input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"prospector":{"type":"log"}}

{"@timestamp":"2018-07-17T13:16:58.066Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.3.0"},"message":"2018-07-17 08:16:43 DROP TCP /var/ossec/bin/manage_agents -r 40.97.124.194 51196 443 0 - 0 0 0 - - - SEND","source":"C:\Windows\System32\LogFiles\Firewall\pfirewall.log","offset":867888,"prospector":{"type":"log"},"input":{"type":"log"},"fields":{"logtype":"windowsfirewall"},"beat":{"name":"IT-TEST-01","hostname":"IT-TEST-01","version":"6.3.0"},"host":{"name":"IT-TEST-01"}}

OK, host.name and beat.hostname seem to be there, so we should look further at LS / Kibana.

You mentioned in the beginning that it's not in your index pattern. Did you also check some of the events to see whether the field is there or not?

How did you create the index pattern?

Correct, it's not in the index pattern, and it doesn't show up when I view the events in Discover. Here's the JSON view of an event from Kibana:

{
  "_index": "filebeat-6.3.0-windowsfirewall-2018.07.18",
  "_type": "doc",
  "_id": "epOerWQB8Q-obUirVvn1",
  "_version": 1,
  "_score": null,
  "_source": {
    "prospector": {
      "type": "log"
    },
    "message": "2018-07-18 08:39:25 ALLOW TCP 192.168.5.54 52.109.12.70 49780 443 0 - 0 0 0 - - - SEND",
    "DstPort": "443",
    "SrcPort": "49780",
    "Size": [
      "0",
      "- 0 0 0 - - - SEND"
    ],
    "input": {
      "type": "log"
    },
    "Time": "08:39:25",
    "action": "ALLOW",
    "source_ip": "192.168.100.10",
    "destination_ip": "52.109.12.70",
    "@timestamp": "2018-07-18T13:39:51.591Z",
    "Date": "2018-07-18",
    "protocol": "TCP",
    "fields": {
      "logtype": "windowsfirewall"
    },
    "source": "C:\\Windows\\System32\\LogFiles\\Firewall\\pfirewall.log"
  },
  "fields": {
    "Date": [
      "2018-07-18T00:00:00.000Z"
    ],
    "@timestamp": [
      "2018-07-18T13:39:51.591Z"
    ]
  },
  "sort": [
    1531921191591
  ]
}

I created the Index Pattern in the following steps:

1. Management > Index Patterns > Create index pattern
2. Define index pattern: Index pattern = filebeat-6.3.0-windowsfirewall-*
3. Configure settings: Time Filter field name = @timestamp
4. Create index pattern

Can you post the full mapping for the affected index here? That would help debug further.
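You can pull it from the Kibana Dev Tools console, for example with something like this (the index name is taken from the event you posted):

GET filebeat-6.3.0-windowsfirewall-2018.07.18/_mapping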

Could you also disable your filter in LS and check whether you see all the fields as expected in Discover?

Kibana wasn't letting me copy and paste, so here's a screenshot of the index pattern instead.

I removed the filter from logstash.conf and restarted the Logstash service. Still no host.name or beat.hostname, etc.

filter {
#  if [fields][logtype] == "windowsfirewall" {
#    grok {
#      match => { "message" => "%{GREEDYDATA:Date} %{GREEDYDATA:Time} %{WORD:action} %{WORD:protocol} %{IP:source_ip} %{IP:destination_ip} %{INT:SrcPort} %{INT:DstPort} %{INT:Size} %{GREEDYDATA:Size}" }
#    }
#  }
}

I found something interesting in /var/log/logstash/logstash-plain.log:

["LogStash::Filters::Mutate", {"remove_field"=>["host", "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip"],

[2018-07-19T09:04:20,561][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>30, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.14-java/lib/logstash/inputs/beats.rb:198:in `run'"}, {"thread_id"=>31, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.0.14-java/lib/logstash/inputs/beats.rb:198:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["host", "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip"], "id"=>"faa67f1edfcf800e6e2c45153b6aed6a9c416222a35f8cda864481206361cbc3"}]=>[{"thread_id"=>25, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:418:in `read_batch'"}]}}
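That remove_field list isn't in my logstash.conf, so to track down where it comes from, a quick grep across the pipeline configs does the job, e.g.:

grep -Rn "remove_field" /etc/logstash/conf.d/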

I found the issue.

Two conf files exist under /etc/logstash/conf.d/:

01-wazuh.conf and logstash.conf

01-wazuh.conf was installed when I installed the Wazuh Manager.

The filter under 01-wazuh.conf was the culprit:

mutate {
    remove_field => [ "host", "timestamp", "beat", "input_type", "tags", "count", "@version", "log", "offset", "type", "@src_ip" ]
}

For reference, here's the full 01-wazuh.conf:

# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        codec => "json_lines"
#       ssl => true
#       ssl_certificate => "/etc/logstash/logstash.crt"
#       ssl_key => "/etc/logstash/logstash.key"
    }
}
filter {
    if [data][srcip] {
        mutate {
            add_field => [ "@src_ip", "%{[data][srcip]}" ]
        }
    }
    if [data][aws][sourceIPAddress] {
        mutate {
            add_field => [ "@src_ip", "%{[data][aws][sourceIPAddress]}" ]
        }
    }
}
filter {
    geoip {
        source => "@src_ip"
        target => "GeoLocation"
        fields => ["city_name", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "host", "timestamp", "input_type", "tags", "count", "@version", "log", "offset", "type","@src_ip"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}

I thought you could have multiple conf files with different inputs and have each file's filters apply only to data coming through that file's input. That's not how it works by default: Logstash concatenates every conf file in the directory into a single pipeline, so the filters in both files were applied to events from both Beats inputs.

I'll use if statements to scope the Wazuh filters to Wazuh's own input so they won't affect my windowsfirewall Filebeat input.
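For example, something like this (just a sketch, not the official Wazuh config; the "wazuh" tag is an arbitrary label I'd add at the Wazuh beats input):

# 01-wazuh.conf (sketch)
input {
    beats {
        port => 5000
        codec => "json_lines"
        # Tag everything arriving on this input so the filters can be scoped to it
        tags => ["wazuh"]
    }
}

filter {
    if "wazuh" in [tags] {
        # ... existing Wazuh filters go here, including the mutate/remove_field ...
        # (keep "tags" out of remove_field if the output below should also key off the tag)
    }
}

output {
    if "wazuh" in [tags] {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
            document_type => "wazuh"
        }
    }
}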

Thanks for the help!
