Apache Access Logs with Filebeat -> Logstash -> Elasticsearch <- Kibana: Throws errors at default dashboard

@SirStephanikus

Nope, that is not your issue in 8.3.3.

I just ran setup with

setup.ilm.check_exists: false

./filebeat setup -e

and the mappings are correct
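For reference, a minimal filebeat.yml sketch for that setup run (host and credentials are placeholders; during setup, Filebeat must point at Elasticsearch, not Logstash):

```yaml
# filebeat.yml (sketch) -- used only for the setup phase
setup.ilm.check_exists: false

# setup must run against Elasticsearch directly
output.elasticsearch:
  hosts: ["https://localhost:9200"]   # placeholder
  username: "elastic"
  password: "secret"

setup.kibana:
  host: "https://localhost:5601"      # placeholder, needed for dashboard setup
```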

GET _cat/indices?v
health status index                                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .ds-filebeat-8.3.3-2022.09.06-000001 u3YyG9i_Sh2XduYtAaVloQ   1   1          0            0       225b           225b

GET .ds-filebeat-8.3.3-2022.09.06-000001/
{
  ".ds-filebeat-8.3.3-2022.09.06-000001": {
    "aliases": {},
    "mappings": {
      "_meta": {
        "beat": "filebeat",
        "version": "8.3.3"
      },
      "_data_stream_timestamp": {
        "enabled": true
      },
........

        "host": {
          "properties": {
            "architecture": {
              "type": "keyword",
              "ignore_above": 1024
            },
....
            "hostname": {
              "type": "keyword",  <----- This is correct
              "ignore_above": 1024
            },
            "id": {
              "type": "keyword",
              "ignore_above": 1024
            },

Generally this means Filebeat does the processing and Logstash just collects and forwards as a passthrough.

If you want this to work, you need to use this form of Logstash pipeline, shown below...

Filebeat(Module) -> Logstash (Collect / Forward) -> Elasticsearch is very common and works great...

Because you are not calling the ingest pipeline, the data fields are not being set properly.

input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}" 
      action => "create" 
      pipeline => "%{[@metadata][pipeline]}" 
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}" 
      action => "create"
      user => "elastic"
      password => "secret"
    }
  }
}
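Once setup is done against Elasticsearch, switch Filebeat's output to Logstash; a minimal sketch (host is a placeholder):

```yaml
# filebeat.yml -- after setup, comment out output.elasticsearch
# and send events to the Logstash beats input instead
output.logstash:
  hosts: ["localhost:5044"]   # must match the beats input port in the conf above
```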

Glad you are digging in deep, but you are missing some basic concepts: correct mappings, writing data, setup, Logstash as a passthrough, ingest pipelines...

Of course you can do all the parsing in Logstash yourself, but you would need to duplicate ALL the logic of the ingest pipeline and make sure all the proper fields are set if you want the dashboards to work...

I just checked the ingest pipeline with

GET _ingest/pipeline/filebeat-8.3.3-apache-access-pipeline

and made Filebeat (Apache Module) -> Logstash (passthrough) -> Elasticsearch work with the default dashboards, no problem... works great.

If you want to do that it is pretty easy

  • Clean Up everything
  • Point filebeat at elasticsearch and run setup
  • Then create the Logstash conf I referenced above that works with modules.
  • Start logstash
  • Point Filebeat to Logstash
  • Start Filebeat
  • Go Look at Dashboards
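For the Filebeat side of the steps above, the Apache module also needs to be enabled; a minimal sketch of modules.d/apache.yml (the log path is an assumption; the defaults usually work):

```yaml
# Enable first with:  ./filebeat modules enable apache
- module: apache
  access:
    enabled: true
    # var.paths is optional; shown here only as an example
    var.paths: ["/var/log/apache2/access.log*"]
  error:
    enabled: true
```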

I have written about this many times...

Just make sure you use the correct Logstash conf above.