Creating a dashboard for Apache access logs using Filebeat

--- Filebeat version 7.17.9
--- Yes, in the Dashboard section of Kibana there are plenty of dashboards to use, and the filebeat index created by Filebeat has the Apache access logs, but I am not able to attach that index to the dashboard
--- Apache access logs
--- I have enabled apache.yml using the command sudo filebeat modules enable apache
--- My Kibana is working with nginx
--- Screenshot of dashboards



---- My Kibana is running on port 443, and I am able to direct my Apache access logs to Kibana. As I have not specified any index name in filebeat.yml, it creates an index named filebeat.

Did you run

filebeat setup -e

after you enabled the module?

Can you show a sample apache access log in discover?

Are the fields parsed?

Yes, I used the command filebeat setup -e, after which the dashboards were shown in the Kibana UI.
Yes, I used that command after enabling the module.
I haven't used any kind of filter, but my logs are using the built-in field names.

Screenshot of all the fields my log stream has



Apologies, I meant a screenshot of the Filebeat Apache dashboard.

And do the Apache logs show in Discover?

I would like to see a text version of the JSON doc from Discover.

In my previous reply I sent you the screenshot of the dashboards created by Filebeat in the Kibana UI.
Yes, the Apache logs are visible in Discover under the index name filebeat.

Please click the link to the actual Filebeat Apache dashboard, open it, and show the actual dashboard, not just the link. Make sure the time picker has a proper time range; try Last 7 days.

In Kibana -> Dev Tools

Run this and show the output

GET _cat/indices/filebeat*?v

Run this and show the first document

GET filebeat-*/_search
{
  "fields" : ["*"]
}

Please share
modules.d/apache.yml

This is the output of the command you provided:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open filebeat-7.17.9-2023.05.23-000001 adireornfla 1 1 986514 0 272.3mb 136.1mb

vim /etc/filebeat/modules.d/apache.yml


- module: apache
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/etc/testinglogs/access-log.11"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:

That file will only be loaded, or attempted to be loaded, once...

So if, in your many tries, it has already been read once, it will not be read again; Filebeat keeps track of what it reads.

So perhaps add another log file or append more entries.
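For example, appending a fresh line is enough to generate a new event (a sketch with an illustrative /tmp path; your log lives elsewhere):

```shell
# Filebeat's registry records the read offset per file, so an already-read
# file yields no new events. Appending a fresh line past the recorded
# offset makes Filebeat ship just that line. (Illustrative path.)
LOG=/tmp/apache-access-test.log
printf '2.1.3.4 - - [11/Apr/2023:13:44:47 +0530] "GET /index.html HTTP/1.1" 200 1024\n' >> "$LOG"
tail -n 1 "$LOG"
```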

If you go to Discover and in the search bar put

event.dataset : "apache.access"

Do you get any results?
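The same check can be run from Dev Tools (a sketch; adjust the index pattern if yours differs):

```
GET filebeat-*/_search
{
  "query": { "term": { "event.dataset": "apache.access" } },
  "size": 1
}
```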

Nothing comes up when I search for event.dataset : "apache.access".
This is my log:
2.1.3.4 - - [11/Apr/2023:13:44:47 +0530] "GET /st/mquery-2.22.3.nin.st HTTP/2.2" 200 22329
which appears under the message field, while the rest of the fields in Discover show my staging details.
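For what it's worth, that sample line does have the usual Apache common-log shape; a rough regex check (illustrative, and much looser than the module's actual grok patterns):

```shell
# Exit status 0 (and the echoed line) means the sample has the expected
# 'IP - user [timestamp] "METHOD path HTTP/x.y" status bytes' shape.
echo '2.1.3.4 - - [11/Apr/2023:13:44:47 +0530] "GET /st/mquery-2.22.3.nin.st HTTP/2.2" 200 22329' \
  | grep -E '^[0-9.]+ - [^ ]+ \[[^]]+\] "[A-Z]+ [^"]+ HTTP/[0-9.]+" [0-9]{3} ([0-9]+|-)$'
```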

Hi @kriti_dabas

I asked to see an entire document, but you did not provide one. I ask specific questions to help debug; if you don't provide what I ask for, I cannot help.

Show me what a full Document from this looks like

GET filebeat-*/_search
{
  "fields" : ["*"]
}

It looks like your messages are not getting parsed.

Please also run and show the output

GET _ingest/pipeline/filebeat-7.17.9-apache-access-pipeline
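You can also feed the sample line straight through that pipeline with the _simulate API (a sketch; note the inner quotes must be escaped). A parsed doc comes back with fields like source.address and http.request.method; a grok failure shows up in error.message.

```
POST _ingest/pipeline/filebeat-7.17.9-apache-access-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2.1.3.4 - - [11/Apr/2023:13:44:47 +0530] \"GET /st/mquery-2.22.3.nin.st HTTP/2.2\" 200 22329"
      }
    }
  ]
}
```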

I created a file with your log sample

2.1.3.4 - - [11/Apr/2023:13:44:47 +0530] "GET /st/mquery-2.22.3.nin.st HTTP/2.2" 200 22329
2.1.3.4 - - [11/Apr/2023:13:44:48 +0530] "GET /st/mquery-2.22.3.nin.st HTTP/2.2" 200 22329
2.1.3.4 - - [11/Apr/2023:13:44:49 +0530] "GET /st/mquery-2.22.3.nin.st404 HTTP/2.2" 404 22329
2.1.3.4 - - [11/Apr/2023:13:44:50 +0530] "GET /st/mquery-2.22.3.nin.st HTTP/2.2" 200 22329
2.1.3.4 - - [11/Apr/2023:13:44:51 +0530] "GET /st/mquery-2.22.3.nin.st300 HTTP/2.2" 300 22329

Then I enabled the apache module

./filebeat modules enable apache

I edited the apache.yml

# Module: apache
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-apache.html

- module: apache
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/Users/sbrown/workspace/sample-data/discuss/apache-access.log"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

I ran setup.

./filebeat setup -e

Then I ran filebeat

./filebeat -e

Then I look in Discover


and see the messages, and they are parsed.

Then I look at the dashboard with the correct time range

So if you follow the same steps, it should work.

As I stated before, if you have already loaded your test file, Filebeat will not reload it. You need to rename it to test again.
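For example (illustrative /tmp paths; point var.paths at the new name afterwards):

```shell
# Copying the log to a new path creates a new inode, which Filebeat's
# registry has no entry for, so the contents are read again from the
# start. (A plain mv keeps the same inode and may not trigger a re-read.)
mkdir -p /tmp/testinglogs
printf 'sample line\n' > /tmp/testinglogs/access-log.11
cp /tmp/testinglogs/access-log.11 /tmp/testinglogs/access-log.retest
ls /tmp/testinglogs
```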

GET filebeat-*/_search
{
  "fields" : ["*"]
}
In the output there are logs which are confidential, and the output is very long, so can I send you just one part of it? I can't edit the whole thing.
I changed the name of my log file, restarted Filebeat, used your command ./filebeat setup -e, and after that ran ./filebeat -e. There is no error in the output of either command, and once again the dashboards are there in the Kibana UI and the logs appear under the name filebeat-* in Discover, but the dashboards are not using the filebeat index.

My logs are not getting parsed; that is why the fields in the dashboard do not match my Apache access logs.

Please also run and show the output

GET _ingest/pipeline/filebeat-7.17.9-apache-access-pipeline

Also, do you still have that log input enabled in filebeat.yml? If so, that could be the issue...

Disable that... and any other inputs.
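That is, something like this in filebeat.yml (a sketch; the path mirrors the one from earlier in the thread):

```
filebeat.inputs:
- type: log
  # Disabled so the module (and its ingest pipeline) is the only reader;
  # a plain input shipping the same file bypasses the apache pipeline.
  enabled: false
  paths:
    - /etc/testinglogs/apache-access.log
```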

At this point there is something simple....

Also, since you don't want to take the time to anonymize some of the output, it's kind of hard for me to help.

I think we both agree it's not getting parsed, so that's okay, I guess.

There's something simple going on....

You could always just uninstall and reinstall Filebeat, now that you know it's just the simple three-command setup from the Quick Start.

After all the editing of the files, there's probably something left over.

GET filebeat-*/_search
{
  "fields" : ["*"]
}
{
  "took" : 93,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.17.9-2023.05.26-000001",
        "_type" : "_doc",
        "_id" : "41SfVogB0_U351ZE1IlO",
        "_score" : 1.0,
        "_source" : {
          "@timestamp" : "2023-05-26T06:04:19.017Z",
          "ecs" : {
            "version" : "1.12.0"
          },
          "log" : {
            "offset" : 92580648,
            "file" : {
              "path" : "/etc/testinglogs/apache-access.log"
            }
          },
          "message" : """2.2.2.2- - [11/Apr/2023:23:54:32 +0530] "GET/sss/bootstrap.sss HTTP/2.2."200 65521""",
          "input" : {
            "type" : "log"
          },
          "host" : {
            "containerized" : false,
            "ip" : [
              "2.2.2.2",
              "ge08::2510:7j0c:c321:344b"
            ],
            "mac" : [
              "01:8b:9d:42:g4:07"
            ],
            "hostname" : "kriti.manifest.in",
            "architecture" : "x86_64",
            "os" : {
              "name" : "Oracle Linux Server",
              "kernel" : "5.4.17-2011.7.4.el8uek.x86_64",
              "type" : "linux",
              "platform" : "ol",
              "version" : "8.3",
              "family" : "redhat"
            },
            "name" : "kriti.manifest.in",
            "id" : "19837hifuedhiewh6"
          },
          "agent" : {
            "id" : "rythy54e578ytuh665yt57y6try",
            "name" : "kriti.manifest.in",
            "type" : "filebeat",
            "version" : "7.17.9",
            "hostname" : "kriti.manifest.in",
            "ephemeral_id" : "398uej275ruje3874jheui"
          }
        },
        "fields" : {
          "host.os.name.text" : [
            "Oracle Linux Server"
          ],
          "host.hostname" : [
            "kriti.manifest.in"
          ],
          "host.mac" : [
            "24:01:20:01:ja:nu"
          ],
          "host.ip" : [
            "2.2.2.2",
            "ge08::2510:7j0c:c321:344b"
          ],
          "agent.type" : [
            "filebeat"
          ],
          "host.os.version" : [
            "8.3"
          ],
          "host.os.kernel" : [
            "2871yw1h9u838.x86_64"
          ],
          "host.os.name" : [
            "Oracle Linux Server"
          ],
          "agent.name" : [
            "kriti.manifest.in"
          ],
          "host.name" : [
            "kriti.manifest.in"
          ],
          "host.id" : [
            "187hdhedjdfd73e74hr4"
          ],
          "host.os.type" : [
            "linux"
          ],
          "input.type" : [
            "log"
          ],
          "log.offset" : [
            92580648
          ],
          "agent.hostname" : [
            "kriti.manifest.in"
          ],
          "message" : [
            """2.2.2.2- - [11/Apr/2023:23:54:32 +0530] "GET/sss/bootstrap.sss HTTP/2.2."200 65521"""
          ],
          "host.architecture" : [
            "x86_64"
          ],
          "@timestamp" : [
            "2023-05-26T06:04:19.017Z"
          ],
          "agent.id" : [
            "kjmdo98ije3-09iuekf98"
          ],
          "ecs.version" : [
            "1.12.0"
          ],
          "host.containerized" : [
            false
          ],
          "host.os.platform" : [
            "ol"
          ],
          "log.file.path" : [
            "/etc/testinglogs/apache-access.log"
          ],
          "agent.ephemeral_id" : [
            "jndi438rjhf4778fjj3hf"
          ],
          "agent.version" : [
            "7.17.9"
          ],
          "host.os.family" : [
            "redhat"
          ]
        }
      },

This is one part of the output of this command, because it is very lengthy; it would take me a long time to edit 1000 lines, as the data is confidential.

GET _ingest/pipeline/filebeat-7.17.9-apache-access-pipeline

{
  "filebeat-7.17.9-apache-access-pipeline" : {
    "description" : "Pipeline for parsing Apache HTTP Server access logs. Requires the geoip and user_agent plugins.",
    "processors" : [
      {
        "set" : {
          "field" : "event.ingested",
          "value" : "{{_ingest.timestamp}}"
        }
      },
      {
        "rename" : {
          "field" : "message",
          "target_field" : "event.original"
        }
      },
      {
        "grok" : {
          "field" : "event.original",
          "patterns" : [
            """%{IPORHOST:destination.domain} %{IPORHOST:source.ip} - %{DATA:user.name} \[%{HTTPDATE:apache.access.time}\] "(?:%{WORD:http.request.method} %{DATA:_tmp.url_orig} HTTP/%{NUMBER:http.version}|-)?" %{NUMBER:http.response.status_code:long} (?:%{NUMBER:http.response.body.bytes:long}|-)( "%{DATA:http.request.referrer}")?( "%{DATA:user_agent.original}")?""",
            """%{IPORHOST:source.address} - %{DATA:user.name} \[%{HTTPDATE:apache.access.time}\] "(?:%{WORD:http.request.method} %{DATA:_tmp.url_orig} HTTP/%{NUMBER:http.version}|-)?" %{NUMBER:http.response.status_code:long} (?:%{NUMBER:http.response.body.bytes:long}|-)( "%{DATA:http.request.referrer}")?( "%{DATA:user_agent.original}")?""",
            """%{IPORHOST:source.address} - %{DATA:user.name} \[%{HTTPDATE:apache.access.time}\] "-" %{NUMBER:http.response.status_code:long} -""",
            """\[%{HTTPDATE:apache.access.time}\] %{IPORHOST:source.address} %{DATA:apache.access.ssl.protocol} %{DATA:apache.access.ssl.cipher} "%{WORD:http.request.method} %{DATA:_tmp.url_orig} HTTP/%{NUMBER:http.version}" (-|%{NUMBER:http.response.body.bytes:long})"""
          ],
          "ignore_missing" : true
        }
      },
      {
        "uri_parts" : {
          "field" : "_tmp.url_orig",
          "ignore_failure" : true
        }
      },
      {
        "set" : {
          "if" : "ctx.url?.domain == null && ctx.destination?.domain != null",
          "field" : "url.domain",
          "value" : "{{destination.domain}}"
        }
      },
      {
        "remove" : {
          "field" : [
            "_tmp.url_orig"
          ],
          "ignore_missing" : true
        }
      },
      {
        "set" : {
          "field" : "event.kind",
          "value" : "event"
        }
      },
      {
        "set" : {
          "field" : "event.category",
          "value" : "web"
        }
      },
      {
        "set" : {
          "field" : "event.outcome",
          "value" : "success",
          "if" : "ctx?.http?.response?.status_code != null && ctx.http.response.status_code < 400"
        }
      },
      {
        "set" : {
          "field" : "event.outcome",
          "value" : "failure",
          "if" : "ctx?.http?.response?.status_code != null && ctx.http.response.status_code > 399"
        }
      },
      {
        "grok" : {
          "field" : "source.address",
          "ignore_missing" : true,
          "patterns" : [
            "^(%{IP:source.ip}|%{HOSTNAME:source.domain})$"
          ]
        }
      },
      {
        "rename" : {
          "field" : "@timestamp",
          "target_field" : "event.created"
        }
      },
      {
        "date" : {
          "field" : "apache.access.time",
          "target_field" : "@timestamp",
          "formats" : [
            "dd/MMM/yyyy:H:m:s Z"
          ],
          "ignore_failure" : true
        }
      },
      {
        "remove" : {
          "field" : "apache.access.time",
          "ignore_failure" : true
        }
      },
      {
        "user_agent" : {
          "ignore_failure" : true,
          "field" : "user_agent.original"
        }
      },
      {
        "geoip" : {
          "target_field" : "source.geo",
          "ignore_missing" : true,
          "field" : "source.ip"
        }
      },
      {
        "geoip" : {
          "database_file" : "GeoLite2-ASN.mmdb",
          "field" : "source.ip",
          "target_field" : "source.as",
          "properties" : [
            "asn",
            "organization_name"
          ],
          "ignore_missing" : true
        }
      },
      {
        "rename" : {
          "target_field" : "source.as.number",
          "ignore_missing" : true,
          "field" : "source.as.asn"
        }
      },
      {
        "rename" : {
          "field" : "source.as.organization_name",
          "target_field" : "source.as.organization.name",
          "ignore_missing" : true
        }
      },
      {
        "set" : {
          "field" : "tls.cipher",
          "value" : "{{apache.access.ssl.cipher}}",
          "ignore_empty_value" : true
        }
      },
      {
        "script" : {
          "if" : "ctx?.apache?.access?.ssl?.protocol != null",
          "source" : """def parts = ctx.apache.access.ssl.protocol.toLowerCase().splitOnToken("v"); if (parts.length != 2) {
  return;
} if (parts[1].contains(".")) {
  ctx.tls.version = parts[1];
} else {
  ctx.tls.version = parts[1] + ".0";
} ctx.tls.version_protocol = parts[0];""",
          "lang" : "painless"
        }
      },
      {
        "script" : {
          "lang" : "painless",
          "description" : "This script processor iterates over the whole document to remove fields with null values.",
          "source" : """void handleMap(Map map) {
  for (def x : map.values()) {
    if (x instanceof Map) {
        handleMap(x);
    } else if (x instanceof List) {
        handleList(x);
    }
  }
  map.values().removeIf(v -> v == null);
}
void handleList(List list) {
  for (def x : list) {
      if (x instanceof Map) {
          handleMap(x);
      } else if (x instanceof List) {
          handleList(x);
      }
  }
}
handleMap(ctx);
"""
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ]
  }
}

I have installed and uninstalled Filebeat a number of times.

Thanks for the information; that all looks good! (except that it's not working)

filebeat.inputs:
- type: log

  # Unique ID among all inputs, an ID is required.
  #id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true    # <----- DISABLE: set to false

Did you disable the other input? If not, that may be the one reading the file, not the module.

There is something simple going on.. apologies I do not see it.

Can you share your filebeat.yml one more time?

You can try manually setting the pipeline:

  hosts: ["https://#.#.#.#:9200", "https://#.#.#.#:9200"]
  username: "manifest"
  password: "manifest"
  ssl.enabled: true
  ssl.verification_mode: none
  pipeline: filebeat-7.17.9-apache-access-pipeline    # <--- HERE