ELK Stack: Logstash shows that it's receiving log entries from Filebeat, but Elasticsearch is not creating my index

I am new to the ELK stack and wanted to test it out to see whether I want to use it. I have Elasticsearch, Kibana, and Logstash installed on one virtual machine, and Filebeat and nginx installed on another virtual machine.

I have a custom log format for my nginx access.log that looks like this:

<IP> - - [21/Dec/2023:00:46:10 +0000] "GET /favicon.ico HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

And here is the log_format definition from my nginx config:

log_format  main_ext  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '"$host" sn="$server_name" '
                      'rt=$request_time '
                      'ua="$upstream_addr" us="$upstream_status" '
                      'ut="$upstream_response_time" ul="$upstream_response_length" '
                      'cs=$upstream_cache_status' ;

I have everything configured and the Kibana dashboard is up and running with data flowing into it. The only problem I am having is that the indices I expect are not showing up in Elasticsearch or in Kibana. The only index that shows up is the default filebeat-*, and I cannot see my nginx-access-logs or nginx-error-logs indices.

Here is my Logstash config file at /etc/logstash/conf.d/beats.conf:

input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  if [type] == "nginxaccess" {
    grok {
      match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{URI:referrer}" "%{DATA:agent}" "%{IPORHOST:x_forwarded_for}" sn="%{DATA:sn}" rt=%{NUMBER:request_time} ua="%{DATA:upstream_addr}" us="%{DATA:upstream_status}" ut="%{DATA:upstream_response_time}" ul="%{DATA:upstream_response_length}" cs=%{DATA:upstream_cache_status}' }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
    }
  }

  if [type] == "nginxerror" {
    grok {
      match => { "message" => '%{TIMESTAMP_ISO8601:timestamp} \[%{WORD:log_level}\] %{NUMBER:pid}#%{NUMBER:tid}: %{GREEDYDATA:message}' }
    }
    date {
      match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
    }
  }
}



output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

And here is what my /etc/filebeat/filebeat.yml file looks like:

filebeat.inputs:

#Each - is an input. Most options can be set at the input level, so
#you can use different inputs for various configurations.
#Below are the input-specific configurations.

#filestream is an input for collecting log messages from files.
- type: filestream
  #Unique ID among all inputs, an ID is required.
  id: my-filestream-id
  #Change to true to enable this input configuration.
  enabled: true
  #Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
  fields:
    type: syslog

- type: filestream
  id: nginx-access-logs
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  fields:
    type: nginxaccess  # Set the log type to nginxaccess
    beat: nginxaccess

- type: filestream
  id: nginx-error-logs
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  fields:
    type: nginxerror  # Set the log type to nginxerror
    beat: nginxerror

filebeat.config.modules:
  #Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  #Set to true to enable config reloading
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  #The Logstash hosts
  hosts: ["<IP>:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Now, after configuring those files, I restarted both Logstash and Filebeat and ran these commands:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=[":9200"]'

sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=[':9200'] -E setup.kibana.host=:5601

Then I checked my indices, and nginx-access-logs and nginx-error-logs were still not showing up. The only ones that show are the default filebeat-* indices:

curl -X GET "http://localhost:9200/_cat/indices?v"

health status index                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open   filebeat-2023.12.20                   _nckH4WZSmuaI1umnjVR-w   1   1      91772            0     36.1mb         36.1mb       36.1mb
yellow open   filebeat-2023.12.14                   yA8Sl4lXSYG_vB8d67Cqfg   1   1      47876            0     19.6mb         19.6mb       19.6mb
yellow open   filebeat-2023.12.15                   OwUwZMdBR3myvZkuvUgF2A   1   1      75513            0     28.5mb         28.5mb       28.5mb
yellow open   .ds-filebeat-8.11.2-2023.12.08-000001 PPRThZq3RIK490NVmW605A   1   1          0            0       249b           249b         249b
yellow open   filebeat-2023.12.16                   4EOUSNCKRlOzAy1zdih6hg   1   1      79795            0     29.4mb         29.4mb       29.4mb
yellow open   filebeat-2023.12.17                   JL7TRkUgTzeGbT-M0bBD5g   1   1      64067            0     24.4mb         24.4mb       24.4mb
yellow open   filebeat-2023.12.10                   m3aWcEayTnu3r_iTxd_5aA   1   1      77669            0     27.9mb         27.9mb       27.9mb
yellow open   filebeat-2023.12.21                   mvbk8wNiQ9-rT9Vs-W-Vqg   1   1      62321            0     27.9mb         27.9mb       27.9mb
yellow open   filebeat-2023.12.11                   bXq4al_xQ62eMjAnEKR5Xw   1   1      81750            0       29mb           29mb         29mb
yellow open   filebeat-2023.12.12                   V2ojtGRTR4ixSGT_tgkhHg   1   1      70454            0       27mb           27mb         27mb
yellow open   filebeat-2023.12.13                   eRuR2uf2QdqF00VagDnjpw   1   1      72317            0     27.4mb         27.4mb       27.4mb
yellow open   filebeat-2023.12.18                   Q_IEBhszSOSK9305LsvXOg   1   1      82494            0     30.5mb         30.5mb       30.5mb
yellow open   filebeat-2023.12.19                   KsJZ2um5Q8e7v2ckP9MlGA   1   1      77330            0     29.6mb         29.6mb       29.6mb
yellow open   filebeat-2023.12.08                   C8ih6TUMRdm2AsSO5idwkw   1   1      13953            0      5.8mb          5.8mb        5.8mb
yellow open   filebeat-2023.12.09                   2Taw_nROSCiBeYJFk-kyXA   1   1      58190            0     21.2mb         21.2mb       21.2mb

Can someone please help me figure out what I'm doing wrong or what is going on? I am lost at this point!

Hi @BDeveloper, welcome to the community!

Look at the output section of your Logstash config: index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}".

Your beats input is coming from Filebeat, so %{[@metadata][beat]} translates to filebeat, and the index becomes filebeat-2023.12.12 (or whatever the date is).

So the system appears to be working exactly as defined. There is nothing in your config that would send the data to anything other than the filebeat-YYYY.MM.dd index.

Can you help me understand why you think it should go elsewhere?

I suspect there is a misunderstanding.
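
In case it helps to see what it would take to get separate indices, here is a rough sketch (illustrative only, reusing the nginx-access-logs / nginx-error-logs names you mentioned): the output would need an explicit conditional on the custom field you set in Filebeat. Note that Filebeat nests custom fields under [fields] unless fields_under_root: true is set, so the field is [fields][type], not [type]:

output {
  # Route events to an index based on the custom Filebeat field.
  # [fields][type] is where "fields: type: ..." ends up by default.
  if [fields][type] == "nginxaccess" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-access-logs-%{+YYYY.MM.dd}"
    }
  } else if [fields][type] == "nginxerror" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-error-logs-%{+YYYY.MM.dd}"
    }
  } else {
    # Everything else keeps going to the current filebeat-* index.
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}

The same [fields][type] vs [type] detail applies to the conditionals in your filter section, which may be part of why your grok is not being applied.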


Hi @stephenb, thank you for your response! I think I might just be configuring this wrong. I don't understand why this part of my Logstash beats.conf file:

if [type] == "nginxaccess" {
    grok {
      match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{URI:referrer}" "%{DATA:agent}" "%{IPORHOST:x_forwarded_for}" sn="%{DATA:sn}" rt=%{NUMBER:request_time} ua="%{DATA:upstream_addr}" us="%{DATA:upstream_status}" ut="%{DATA:upstream_response_time}" ul="%{DATA:upstream_response_length}" cs=%{DATA:upstream_cache_status}' }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
    }
  }

is not parsing correctly. My overall goal is to be able to search on a field like upstream_response_time > 2 or request_time > 1 in my Kibana dashboard and see all of the nginx access.log entries with a request time greater than one.
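
On that note, I read that grok's %{NUMBER:...} captures are stored as text unless they are converted, so I am guessing I would also need something like this inside the nginxaccess conditional for those range searches to work (not sure if this is right):

    mutate {
      # Guess on my part: convert the grok string captures to numbers
      # so range queries like request_time > 1 work in Kibana.
      convert => {
        "request_time"           => "float"
        "upstream_response_time" => "float"
        "response"               => "integer"
        "bytes"                  => "integer"
      }
    }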

Right now my index filebeat-2023.12.22 mapping looks like this:

{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "@version": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "agent": {
        "properties": {
          "ephemeral_id": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "id": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "version": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "ecs": {
        "properties": {
          "version": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "event": {
        "properties": {
          "dataset": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "module": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "original": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "timezone": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "fields": {
        "properties": {
          "beat": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "fileset": {
        "properties": {
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "host": {
        "properties": {
          "architecture": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "containerized": {
            "type": "boolean"
          },
          "hostname": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "id": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "ip": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "mac": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "name": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "os": {
            "properties": {
              "codename": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "family": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "kernel": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "name": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "platform": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "type": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "version": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              }
            }
          }
        }
      },
      "input": {
        "properties": {
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "log": {
        "properties": {
          "file": {
            "properties": {
              "device_id": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "inode": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "path": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              }
            }
          },
          "offset": {
            "type": "long"
          }
        }
      },
      "message": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "service": {
        "properties": {
          "type": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "tags": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}

Right now my nginx access logs are just showing up in the message field, and I am not able to search within my custom log format because it is not parsed correctly.

Can you help me get to the point where my custom log format is parsed and indexed so I can search by those fields (like 'upstream_response_time > 2' or 'request_time > 1') on the Kibana Dashboard?
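
Is there a recommended way to test whether my grok pattern even matches? I was thinking of something like this throwaway pipeline, where I could paste a log line on stdin and look at the parsed output (just a guess on my part, and with a simplified pattern rather than my full one):

# /tmp/test-grok.conf -- run with: sudo /usr/share/logstash/bin/logstash -f /tmp/test-grok.conf
# Paste a sample access.log line on stdin; failed matches get tagged _grokparsefailure.
input { stdin { } }
filter {
  grok {
    match => { "message" => '%{IPORHOST:clientip} - %{DATA:remote_user} \[%{HTTPDATE:timestamp}\] "%{DATA:request}" %{NUMBER:response} %{NUMBER:bytes} %{GREEDYDATA:rest}' }
  }
}
output { stdout { codec => rubydebug } }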

Hi @BDeveloper

Pretty sure there are a couple of issues... I think we can work through them, but let's gather a bit more info.

  1. What version are you on, and what documentation are you following?

  2. Did you customize your nginx log format? If not, there is an OOTB (out-of-the-box) module that will do all of this (see the sketch after this list).

  3. I am not sure what dashboards you are referring to or how you loaded them, but unless you parse and map the fields to the expected values, the dashboards will not populate. Did you enable a module?

  4. Your setup commands are not working; I will show you how to fix them. I can tell because your mapping is the wrong / default one, which will cause issues later.

  5. Can you provide a couple of sample log lines from your access and error logs? You can anonymize the IPs.
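
To show what I mean by OOTB in point 2: if the logs were in the default format, the whole thing can be done with the Filebeat nginx module instead of custom grok (a sketch only; as-is it will not parse your custom trailing fields like rt= and ut=):

sudo filebeat modules enable nginx

# then in /etc/filebeat/modules.d/nginx.yml:
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]

The module parses with an ingest pipeline in Elasticsearch, so it is normally used with output.elasticsearch rather than output.logstash.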

Perhaps I can take a look later

Hi @stephenb,

  1. I am using version 8.x, and I followed these steps to set up my ELK stack, except I replaced the Apache parts with nginx configuration. Here is the documentation I followed: https://portforwarded.com/install-elastic-elk-stack-8-x-on-ubuntu-22-04-lts/

  2. Yes, I customized my nginx log format. Here is what my nginx.conf file looks like:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##
        
        log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$host" sn="$server_name" '
                        'rt=$request_time '
                        'ua="$upstream_addr" us="$upstream_status" '
                        'ut="$upstream_response_time" ul="$upstream_response_length" '
                        'cs=$upstream_cache_status';

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

Here is my site config from /etc/nginx/sites-enabled/:

server {
       listen 81;
       listen [::]:81;

       server_name test.com;

       root /var/www/test;
       index index.html;

       access_log /var/log/nginx/access.log main_ext;

       location / {
               try_files $uri $uri/ =404;
       }
}

  3. I don't think I correctly parsed and mapped the custom fields, because I thought that was what my Logstash beats.conf file was doing. When I go to Observability -> Logs -> Stream, I can see my access logs, but they are in the 'message' field and I am not able to search by my custom fields like 'rt = 0.86', for example.

When I go to my Discover page, I thought I would be able to see the custom fields that I defined in the grok pattern in my Logstash beats.conf file, but they do not show up:

grok {
      match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{URI:referrer}" "%{DATA:agent}" "%{IPORHOST:x_forwarded_for}" sn="%{DATA:sn}" rt=%{NUMBER:request_time} ua="%{DATA:upstream_addr}" us="%{DATA:upstream_status}" ut="%{DATA:upstream_response_time}" ul="%{DATA:upstream_response_length}" cs=%{DATA:upstream_cache_status}' }
    }
  5. Here are a couple of sample log lines from my access.log:
<IP> - - [22/Dec/2023:02:54:23 +0000] "MGLNDD_<IP>" 400 166 "-" "-" "-" "test.com" sn="test.com" rt=0.067 ua="-" us="-" ut="-" ul="-" cs=-

<IP> - - [22/Dec/2023:02:54:36 +0000] "GET /.env HTTP/1.1" 404 197 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"

<IP> - - [22/Dec/2023:02:54:37 +0000] "POST / HTTP/1.1" 405 568 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36"

<IP> - - [22/Dec/2023:14:58:13 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

<IP> - - [22/Dec/2023:14:58:13 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

<IP> - - [22/Dec/2023:14:58:14 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

<IP> - - [22/Dec/2023:14:58:14 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "-" "<IP>" sn="test.com" rt=0.000 ua="-" us="-" ut="-" ul="-" cs=-

Thank you so much for working through this with me!

Hi @BDeveloper

Hmmm, OK. I / we do not "debug" third-party instructions... a quick glance looks sorta OK, but it does not show you how to debug, etc., and there is nothing in there about custom parsing. It's also not the best / correct path if your nginx logs are OOTB (there is an easier way). Yes, I understand yours are customized...

So, important question:
a) Do you WANT or NEED to run Logstash? If so, why?
or
b) Are you just running Logstash because you think you need to because of the article?

Because you do not need to run Logstash to ship and parse these files...

Log Files -> Filebeat -> (Parse with Ingest Pipeline) -> Elasticsearch

or

Log Files -> Filebeat -> (Parse with Logstash) -> Elasticsearch

Do you have a preference? Running Logstash adds another complexity... do you really need / want it?
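
Just so you can picture option (a), the shape is roughly this (a sketch only, not the actual parsing we would build for your format; the pipeline name nginx-access-custom is just an example). First you create an ingest pipeline in Elasticsearch (e.g. from Kibana Dev Tools):

PUT _ingest/pipeline/nginx-access-custom
{
  "description": "example only - grok a custom nginx access log line",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{IPORHOST:client_ip} - %{DATA:remote_user} \\[%{HTTPDATE:time}\\] \"%{DATA:request}\" %{NUMBER:status} %{NUMBER:bytes} %{GREEDYDATA:rest}"
        ]
      }
    }
  ]
}

Then Filebeat ships straight to Elasticsearch and names the pipeline in filebeat.yml:

output.elasticsearch:
  hosts: ["<your ES host>:9200"]
  pipeline: "nginx-access-custom"

No Logstash in the middle at all.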

Also, you are showing at least 3 different formats, so there will need to be at least 3 grok parsers... we can get to that after you answer the above question.

Hi @stephenb,

OK, I guess I really don't need to run Logstash if I don't have to; I just thought I needed to based on the article. Really, all I want to do is ship and parse those log files and then be able to easily review them in Kibana.

If you think this route (Log Files -> Filebeat -> Parse with Ingest Pipeline -> Elasticsearch) is the most efficient, then I think that would be best.

Hi @BDeveloper

Ok let's do this so perhaps it'll help someone else.

Can you please open a new topic with a subject like

"Help parsing custom nginx logs"

Refer to using Filebeat plus an ingest pipeline.

Put samples of your logs in there like you have in this one... Please just replace the IPs with a number like 192.168.0.1 or something so there is no confusion... We don't need your nginx configuration.

Then I will respond to get you started in a simpler way...

I will close this topic.

Hi @stephenb,

I just wanted to let you know that I created a new topic!

Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.