Hello,
I am running an nginx container in Kubernetes, and with the nginx Filebeat module I can parse the logs and view them nicely in Kibana Discover. I can also see the ingest pipelines for nginx in Kibana. My setup is:
Filebeat DaemonSet -> Elasticsearch -> Kibana.
From the documentation I can see that the nginx module should also ship with some dashboards, but when I open them I get "no data" errors.
Can someone advise what the problem is here? Why is the data not shown in the dashboards created by the nginx module, when that same data is parsed by the nginx module?
I have basically 3 containers:
- nginx-ingress-controller - parsed using the nginx module
- app-p-backend - parsed using a custom ingest pipeline (routed via call_pipelines; see the sketch below this list)
- app-p-frontend - parsed using a custom ingest pipeline (also routed via call_pipelines)
The parsing itself works great; the only problem is that the nginx data does not show up in the dashboards shipped by the nginx module.
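For context, call_pipelines is just a small conditional ingest pipeline that routes each document to a per-container pipeline. A simplified sketch of what it does is below; the sub-pipeline names here are placeholders, my real ones are different:

PUT _ingest/pipeline/call_pipelines
{
  "description": "Route container logs to per-app pipelines (sub-pipeline names are placeholders)",
  "processors": [
    {
      "pipeline": {
        "if": "ctx.kubernetes?.container?.name == 'app-p-backend'",
        "name": "app-p-backend-pipeline"
      }
    },
    {
      "pipeline": {
        "if": "ctx.kubernetes?.container?.name == 'app-p-frontend'",
        "name": "app-p-frontend-pipeline"
      }
    }
  ]
}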
Here is my Kubernetes ConfigMap YAML:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-logging
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                contains:
                  kubernetes.container.name: "nginx-ingress-controller"
              config:
                - module: nginx
                  access:
                    input:
                      type: container
                      paths:
                        - /var/log/containers/*-${data.kubernetes.container.id}.log
            - condition:
                contains:
                  kubernetes.container.name: "app-p-backend"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log
            - condition:
                contains:
                  kubernetes.container.name: "app-p-frontend"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*-${data.kubernetes.container.id}.log

    filebeat.modules:
      - module: nginx

    fields:
      logtype: kubernetes
      kubernetes.cluster.name: xyz-p-aks-cluster
      environment: develop
    fields_under_root: true

    setup.template.name: "logs_xyz_filebeat"
    setup.template.pattern: "logs_xyz_filebeat-%{[agent.version]}-*"
    setup.ilm.enabled: false

    processors:
      - add_cloud_metadata:
      - add_host_metadata:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true

    output.elasticsearch:
      pipeline: call_pipelines
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      index: "logs_skv_filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

    setup.kibana:
      host: "https://kibana_host:443"
      username: "elastic"
      password: ${ELASTICSEARCH_PASSWORD}
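One more thing, in case it is relevant: since I override the output index name, do I also need to point the dashboard setup at my custom index pattern? If I understand correctly, the module dashboards default to the filebeat-* index pattern, so I was wondering whether something along these lines is needed (just my assumption, not tested):

    # assumption: since my output index is not the default filebeat-*,
    # the dashboards may need to be pointed at the matching index pattern
    setup.dashboards.enabled: true
    setup.dashboards.index: "logs_skv_filebeat-*"

Any pointers would be appreciated.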