I just upgraded my stack to 7.7
Now, I see that I have 2 pipelines for nginx access logs:
filebeat-7.7.0-nginx-access-default
  description: "Pipeline for parsing Nginx access logs. Requires the geoip and user_agent plugins."
filebeat-7.7.0-nginx-ingress_controller-pipeline
  description: "Pipeline for parsing Nginx ingress controller access logs. Requires the geoip and user_agent plugins."
What is that new "ingress_controller-pipeline"?
And now here is what I see in the Kibana "Discover":
event.dataset: nginx.ingress_controller
message: noty.propovednik.com 2a01:4f8:161:7181::2 - - [13/May/2020:06:19:54 +0000] "GET /index.php/res/Public/%D0%93/%D0%93%D0%BE%D1%81%D0%BF%D0%BE%D0%B4%D1%8C%20%D0%B4%D0%B0%D0%B9%20%D0%B6%D0%B8%D0%B7%D0%BD%D1%8C%20%D0%B4%D1%83%D1%88%D0%B5%20%D0%BC%D0%BE%D0%B5%D0%B9/Public/S/Public/O/Oh,%20What%20a%20Change.nwc HTTP/1.1" 301 178 "-" "Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)"
error.message: Provided Grok expressions do not match field value: [noty.propovednik.com 2a01:4f8:161:7181::2 - - [13/May/2020:06:19:54 +0000] \"GET /index.php/res/Public/%D0%93/%D0%93%D0%BE%D1%81%D0%BF%D0%BE%D0%B4%D1%8C%20%D0%B4%D0%B0%D0%B9%20%D0%B6%D0%B8%D0%B7%D0%BD%D1%8C%20%D0%B4%D1%83%D1%88%D0%B5%20%D0%BC%D0%BE%D0%B5%D0%B9/Public/S/Public/O/Oh,%20What%20a%20Change.nwc HTTP/1.1\" 301 178 \"-\" \"Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/)\"]
Hm, maybe it's not broken.
It looks like this ingress controller is a Kubernetes thing. But I don't have anything to do with Kubernetes; I just have a standalone Ubuntu web server.
So it seems the nginx.ingress_controller fileset got auto-enabled during the upgrade somehow?
Looks to be caused by this PR: elastic:master ← ChrsMark:ingress_nginx (opened 7 Feb 2020).
Can be disabled like this:

```yaml
- module: nginx
  ingress_controller:
    enabled: false
```
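For comparison, a full modules.d/nginx.yml that keeps the classic filesets and opts out of the new one might look like this. This is only a sketch; the var.paths values are assumptions for a standalone Ubuntu box, and the module's own defaults work too:

```yaml
- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]   # assumed path for a standalone server
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]    # assumed path
  ingress_controller:
    enabled: false   # the new 7.7 fileset; only useful if you run ingress-nginx
```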
IMHO, ingress_controller should have been disabled by default. Or, at least, not automatically added during upgrade.
Hi Slavik,
I have some problems with the update; two of them are in that post, but I'm not having luck with the answers.
Hi!
I was using Kubernetes with this version of Filebeat: docker.elastic.co/beats/filebeat:7.3.2
In order to use the new functionality to parse nginx-ingress-controller logs, I've updated to docker.elastic.co/beats/filebeat:7.7.0
But with my configuration, only changing the image, I get some errors:
2020-05-13T14:26:25.084Z ERROR [kubernetes] add_kubernetes_metadata/matchers.go:91 Error extracting container id - source value does not contain matcher's logs_path '/var/lib/docker/containers/'.
I …
I'm very interested in your update. Do you use add_kubernetes_metadata?
With the update from 7.3 to 7.7 I cannot use this piece:

```yaml
- add_kubernetes_metadata:
    in_cluster: true
    host: ${NODE_NAME}
    matchers:
      - logs_path:
          logs_path: "/var/log/containers/"
```

How did you solve it?
Thank you very much.
jsoriano (Jaime Soriano), May 28, 2020, 1:45pm:
There was an unexpected regression in 7.7.0. A possible workaround is to disable the default matchers:

```yaml
- add_kubernetes_metadata:
    in_cluster: true
    host: ${NODE_NAME}
    default_matchers.enabled: false
    matchers:
      - logs_path:
          logs_path: "/var/log/containers/"
```
I have opened a PR to revert to pre-7.7.0 behaviour: Use indexers and matchers in config when defaults are enabled by jsoriano · Pull Request #18818 · elastic/beats · GitHub
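In case it helps to see the workaround in context, here is a minimal sketch of how it might sit in a full filebeat.yml for a Kubernetes DaemonSet. The container input, log paths, and output host are assumptions for a typical setup, not taken from this thread:

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log   # assumed container log location

processors:
  - add_kubernetes_metadata:
      in_cluster: true
      host: ${NODE_NAME}
      default_matchers.enabled: false   # workaround for the 7.7.0 regression
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ["elasticsearch:9200"]   # assumed output; adjust for your cluster
```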