Windows 10 Pro with WSL1 / Ubuntu 18.04 / Terminator and Docker Desktop
Everything works under ECK. With the following files, all pods are running 1/1.
To generate logs with Nginx, I just do F5 or Ctrl+F5 on the Welcome to nginx page.
The autodiscover configuration I set up in the filebeat.yaml file is not reliable, and I don't see where the problem comes from. Depending on the changes I make to filebeat.yaml, I retrieve, at best, data from the beats namespace (from Elasticsearch, from Filebeat itself...), but never the access.log or error.log data from Nginx. With the following files, here is what I get when I check my Elasticsearch indices:
$▶ curl -k https://localhost:9200/_cat/indices
green open .kibana-event-log-7.8.0-000001 ScLrz5y5RTifotD2QtY3pQ 1 0 1 0 5.3kb 5.3kb
green open .security-7 WboPPeRSQ4ulU_UQH0PfCw 1 0 37 0 125.5kb 125.5kb
green open .apm-custom-link CYNj4646QjaBYmtha7Rtqw 1 0 0 0 208b 208b
green open .kibana_task_manager_1 -gCCmjnJQHKaWfni1GjyNg 1 0 5 0 47kb 47kb
green open .apm-agent-configuration ndyJ7ivmTWG4WL-vBVL4eg 1 0 0 0 208b 208b
green open .kibana_1 c8WZce4IRWqn1WNKwwoBfA 1 0 4 0 31.4kb 31.4kb
Sometimes, data tagged nginx_test shows up in Kibana, but never the error or access tags.
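For reference, the sort of autodiscover block I have in filebeat.yaml looks roughly like the sketch below; this is only illustrative (the condition, paths and tags are not an exact copy of my file, and the output section is omitted):

```yaml
# Rough sketch of a kubernetes autodiscover block for Filebeat 7.8
# (illustrative only, not an exact copy of my filebeat.yaml).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.container.name: "nginx"
          config:
            - type: container
              paths:
                # default location of container logs on the node
                - /var/log/containers/*-${data.kubernetes.container.id}.log
              tags: ["nginx_test"]
```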
If it helps, here's what I get when I check the state of the Kubernetes objects after starting the stack:
$▶ sh check_all.sh
----- Statefulsets -----
NAME READY AGE
elasticsearch-es-elasticsearch 1/1 5m27s
----- Deployments -----
NAME READY UP-TO-DATE AVAILABLE AGE
kibana-kb 1/1 1 1 5m28s
my-nginx 1/1 1 1 5m29s
----- Config Map -----
NAME DATA AGE
elasticsearch-es-scripts 3 5m30s
elasticsearch-es-unicast-hosts 1 5m28s
filebeat-config 1 5m31s
logstash-configmap 2 5m30s
----- Services -----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-es-elasticsearch ClusterIP None <none> <none> 5m28s
elasticsearch-es-http ClusterIP 10.105.232.75 <none> 9200/TCP 5m31s
elasticsearch-es-transport ClusterIP None <none> 9300/TCP 5m31s
kibana-kb-http LoadBalancer 10.105.225.226 localhost 5601:32721/TCP 5m30s
logstash ClusterIP 10.101.229.220 <none> 25826/TCP,5044/TCP 5m31s
my-nginx LoadBalancer 10.102.102.119 localhost 80:31618/TCP 5m30s
----- Daemon Set -----
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
filebeat 1 1 1 1 1 <none> 5m32s
----- Pods -----
NAME READY STATUS RESTARTS AGE
elasticsearch-es-elasticsearch-0 1/1 Running 0 5m28s
filebeat-tmh7l 1/1 Running 0 5m31s
kibana-kb-f84d496df-kclsh 1/1 Running 0 5m29s
logstash 1/1 Running 0 5m31s
my-nginx-ff88c49d-nbp72 1/1 Running 0 5m30s
----- Storage Class -----
NAME PROVISIONER AGE
es-data kubernetes.io/no-provisioner 5m30s
hostpath (default) docker.io/hostpath 17d
nginx-data kubernetes.io/no-provisioner 5m30s
----- Volumes -----
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
es-data-pv 5Gi RWO Retain Bound beats/elasticsearch-data-elasticsearch-es-elasticsearch-0 es-data 5m30s
nginx-data-pv 5Gi RWO Retain Bound beats/nginx-data-pvc 5m30s
----- PVC -----
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-elasticsearch-es-elasticsearch-0 Bound es-data-pv 5Gi RWO es-data 5m28s
nginx-data-pvc Bound nginx-data-pv 5Gi RWO 5m30s
----- PW -----
PW_FOR_USING_KIBANA
What is certain is that the Nginx logs are not getting through (I can't find the error or access tags in Kibana). For the rest, it varies... Yesterday I was retrieving data about Elasticsearch, Filebeat, Kibana, etc. from the beats namespace, but nothing from Nginx. This morning, after updating Docker Desktop and rebooting the PC, I get nothing at all with the same files (which is why I said my configuration is not reliable).
As a result, I tried something like this instead of the autodiscover feature:
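(Roughly along the lines of the sketch below, with plain log inputs pointing at the Nginx log files; the /var/log/nginx path is an assumption about where those files are visible from the Filebeat pod, not my exact snippet.)

```yaml
# Sketch of static inputs instead of autodiscover (illustrative).
# Assumes the Nginx access/error logs are reachable from the Filebeat
# pod under /var/log/nginx, e.g. via a shared volume.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log
    tags: ["access"]
  - type: log
    paths:
      - /var/log/nginx/error.log
    tags: ["error"]
```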
I just ran one last test for tonight and, without having changed anything, I am retrieving data again (like yesterday), but still nothing from Nginx.
I don't know if this answers Marcin's question, but here is a series of screenshots of what I've just retrieved in Kibana:
I'm coming back here to post a link to the filebeat.yaml and volume.yaml files with which I solved my problem with Nginx's access.log and error.log data... in case it helps anyone.
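I won't copy the whole files into this post; the sketch below only illustrates the kind of wiring involved (it is not a copy of those files, and it assumes the nginx-data-pvc volume shown above is where Nginx writes access.log and error.log): Filebeat can only ship those files if its pod can read them, for example by mounting the same volume into the Filebeat DaemonSet and pointing static inputs at it.

```yaml
# Very rough sketch (not the linked files): mount the volume that Nginx
# writes its logs to into the Filebeat DaemonSet so static log inputs
# can read access.log / error.log. The usual filebeat-config ConfigMap,
# hostPath mounts and RBAC pieces are omitted for brevity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: beats
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
          volumeMounts:
            - name: nginx-logs
              mountPath: /var/log/nginx   # matches the paths in the static inputs above
              readOnly: true
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-data-pvc     # assumption: the PVC Nginx logs to
```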