I have Elasticsearch and Kibana set up in my Kubernetes cluster using ECK, and I'm now trying to get Filebeat running as well. I'm having trouble getting Filebeat to connect to Kibana. In the Filebeat logs I can see the following error:
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://data-kibana-kb-http:5601/api/status fails: fail to execute the HTTP GET request: Get http://data-kibana-kb-http:5601/api/status: x509: certificate signed by unknown authority. Response: .
If I exec into the pod and make a curl request to the Kibana host, I get the following:
curl http://data-kibana-kb-http:5601/api/status
curl: (52) Empty reply from server
However, if I switch to https, supply the username and password, and skip certificate verification with -k, I get the version detail:
curl -u elastic: -k https://data-kibana-kb-http:5601/api/status
{"name":"data-kibana","uuid":"8d692aff-383b-454a-b3c1-decc24bb5b6b","version":{"number":"7.8.0"}...
How do I get Filebeat to connect to Kibana? If there's any other information you need me to provide, just ask; I've been struggling with this for a few days now.
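My working theory is that Filebeat simply doesn't trust the self-signed CA that ECK generated for the Kibana HTTP layer. If I've understood the ECK naming convention correctly, that CA should live in a secret called data-kibana-kb-http-certs-public (I haven't verified the name), so something like this should confirm the theory:

    # Pull the CA that ECK generated for Kibana; the secret name is my assumption
    # based on the <kibana-name>-kb-http-certs-public convention.
    kubectl get secret data-kibana-kb-http-certs-public \
      -o go-template='{{ index .data "ca.crt" | base64decode }}' > kibana-ca.crt

    # If this succeeds without -k, the only problem is the untrusted CA.
    curl --cacert kibana-ca.crt -u elastic: https://data-kibana-kb-http:5601/api/status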
Filebeat
filebeat-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints.enabled: true
          hints.default_config:
            type: container
            paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
          templates:
            - condition.equals:
                kubernetes.labels.app: cgg-haproxy
              config:
                - module: haproxy
                  enabled: true
                  log:
                    input:
                      type: docker
                      containers.ids:
                        - ${data.kubernetes.container.id}
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    output.logstash:
      enabled: false
      hosts: '${LOGSTASH_URL}'
    output.elasticsearch:
      hosts: ['${ES_HOSTS}']
      username: ${ES_USER}
      password: ${ES_PASSWORD}
    setup.kibana:
      host: ${KIBANA_HOST}
    setup.dashboards.enabled: true
    setup.template.enabled: true
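Based on the error, I suspect the setup.kibana section needs to point at https and trust that CA. This is the rough shape I have in mind (untested; the mount path matches the extra volume I sketch below the DaemonSet, and I could equally change the KIBANA_HOST env value to include the https:// scheme):

    setup.kibana:
      host: "https://data-kibana-kb-http:5601"
      # Possibly redundant; I believe setup falls back to the Elasticsearch
      # output credentials when these are not set.
      username: ${ES_USER}
      password: ${ES_PASSWORD}
      # CA from the ECK-generated secret, mounted into the pod
      # (see the DaemonSet notes further down).
      ssl.certificate_authorities:
        - /mnt/kibana-certs/ca.crt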
filebeat-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: LOGSTASH_URL
              value: "logstash:5044"
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: ES_HOSTS
              value: "https://data-es-es-http:9200"
            - name: ES_USER
              value: "elastic"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: data-es-es-elastic-user
                  key: elastic
            - name: KIBANA_HOST
              value: "data-kibana-kb-http:5601"
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
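And the corresponding mount I have in mind for the DaemonSet, again assuming the secret name above is right:

    # Added under the filebeat container's volumeMounts:
    - name: kibana-certs
      mountPath: /mnt/kibana-certs
      readOnly: true

    # Added under the pod's volumes:
    - name: kibana-certs
      secret:
        secretName: data-kibana-kb-http-certs-public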
Elasticsearch + Kibana
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: es-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.8.0
  nodeSets:
    - name: default
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
            annotations:
              volume.beta.kubernetes.io/storage-class: es-gp2
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: es-gp2
            resources:
              requests:
                storage: 25Gi
      podTemplate:
        spec:
          initContainers:
            - name: install-plugin
              command:
                - sh
                - -c
                - |
                  bin/elasticsearch-plugin install --batch repository-s3
            - name: add-aws-keys
              env:
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: aws-secret
                      key: access-key-id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: aws-secret
                      key: access-secret-key
              command:
                - sh
                - -c
                - |
                  echo $AWS_ACCESS_KEY_ID | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key
                  echo $AWS_SECRET_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.8.0
  count: 1
  elasticsearchRef:
    name: data-es
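For completeness: I've read that ECK can disable the self-signed certificate on the Kibana HTTP layer entirely, which would let Filebeat keep talking plain http to data-kibana-kb-http:5601. I'd prefer to keep TLS, but if the CA approach above is wrong, would something like this be an acceptable workaround?

    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: data-kibana
    spec:
      version: 7.8.0
      count: 1
      elasticsearchRef:
        name: data-es
      http:
        tls:
          selfSignedCertificate:
            disabled: true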