Metricbeat on Kubernetes

I am a novice user and have just started to explore Metricbeat. I want to monitor Kubernetes, and right now I am using the Elastic Cloud trial version. The issue is that I can't get the data from Kubernetes into Elastic Cloud; only my local system logs are being ingested. I followed the documentation on the Elastic site but keep getting the same error, and the Kubernetes data is not being pushed.

In the metricbeat.yml file I changed the Elastic Cloud settings, providing cloud.id and cloud.auth, and in the output section I changed the Elasticsearch output, putting my cloud Elasticsearch address in the hosts field along with the corresponding credentials.
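For reference, that part of my metricbeat.yml looks roughly like this; the cloud ID, host, and credentials below are placeholders, not my real values:

cloud.id: "<deployment-name>:<cloud-id-blob>"      # placeholder, copied from the Elastic Cloud console
cloud.auth: "elastic:<password>"                   # placeholder user:password pair

output.elasticsearch:
  hosts: ["https://<my-cloud-es-host>:9243"]       # placeholder cloud Elasticsearch endpoint
  username: "elastic"
  password: "<password>"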

In metricbeat-kubernetes.yaml, I changed the following:

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:https://......:9243}:${ELASTICSEARCH_PORT:9243}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}

Under the # Deploy a Metricbeat instance per node for node metrics retrieval section:

apiVersion: extensions/v1beta1

env:
- name: ELASTICSEARCH_HOST
  value: https://......:9243
- name: ELASTICSEARCH_PORT
  value: "9243"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: $$$$
- name: ELASTIC_CLOUD_ID
  value: $$$:#####
- name: ELASTIC_CLOUD_AUTH
  value: @@@:####

I made the same change under the # Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics section.

When I try to deploy on Kubernetes, I get the following error.

kubectl create -f metricbeat-kubernetes.yaml
W1220 12:12:38.296990 1982 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger

Can somebody please help? :frowning:

Hi @kanthimathi!

I have noticed something wrong in your settings:

- name: ELASTICSEARCH_HOST
  value: https://......:9243

The port should be removed from ELASTICSEARCH_HOST, as it's already defined in ELASTICSEARCH_PORT.
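For example, that part of the env section would look roughly like this (keeping your redacted hostname as a placeholder):

- name: ELASTICSEARCH_HOST
  value: https://......          # hostname only, no :9243 here
- name: ELASTICSEARCH_PORT
  value: "9243"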

Also, could you paste the full output of the kubectl create command? I think the swagger message is just a warning.

This is the message I get after running the kubectl create command:

kubectl create -f metricbeat-kubernetes.yaml
W1220 13:16:09.739901 2256 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": configmaps "metricbeat-config" already exists
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": configmaps "metricbeat-daemonset-modules" already exists
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": daemonsets.extensions "metricbeat" already exists
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": configmaps "metricbeat-deployment-modules" already exists
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": deployments.extensions "metricbeat" already exists
Error from server (AlreadyExists): error when creating "metricbeat-kubernetes.yaml": serviceaccounts "metricbeat" already exists

I did edit the port as you mentioned, but I still can't find the Kubernetes data in the cloud. However, I can view my local host logs.

The problem now is that you already deployed the wrong config (the new error says Error from server (AlreadyExists)). You can fix this by deleting and recreating Metricbeat:

kubectl delete -f metricbeat-kubernetes.yaml
kubectl create -f metricbeat-kubernetes.yaml

Best regards

Thanks for the timely help

kubectl create -f metricbeat-kubernetes.yaml
W1220 13:35:46.373417 2431 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger
configmap "metricbeat-config" created
configmap "metricbeat-daemonset-modules" created
daemonset "metricbeat" created
configmap "metricbeat-deployment-modules" created
deployment "metricbeat" created
serviceaccount "metricbeat" created

Still the same issue :frowning: no logs in the cloud.

Thanks,

You will probably want to see what's going on in the Metricbeat logs. You can check them by following these steps:

  • List failing pods with kubectl get pod --namespace=kube-system
  • Choose one of the metricbeat pods
  • Get logs from it: kubectl logs --namespace=kube-system <metricbeat-pod-name>

Best regards

When I tried to do so, I got this message:

Error from server (BadRequest): container "metricbeat" in pod "metricbeat-2s5tb" is waiting to start: trying and failing to pull image

Uhm, the error says that the docker pull is failing for the metricbeat image. Could you please paste the output of kubectl describe --namespace=kube-system po/metricbeat-2s5tb?


Got this

Ok, you got the docs from master, which include a non-released version. Could you please replace all the image references, changing 7.0.0-alpha1 to 6.1.1?
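Each image line in the manifest should then look something like this (docker.elastic.co is the standard Elastic registry; adjust the path if you pull the image from somewhere else):

        image: docker.elastic.co/beats/metricbeat:6.1.1    # released version instead of 7.0.0-alpha1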

Then you will need to recreate it with delete + create.


Finally made it :smiley: thanks a ton!

Thanks for reporting! I've created an issue in our repo to fix this behavior when you are using master docs: https://github.com/elastic/beats/issues/5930

