How to check whether logs are being sent to Elasticsearch

I deployed my EFK stack on version 7.4.0. After deploying Filebeat, the logs from Filebeat are not being sent to Elasticsearch.

My Filebeat configuration on k8s:

Logs from Filebeat:

Please help me with this issue. Thanks! :slight_smile:

Hi @Hoa_Nguy_n_Van ,

If you are just starting with the Elastic Stack/Filebeat, I suggest trying one of our currently supported versions, such as 7.17.x or 8.1.x.

That said, Filebeat should be getting its own logs too. How did you deploy Filebeat? Did you follow our documentation: Run Filebeat on Kubernetes | Filebeat Reference [8.1] | Elastic?

I have just tested using the manifest from our documentation with the same image as you (docker.elastic.co/beats/filebeat-oss:7.9.3) and it worked just fine.

So if you didn't quite follow the documentation, please try to use it as unchanged as possible (just as a test). There are a number of configurations and mount points that need to be in place for Filebeat to work well in a Kubernetes environment; see the sketch below.
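
For reference, this is roughly the shape of the host mounts the reference DaemonSet manifest relies on (a minimal sketch; the volume names are illustrative, the paths follow the documented filebeat-kubernetes.yaml pattern):

# Sketch of the DaemonSet volumes Filebeat needs to read container logs.
# /var/log/containers holds symlinks into /var/log/pods, so mounting
# /var/log covers both; /var/lib/docker/containers is also needed on Docker nodes.
containers:
  - name: filebeat
    volumeMounts:
      - name: varlog
        mountPath: /var/log
        readOnly: true
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
volumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers

Without mounts like these, the paths in filebeat.yml exist in the config but are empty inside the container.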

If none of this works, please post the whole manifest and filebeat.yml here (don't forget to redact any sensitive information like passwords, etc.).

I tried this but it didn't work.

This is my whole manifest.

Hello,

Your filebeat-kubernetes.yaml file shows:

- name: filebeat
  image: docker.elastic.co/beats/filebeat:8.1.3
  args: [
    "-c", "/etc/filebeat.yml",
    "-e",
  ]

Could you please share the /etc/filebeat.yml details?

Best regards,

Of course,

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"

# To enable hints based autodiscover, remove filebeat.inputs configuration and uncomment this:
#filebeat.autodiscover:
#  providers:
#    - type: kubernetes
#      node: ${NODE_NAME}
#      hints.enabled: true
#      hints.default_config:
#        type: container
#        paths:
#          - /var/log/containers/*${data.kubernetes.container.id}.log

processors:
  - add_cloud_metadata:
  - add_host_metadata:

cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}

output.elasticsearch:
hosts: ["elasticsearch-client.efk-xpack.svc.cluster.local:9200"]
username: "${ELASTICSEARCH_USERNAME}"
password: "${ELASTICSEARCH_PASSWORD}"

It seems OK; the Filebeat Elasticsearch output should work. Your Filebeat logs say the harvesters start without issue, but they cannot find any files to open under the path.

Can the filebeat user in Docker read the files under this path? Does this path actually contain your *.log files?
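
One way to see exactly what the harvesters are scanning is to temporarily raise Filebeat's own log verbosity in filebeat.yml. A debugging sketch (logging.level and logging.selectors are standard Filebeat settings; the selector names here are the ones I would try, remove all of this after troubleshooting):

# Temporary debug settings for filebeat.yml.
# With level set to debug, selectors limit output to the named subsystems,
# so you can see which files the harvesters find (or fail to find).
logging.level: debug
logging.selectors: ["harvester", "input"]

With this in place, the Filebeat log should show each scan of the configured paths and each file it opens.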

Also, I am sure it's just a copy-paste problem, but is this really what your filebeat.yml looks like:

output.Elasticsearch:
hosts: ["Elasticsearch-client.efk-xpack.svc.cluster.local:9200"]
username: "${ELASTICSEARCH_USERNAME}"
password: "${ELASTICSEARCH_PASSWORD}"

There should be indentation; this is a YAML file. It should look like this:

output.elasticsearch:
  hosts: ["https://myEShost:9200"]
  username: "YOUR_USERNAME"
  password: "YOUR_PASSWORD"

These are the permissions on all the log files.

Indentation is really important inside your filebeat.yml file. Can you please send it again using the Preformatted text feature?
Best regards,


These are the Filebeat logs:


This is the filebeat.yaml file.

I don't know why the Filebeat index is not created in Kibana Index Management.

Your log files under paths: seem to be symlinks to files under another path. What are the permission settings on those folders, starting with /var/log/pods...?
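
In other words, mounting /var/log/containers alone is not enough, because those entries are symlinks whose targets live under /var/log/pods (and, on Docker nodes, /var/lib/docker/containers). One way to rule this out is to mount the parent directory so both the links and their targets are visible inside the container (a sketch; the volume name is illustrative):

volumeMounts:
  - name: varlog
    mountPath: /var/log
    readOnly: true
volumes:
  - name: varlog
    hostPath:
      path: /var/log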
