I get this error when I deploy Filebeat. It was tested before with version 6.6.1 and was working fine, but now the same YAML gives this error:
ERR Failed to connect: Connection marked as failed because the onConnect callback failed: Error loading Elasticsearch template: could not load template: couldn't load template: couldn't load json. Error: 400 Bad Request
Hi @ishu52 and welcome
What version of Filebeat are you using? Filebeat versions earlier than 6.7.0 are incompatible with Elasticsearch 7.0.
You can check the supported versions matrix at https://www.elastic.co/es/support/matrix#matrix_compatibility
The version was 6.0.1. Now I pointed to 7.0.0 and I am getting an error: the pod is going into the 'CrashLoopBackOff' state.
As the error message says, filebeat.config.prospectors has been removed. Prospectors were renamed to inputs, and both options were accepted for a time. Now you have to use filebeat.config.inputs (or filebeat.inputs for statically defined inputs).
Take a look at the release notes; this and other breaking changes are listed there.
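For reference, the rename could look something like this in filebeat.yml (a minimal sketch; the paths are illustrative, not taken from your configuration):

```yaml
# Old (removed in 7.0):
# filebeat.config.prospectors:
#   path: ${path.config}/prospectors.d/*.yml

# New equivalent:
filebeat.config.inputs:
  path: ${path.config}/inputs.d/*.yml

# Or, for inputs defined directly in filebeat.yml:
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log  # example path only
```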
Thank you @jsoriano.
It is working, but compared to the previous version 6.0.1, with 7.0.0 I cannot see any data in fields like namespace, container name, request id, etc.
It is just showing index type, index id, and message.
I have removed filebeat.config.prospectors and added filebeat.inputs.
Now I want Kubernetes logs to be forwarded. In the prospectors config this was done by adding 'add_kubernetes_metadata'; with 7.0.0 I am adding it as a processor in filebeat.yml.
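For context, the processor form could be sketched like this (assumptions: NODE_NAME is an environment variable injected via the pod spec, and the logs_path matcher value matches where your pod logs actually live):

```yaml
processors:
  - add_kubernetes_metadata:
      # NODE_NAME is assumed to be set via the Downward API in the DaemonSet spec
      host: ${NODE_NAME}
      matchers:
        # Match events to pods based on the log file path
        - logs_path:
            logs_path: "/var/data/kubeletlogs/"  # adjust to your cluster's log path
```

The logs_path matcher must point at the directory Filebeat is actually harvesting from, otherwise the processor cannot correlate events with pods and no metadata is added.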
Output: not able to read logs from Kubernetes pods
Expectation: read logs from Kubernetes pods
Are your pod logs in /var/data/kubeletlogs? Are these files accessible from the filebeat pod?
You can also use autodiscover to collect logs from all your pods; autodiscover also adds the metadata to the messages of each pod automatically. But this may require further configuration if your logs are not in the default location.
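An autodiscover setup could look roughly like this (a sketch only; the kubeletlogs path pattern is an assumption based on the directory mentioned in this thread and may need adjusting to your cluster's actual layout):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      # Enable hints-based autodiscover (pods can opt in/out via annotations)
      hints.enabled: true
      # Default input applied to pods without hint annotations;
      # needed here because containerd logs are not under /var/lib/docker/containers
      hints.default_config:
        type: log
        paths:
          # Illustrative pattern only; adjust to the real kubeletlogs layout
          - /var/data/kubeletlogs/*/${data.kubernetes.container.name}/*.log
```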
Yes, logs are available at /var/data/kubeletlogs and are getting harvested as well.
Since the cluster uses containerd, the logs are not going to /var/lib/docker/containers.
@ishu52 you mentioned that with 6.0.1 you were able to collect Kubernetes metadata (namespace, container...). What configuration were you using for that?
I was using the prospectors config.
As you suggested, I tried the autodiscover feature and it is working fine now.
Good to read that this is working for you now. Could you please share the autodiscover configuration that works for you, in case someone else finds this issue in the future?
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.