404 Not Found only for Filebeat


We are trying to send our filebeat logs to an elasticsearch cluster but we get this error:

2020-12-16T15:32:37.816+0100    ERROR   [esclientleg]   eslegclient/connection.go:261   error connecting to Elasticsearch at https://somecluster.internal.some.domain:443/elk-netflow/elasticsearch/: 404 Not Found: 
2020-12-16T15:32:37.816+0100    ERROR   fileset/factory.go:134  Error loading pipeline: Error creating Elasticsearch client: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at https://somecluster.internal.some.domain:443/elk-netflow/elasticsearch/: 404 Not Found: ]

We can "curl" from the filebeat server to that URL and we do get the expected "you know, for search" response. Any idea what could be the problem?


    # ---------------------------- Elasticsearch Output ----------------------------
    output.elasticsearch:
      hosts: ["https://somecluster.internal.some.domain:443"]
      protocol: "https"
      path: "/elk-netflow/elasticsearch/"
      username: "someuser"
      password: "somepassword"
      ssl.verification_mode: "none" # We tried all the options here
      ssl.certificate_authorities: ["/etc/pki/root_der.pem", "/etc/pki/ssl_der.pem"]

Filebeat runs in a virtual machine and is installed as a systemd service. On the other hand, our Elasticsearch/Kibana combo is a cluster deployed using the ECK operator, hence the "path" in use is not the default one for Elasticsearch. Nonetheless, this works fine:
curl --cacert /etc/pki/root_der.pem -u someuser:somepassword https://somecluster.internal.some.domain:443/elk-netflow/elasticsearch/
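One thing worth checking here (a hypothetical sketch of generic URL joining, not Filebeat's actual code) is how a configured base path combines with the API endpoints a client appends to it. With standard URL resolution, a missing trailing slash on the base path silently drops the last path segment, which would turn a working base URL into requests the gateway answers with 404:

```python
# Sketch: how a base URL with a path prefix combines with an API endpoint.
# "_ingest/pipeline/test" is an illustrative endpoint, not taken from the logs.
from urllib.parse import urljoin

base = "https://somecluster.internal.some.domain:443/elk-netflow/elasticsearch/"

# With the trailing slash, the endpoint is appended under the prefix:
print(urljoin(base, "_ingest/pipeline/test"))
# -> https://somecluster.internal.some.domain:443/elk-netflow/elasticsearch/_ingest/pipeline/test

# Without the trailing slash, standard resolution replaces the last segment,
# so the request never reaches the "/elk-netflow/elasticsearch/" route:
print(urljoin(base.rstrip("/"), "_ingest/pipeline/test"))
# -> https://somecluster.internal.some.domain:443/elk-netflow/_ingest/pipeline/test
```

This is only a generic illustration of why path-prefix routing behind a gateway is sensitive to exact URL construction by the client.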

Change the port from 443 to 9200 :slight_smile:

Hi warkolm!

We need it to be 443 :confused: After reading this doc we understood that 443 could be used for this purpose, and since the curl command to that URL (port included) works as expected when launched from the filebeat machine, we expected filebeat to work as well.

That URL actually points at our gateway, and all I can do is publish through it via HTTPS on port 443 with a specific path. Based on that path, the traffic is then directed to the proper service, in this case the Elasticsearch service, which is configured to listen on 9200. Again, this works well for everything else (including Kibana web access and connecting to Elasticsearch via curl or a web browser), but it gives this error when using the filebeat output to Elasticsearch.

Any help is welcome... I'm quite lost with this one :confused:

Hi all!

Just bumping this question. Could anyone please advise or provide any hint about how to troubleshoot this? I'm quite lost at this point and any help is more than welcome.

Thank you!

Did you try adding proxy_url?

  hosts: ['ip:9200']
  proxy_url: 'https://somecluster.internal.some.domain:443'
  path: "/elk-netflow/elasticsearch/"
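For context, a hypothetical complete output section combining this suggestion with the original settings might look like the sketch below. Note that `'ip:9200'` stands in for the in-cluster Elasticsearch address, which would have to be reachable through the proxy for this to work:

```yaml
# Sketch only -- 'ip:9200' is a placeholder for the in-cluster address.
output.elasticsearch:
  hosts: ['ip:9200']
  proxy_url: 'https://somecluster.internal.some.domain:443'
  path: "/elk-netflow/elasticsearch/"
  username: "someuser"
  password: "somepassword"
  ssl.certificate_authorities: ["/etc/pki/root_der.pem", "/etc/pki/ssl_der.pem"]
```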


It looks like proxy_url is for specifying a forward proxy, right? As per your comment (thank you!) I did try it just in case, but it still fails. Also, in the case of Kubernetes, that IP would be internal to the k8s cluster: either the IP of the service or, when using a URL, the names of the pods, making them unresolvable from outside the cluster.

I'm starting to doubt that it is actually possible for a filebeat outside a k8s cluster to talk directly to an Elasticsearch inside a k8s cluster at all when an ingress (in my case Ambassador) is being used.

We had the same problem with fluent-bit. You can try modifying your Ambassador hostname by adding the port, for example:

  apiVersion: getambassador.io/v2
  kind: Mapping
  name: elasticsearch.example.com
  prefix: /
  host: elasticsearch.example.com:80   # or whichever port you expose
  service: elasticsearch.svc.name:9200

I am also stuck at the same place, with the same scenario where Elasticsearch and filebeat are in different clusters.
Were you able to solve this?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.