Attempting to connect Fluentd with Elasticsearch 8.1.2 gives an error while developing an ingestion pipeline

So I saw the example Elasticsearch 8.1.2 cluster set up with Kibana on the website. I liked it, but I wanted an ingestion pipeline, so I added a Fluentd container to the mix to manage ingestion a bit more. The base setup I followed is at: Install Elasticsearch with Docker | Elasticsearch Guide [8.1] | Elastic

I took this and shrank it slightly by running a single-node cluster instead of the sample three-node one, which was straightforward: 1 Elasticsearch, 1 Kibana, 1 Fluentd. That still works fine. You can see an example at GitHub - fallenreaper/secure-EFK: Implementing a sample secure EFK Docker Instance leveraging exposed public data and tinkering., but the F part does not connect. Elasticsearch stands up properly, as does Kibana. Fluentd stands up but keeps returning: `Could not communicate to Elasticsearch, resetting connection and trying again. EOFError (EOFError)`

I also have a Stack Overflow question up for this at: docker - Trying to add FluentD to my workflow but it fails to connect - Stack Overflow.

So I am trying to figure out the cause. It is quite likely that the Fluentd conf is set up wrong, but given that EFK is a well-established stack, if someone has an example of 8.1.2 working with Fluentd, I would appreciate some pointers.

I feel that I am close, but I can't figure out the problem at hand. I know the plugin itself is not maintained by you all, but in a similar vein, creating plugins that emphasize integration with Elastic is in everyone's best interest.

Is someone able to give me some insights?

I wanted to add a note:
I have been using an older EFK cluster I put together, which uses the same Fluentd version I am using now, 1.12-debian-1; the difference is the Elasticsearch version. I do have security in place, though, but I think I configured it correctly so that it verifies properly.

  • Disclaimer: not a Fluentd user ... but hopefully the following helps:

Maybe this is an issue with the HTTP/HTTPS configuration? I see in the Fluentd config that you have ssl_verify and ca_file, but browsing the Fluentd docs, the default scheme is http, and it should be set to https to match your Elasticsearch configuration. I recall a couple of Logstash plugins having this same sort of requirement, which is easy to overlook, and it made me think it could be the same issue here. :crossed_fingers:
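If so, a minimal sketch of the relevant fluent-plugin-elasticsearch match block would look like this (the host, paths, and tag pattern here are assumptions, since I don't have your full config in front of me):

```
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  scheme https                                # defaults to http; must be https for a TLS-enabled cluster
  ssl_verify true
  ca_file /usr/share/fluentd/certs/ca/ca.crt  # assumed path from your compose setup
</match>
```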

Along the same lines (and also not a Fluentd expert), what I would do is shell into the Fluentd container and try to curl Elasticsearch with the same endpoint and credentials you're configuring Fluentd with, to validate them.

As the error says it cannot connect, so it could be a combination of connectivity/config and/or the security that you put in place.

@stephenb Yeah, I had a similar thought. When I tried it, it said I was getting a security exception.

```shell
apt-get update
apt-get install -y curl
curl --cacert /usr/share/fluentd/certs/ca/ca.crt https://elasticsearch:9200
```

```
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","Bearer realm=\"security\"","ApiKey"]}},"status":401}
```

So there does seem to be at least a connectivity response.

So it made me think that I DID in fact need the user/password for Basic auth to log in. After adding that to Fluentd's conf file, it did seem to work. I had been thinking the certs would be enough for the handshake.
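For anyone who finds this later, here is a sketch of the match block that ended up ingesting for me (the credentials and paths below are placeholders, not the real values):

```
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  scheme https
  ssl_verify true
  ca_file /usr/share/fluentd/certs/ca/ca.crt
  user elastic        # Basic auth was still required on top of the CA cert
  password changeme   # placeholder; substitute your cluster's password
</match>
```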

My reasoning, for some reason, was that with the certs I didn't need to add user/password to the conf. Since I saw the sample using bin/elasticsearch-certutil, I assumed I could pass in a cert generated specifically for Fluentd and it would follow that flow. But I guess I was mistaken.


Think of certs as security at the connection (transport) layer, and your credentials as security at the authentication layer.
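To illustrate: a client certificate alone can act as a credential, but only if the cluster is explicitly set up for it via a PKI realm. Something along these lines in elasticsearch.yml would be needed (the realm name pki1 is an assumption; without this, a cert only secures the connection, not authentication):

```yaml
# elasticsearch.yml sketch: allow client certificates to act as credentials
xpack.security.authc.realms.pki.pki1:
  order: 1
# let HTTP clients optionally present a certificate
xpack.security.http.ssl.client_authentication: optional
```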

After creating your certs, you need to make sure they are all properly referenced from within Fluentd (or whatever ingestion pipeline you use) so that it can present the correct credentials for verification. That seemed to be the case here, and after adding the correct information to the fluent.conf file, it does in fact ingest.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.