I've noticed that the Elasticsearch filter plugin won't work if you try to connect to an Elasticsearch Cloud instance.
It seems it tries to prepend https:// twice (one already exists after cloud host resolution). Seems like a bug.
The elasticsearch output's hosts option is of type uri. The code that fills in the hosts option based on the cloud id adds the scheme, which is good.
The elasticsearch filter's hosts option is of type array, and I believe it expects those array entries to be "host:port" strings. The code that fills in the hosts option based on the cloud id is almost, but not quite, the same as the code in the mixin, which the filter does not use. It still adds the scheme, but I do not think it should at present. Of course, since the code is specific to the filter and not shared with the output, it is trivial to fix by not adding the scheme.
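As a rough illustration of the double-scheme problem (the helper below is a hypothetical sketch, not the plugin's actual code; a cloud id is "label:base64", where the payload decodes to "domain$es-cluster-id$kibana-cluster-id"):

```ruby
require "base64"

# Hypothetical sketch: resolve a cloud id to an Elasticsearch endpoint.
def decode_cloud_id(cloud_id)
  _label, payload = cloud_id.split(":", 2)
  domain, es_id, _kibana_id = Base64.decode64(payload).split("$")
  # The scheme is added here, during resolution.
  "https://#{es_id}.#{domain}:443"
end

payload  = Base64.strict_encode64("us-east-1.aws.example.com$abc123$def456")
host     = decode_cloud_id("mycluster:" + payload)
# host already carries a scheme, so prefixing it again produces an
# invalid URL of the form "https://https://abc123.us-east-1...".
doubled  = "https://" + host
```

If the filter treats `host` as a bare "host:port" string and prepends the scheme itself, the resulting `doubled` value can never be parsed as a valid URL.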
That said, there is an unmerged (and currently conflicted) PR from 2019 which would change the hosts option on the filter to accept:

- a valid RFC 3986 URI with scheme, hostname, and optional port
- an IPv4 address, optionally followed by a colon and port number
- a hostname, optionally followed by a colon and port number
- a square-bracketed IPv6 address, optionally followed by a colon and port number
- a bare IPv6 address
so maybe the intent was to accept URIs. Who knows?
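For what it's worth, the formats that PR describes could be classified along these lines (a sketch only; the function name and regexes are my own, not the PR's code):

```ruby
require "uri"

# Hypothetical classifier for the host formats the 2019 PR describes.
def classify_host(s)
  # A full URI: scheme://..., with a parseable host component.
  return :uri            if s =~ %r{\A[a-z][a-z0-9+.-]*://}i && URI.parse(s).host
  # [::1] or [::1]:9200
  return :bracketed_ipv6 if s =~ /\A\[[0-9a-f:]+\](:\d+)?\z/i
  # A bare IPv6 address such as ::1 (at least two colons, hex groups only).
  return :bare_ipv6      if s =~ /\A[0-9a-f]*:[0-9a-f:]+\z/i && s.count(":") >= 2
  # 10.0.0.1 or 10.0.0.1:9200
  return :ipv4           if s =~ /\A\d{1,3}(\.\d{1,3}){3}(:\d+)?\z/
  # example.com or example.com:9200
  return :hostname       if s =~ /\A[a-z0-9.-]+(:\d+)?\z/i
  :invalid
end
```

Note the ambiguity this highlights: if URIs with schemes are accepted alongside bare "host:port" strings, the cloud-id resolution code has to know which form it produced before deciding whether to add a scheme.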