Elasticsearch Cluster not reachable by Logstash

Hi,

I am using the cluster name in the Logstash elasticsearch output plugin, but it is failing to send data to the cluster. However, I can reach the cluster when I specify the hostname instead.

I would like to know if I have to change something for the cluster name to work.

Thanks.

It helps if you paste your config.

Hi @warkolm ,
My config:

output {
        elasticsearch {
                #host => "abc.xyz.com"
                cluster => "elasticsearch.prod"
                protocol => "http"
                index => "logstash-%{+YYYY.MM.dd}"
        }
}

The config works only when I uncomment the host line.

That's expected; how else would it know where to connect?

I thought it would resolve the hostnames based on the cluster name and connect to any one of them.
Isn't that how it works?

Only with the multicast-capable protocols, node or transport.

Okay. Does the Logstash and Elasticsearch combination work any differently if I change to one of these protocols?
I see that http is generally recommended.

No, it'll all work.

I tried reading about the protocols, but I'm not sure I understand when I should use which one.

https://www.elastic.co/guide/en/elasticsearch/guide/current/_transport_client_versus_node_client.html may help.

But I find it's easier to just stick with HTTP :slight_smile:

@warkolm
But with HTTP, I am unable to use only the cluster name.

Also, I observed one thing: when I have both the cluster name and the hostname configured, Logstash stays alive even if the Elasticsearch host goes down. Can you explain that?

If you use HTTP you only need to specify the host; the cluster name is irrelevant.

If you don't specify any protocol then it uses node by default, and all you need there is the cluster name.
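If that's the path you take, a minimal sketch of a node-protocol output might look like the following (assuming multicast discovery can actually see the cluster from the Logstash host; the cluster name matches the one from your config above):

```
output {
        elasticsearch {
                # no host needed: the node protocol joins the cluster
                # via multicast discovery using the cluster name
                cluster => "elasticsearch.prod"
                protocol => "node"
                index => "logstash-%{+YYYY.MM.dd}"
        }
}
```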

As I understand from the link you shared above, the node protocol only works when the application is running on one of the Elasticsearch nodes.

My concern is a remote server sending data to ES via Logstash. I don't want to hardcode the hostname, as we might need to bounce that host now and then, and Logstash should use the other servers in the cluster during that time. Can I do this?

Putting an ES client node on the LS host and then pointing LS to localhost is a good idea.

Yeah, I thought about this. I don't want anything other than a shipping agent/Logstash running on that remote host; I don't want to maintain ES on the remote machine.

Specify an array of nodes in the host setting.
Or use a DNS cname that points to multiple end points.
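For example, an output with multiple hosts might look like this (the hostnames here are hypothetical; note that the host setting only accepts an array in Logstash 1.5, as mentioned below):

```
output {
        elasticsearch {
                # hypothetical hostnames: Logstash will fail over
                # between the listed endpoints
                host => ["es1.prod.xyz.com", "es2.prod.xyz.com"]
                protocol => "http"
                index => "logstash-%{+YYYY.MM.dd}"
        }
}
```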

No matter what you choose there is some overhead; you cannot get around that :slight_smile:

I guess I can use that and live with the overhead :wink:

I checked: the host setting is a string in Logstash 1.4 and an array in 1.5.