Elasticsearch Cluster Doubts

Hi

I have set up a 3-node Elasticsearch cluster. These are the details of my cluster:

{
"cluster_name" : "pokemon",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 1,
"active_shards" : 2,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
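For reference, that's the output of the cluster health API, fetched with something like:

curl 'localhost:9200/_cluster/health?pretty'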

So I was wondering: which node should I configure my fluentd to send data to? For a sample test, I tried sending data from Logstash to one of the nodes on port 9200, but it doesn't seem to work.

Also, is the above configuration a proper one? I just need proper replication so that if the master goes down, another machine can take its place.

Any help would be appreciated.

So I was wondering: which node should I configure my fluentd to send data to?

Either one; it doesn't matter. If possible, configure the client (fluentd in this case) to know about all cluster nodes so that it can try to connect to any of them. Otherwise, if you hardcode a single hostname, you have a problem when that node goes down. Alternatively, use HAProxy or similar as a frontend.
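For example, with the fluent-plugin-elasticsearch output plugin you can list several hosts and it will fail over between them. A rough sketch, assuming that's the plugin you're using (the hostnames are placeholders):

<match **>
  type elasticsearch
  # List all three cluster nodes so the plugin can fail over
  # to another one if the first is unreachable.
  hosts es-host1.example.com:9200,es-host2.example.com:9200,es-host3.example.com:9200
  logstash_format true
</match>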

For a sample test, I tried sending data from Logstash to one of the nodes on port 9200, but it doesn't seem to work.

Well, it should work, but without further details it's impossible to tell. What's in the Logstash logs?

Also, is the above configuration a proper one? I just need proper replication so that if the master goes down, another machine can take its place.

You have one replica of each shard, so you can handle a single node being down.
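If you ever need a different replica count, it can be changed on a live cluster with something like:

curl -XPUT 'localhost:9200/_settings' -d '
{
  "index": {
    "number_of_replicas": 1
  }
}'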

Thanks a lot, magnumsbaeck.

I get the following exception in my logstash.log:

timestamp=>"2015-08-18T11:47:45.904000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>37, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)

This is my sample configuration across my 3 nodes:

node.name: "pikachu"
cluster.name: pokemon
discovery.zen.ping.unicast.hosts: ["pichu"]
discovery.zen.ping.multicast.enabled: false

Do I have to select a master?

What's your Logstash output configuration? Are you setting the cluster configuration option to "pokemon"?

The cluster.name is pokemon for all 3 nodes.

I tried running Logstash on the other 2 cluster nodes and I don't seem to get any errors in logstash.log; however, when I open Kibana, I can't find any data either.

This is my Logstash configuration:

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  stdout { }
  elasticsearch {
  }
}

The cluster.name is pokemon for all 3 nodes.

Yes, but you're not configuring Logstash to connect to that cluster. Use the cluster option, and since you've disabled multicast you have to set the host option to point to at least one of the cluster nodes.

output {
  elasticsearch {
    cluster => "pokemon"
    host => ["es-host1.example.com"]
  }
}

Awesome, it's working now. I've also pointed my fluentd, which is running on the client side, to one of the IPs in the cluster, and Elasticsearch seems to be receiving logs.

I've also set discovery.zen.minimum_master_nodes to 2.
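That's the majority quorum for three master-eligible nodes (3 / 2 + 1, rounded down), so a split brain shouldn't be possible. In each node's elasticsearch.yml it looks like:

# Require a majority of master-eligible nodes before electing a master.
discovery.zen.minimum_master_nodes: 2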

Is it recommended to use HAProxy, or can I maybe tell fluentd somehow to send logs to one server in the cluster and, if it goes down (not reachable), send them to another server's IP?