Cluster discovery on Amazon EC2 problem - need urgent help

Hi guys,

I urgently need help setting up an Elasticsearch cluster on Amazon EC2
instances, as I have to launch an application within a week. I've been trying
for the last three days without success. I followed many instructions,
recreated the instances from scratch, and still nothing. I can telnet between
the instances on 9300. I added a security group ES2 with port range 0-65535
open, and also rules for the individual instances' private IP addresses with
range 9200-9400. The nodes can't discover each other, and it seems that each
node forms a cluster on its own, even though the cluster node info indicates
that the correct elasticsearch.yml is used. For example, the cluster name is
the one I set in elasticsearch.yml, but the node name is a generic one.
I hope somebody will have an idea whether I missed something here.

Here are other details:

My IAM policy is:
###########################

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1412960658000",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Cluster configurations are as follows:
###################################################
######################Master node configuration

cluster.name: elasticsearch
node.name: "Slave_node"
node.master: false

discovery.ec2.availability_zones: us-east-1
discovery.ec2.ping_timeout: 30s
cloud.aws.protocol:http
plugin.mandatory:cloud-aws
discovery.zen.ping.multicast.enabled:false
discovery.ec2.groups:ES2
#discovery.ec2.tag.type:ElasticsearchCluster
network.publish_host:255.255.255.255
discovery.type:ec2
cloud.aws.access_key:
cloud.aws.secret_key:
discovery.zen.ping.unicast.hosts:["10.185.210.54[9300-9400]","10.101.176.236[9300-9400]"]
cloud.node.auto_attributes:true

###############################Slave node configuration

cluster.name: elasticsearch
node.name: "Slave_node"
node.master: false

discovery.ec2.availability_zones: us-east-1
discovery.ec2.ping_timeout: 30s
cloud.aws.protocol:http
plugin.mandatory:cloud-aws
discovery.zen.ping.multicast.enabled:false
discovery.ec2.groups:ES2
#discovery.ec2.tag.type:ElasticsearchCluster
network.publish_host:255.255.255.255
discovery.type:ec2
cloud.aws.access_key:
cloud.aws.secret_key:
discovery.zen.ping.unicast.hosts:["10.185.210.54[9300-9400]","10.101.176.236[9300-9400]"]
cloud.node.auto_attributes:true

#############################################################
##############TRACE LOG FROM SLAVE NODE
[2014-10-10 17:21:30,554][INFO ][node ] [Gabriel the Air-Walker] started
[2014-10-10 17:21:30,554][DEBUG][cluster.service ] [Gabriel the Air-Walker] processing [updating local node id]: done applying updated cluster_state (version: 3)
[2014-10-10 17:21:40,504][DEBUG][cluster.service ] [Gabriel the Air-Walker] processing [routing-table-updater]: execute
[2014-10-10 17:21:40,505][DEBUG][cluster.service ] [Gabriel the Air-Walker] processing [routing-table-updater]: no change in cluster_state
[2014-10-10 17:21:44,122][DEBUG][plugins ] [Gabriel the Air-Walker] [/usr/share/elasticsearch/plugins/cloud-aws/_site] directory does not exist.
[2014-10-10 17:21:44,123][DEBUG][plugins ] [Gabriel the Air-Walker] [/usr/share/elasticsearch/plugins/mapper-attachments/_site] directory does not exist.
[2014-10-10 17:22:04,288][INFO ][node ] [Gabriel the Air-Walker] stopping ...
[2014-10-10 17:22:04,314][INFO ][node ] [Gabriel the Air-Walker] stopped
[2014-10-10 17:22:04,314][INFO ][node ] [Gabriel the Air-Walker] closing ...
[2014-10-10 17:22:04,320][INFO ][node ] [Gabriel the Air-Walker] closed
[2014-10-10 17:22:06,170][INFO ][node ] [Giant-Man] version[1.1.0], pid[3523], build[2181e11/2014-03-25T15:59:51Z]
[2014-10-10 17:22:06,171][INFO ][node ] [Giant-Man] initializing ...
[2014-10-10 17:22:06,171][DEBUG][node ] [Giant-Man] using home [/usr/share/elasticsearch], config [/etc/elasticsearch], data [[/var/lib/elasticsearch]], logs [/var/log/elasticsearch], work [/tmp/elasticsearch], plugins [/usr/share/elasticsearch/plugins]
[2014-10-10 17:22:06,212][DEBUG][plugins ] [Giant-Man] [/usr/share/elasticsearch/plugins/cloud-aws/_site] directory does not exist.
[2014-10-10 17:22:06,231][DEBUG][plugins ] [Giant-Man] [/usr/share/elasticsearch/plugins/mapper-attachments/_site] directory does not exist.
[2014-10-10 17:22:06,251][DEBUG][plugins ] [Giant-Man] [/usr/share/elasticsearch/plugins/cloud-aws/_site] directory does not exist.
[2014-10-10 17:22:06,251][DEBUG][plugins ] [Giant-Man] [/usr/share/elasticsearch/plugins/mapper-attachments/_site] directory does not exist.
[2014-10-10 17:22:06,251][INFO ][plugins ] [Giant-Man] loaded [mapper-attachments, cloud-aws], sites [head, bigdesk]
[2014-10-10 17:22:06,268][DEBUG][common.compress.lzf ] using [UnsafeChunkDecoder] decoder
[2014-10-10 17:22:06,280][DEBUG][env ] [Giant-Man] using node location [[/var/lib/elasticsearch/elasticsearch/nodes/0]], local_node_id [0]
[2014-10-10 17:22:07,992][DEBUG][threadpool ] [Giant-Man] creating thread_pool [generic], type [cached], keep_alive [30s]
[2014-10-10 17:22:08,001][DEBUG][threadpool ] [Giant-Man] creating thread_pool [index], type [fixed], size [2], queue_size [200]
[2014-10-10 17:22:08,005][DEBUG][threadpool ] [Giant-Man] creating thread_pool [bulk], type [fixed], size [2], queue_size [50]
[2014-10-10 17:22:08,005][DEBUG][threadpool ] [Giant-Man] creating thread_pool [get], type [fixed], size [2], queue_size [1k]
[2014-10-10 17:22:08,005][DEBUG][threadpool ] [Giant-Man] creating thread_pool [search], type [fixed], size [6], queue_size [1k]
[2014-10-10 17:22:08,005][DEBUG][threadpool ] [Giant-Man] creating thread_pool [suggest], type [fixed], size [2], queue_size [1k]
[2014-10-10 17:22:08,006][DEBUG][threadpool ] [Giant-Man] creating thread_pool [percolate], type [fixed], size [2], queue_size [1k]
[2014-10-10 17:22:08,006][DEBUG][threadpool ] [Giant-Man] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [merge], type [scaling], min [1], size [1], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]
[2014-10-10 17:22:08,007][DEBUG][threadpool ] [Giant-Man] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
[2014-10-10 17:22:08,044][DEBUG][transport.netty ] [Giant-Man] using worker_count[4], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
[2014-10-10 17:22:08,054][DEBUG][discovery.zen.ping.multicast] [Giant-Man] using group [224.2.2.4], with port [54328], ttl [3], and address [null]
[2014-10-10 17:22:08,059][DEBUG][discovery.zen.ping.unicast] [Giant-Man] using initial hosts [], with concurrent_connects [10]
[2014-10-10 17:22:08,060][DEBUG][discovery.zen ] [Giant-Man] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
[2014-10-10 17:22:08,061][DEBUG][discovery.zen.elect ] [Giant-Man] using minimum_master_nodes [-1]
[2014-10-10 17:22:08,062][DEBUG][discovery.zen.fd ] [Giant-Man] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2014-10-10 17:22:08,076][DEBUG][discovery.zen.fd ] [Giant-Man] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2014-10-10 17:22:08,109][DEBUG][monitor.jvm ] [Giant-Man] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]
[2014-10-10 17:22:08,619][DEBUG][monitor.os ] [Giant-Man] Using probe [org.elasticsearch.monitor.os.SigarOsProbe@4f7f3c0a] with refresh_interval [1s]
[2014-10-10 17:22:08,626][DEBUG][monitor.process ] [Giant-Man] Using probe [org.elasticsearch.monitor.process.SigarProcessProbe@4e4d2a69] with refresh_interval [1s]
[2014-10-10 17:22:08,632][DEBUG][monitor.jvm ] [Giant-Man] Using refresh_interval [1s]
[2014-10-10 17:22:08,632][DEBUG][monitor.network ] [Giant-Man] Using probe [org.elasticsearch.monitor.network.SigarNetworkProbe@260e8c6f] with refresh_interval [5s]
[2014-10-10 17:22:08,638][DEBUG][monitor.network ] [Giant-Man] net_info
host [ip-10-185-210-54]
eth0 display_name [eth0]
address [/fe80:0:0:0:2000:bff:fe35:17ee%2] [/10.185.210.54]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [65536] multicast [false] ptp [false] loopback [true] up [true] virtual [false]

[2014-10-10 17:22:08,642][DEBUG][monitor.fs ] [Giant-Man] Using probe [org.elasticsearch.monitor.fs.SigarFsProbe@1184fef1] with refresh_interval [1s]
[2014-10-10 17:22:08,976][DEBUG][indices.store ] [Giant-Man] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
[2014-10-10 17:22:08,996][DEBUG][script ] [Giant-Man] using script cache with max_size [500], expire [null]
[2014-10-10 17:22:09,002][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2014-10-10 17:22:09,003][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2014-10-10 17:22:09,003][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster_concurrent_rebalance] with [2]
[2014-10-10 17:22:09,008][DEBUG][gateway.local ] [Giant-Man] using initial_shards [quorum], list_timeout [30s]
[2014-10-10 17:22:09,030][DEBUG][indices.recovery ] [Giant-Man] using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
[2014-10-10 17:22:09,197][DEBUG][http.netty ] [Giant-Man] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[512kb->512kb]
[2014-10-10 17:22:09,206][DEBUG][indices.memory ] [Giant-Man] using index_buffer_size [100.7mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2014-10-10 17:22:09,208][DEBUG][indices.cache.filter ] [Giant-Man] using [node] weighted filter cache with size [20%], actual_size [201.4mb], expire [null], clean_interval [1m]
[2014-10-10 17:22:09,209][DEBUG][indices.fielddata.cache ] [Giant-Man] using size [-1] [-1b], expire [null]
[2014-10-10 17:22:09,232][DEBUG][gateway.local.state.meta ] [Giant-Man] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
[2014-10-10 17:22:09,245][DEBUG][gateway.local.state.meta ] [Giant-Man] took 13ms to load state
[2014-10-10 17:22:09,246][DEBUG][gateway.local.state.shards] [Giant-Man] took 0s to load started shards state
[2014-10-10 17:22:09,249][DEBUG][bulk.udp ] [Giant-Man] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
[2014-10-10 17:22:09,253][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2014-10-10 17:22:09,255][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2014-10-10 17:22:09,256][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster_concurrent_rebalance] with [2]
[2014-10-10 17:22:09,257][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
[2014-10-10 17:22:09,257][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2014-10-10 17:22:09,258][DEBUG][cluster.routing.allocation.decider] [Giant-Man] using [cluster_concurrent_rebalance] with [2]
[2014-10-10 17:22:09,270][INFO ][node ] [Giant-Man] initialized
[2014-10-10 17:22:09,271][INFO ][node ] [Giant-Man] starting ...
[2014-10-10 17:22:09,290][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select timeout of 500
[2014-10-10 17:22:09,290][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug workaround enabled = false
[2014-10-10 17:22:09,429][DEBUG][transport.netty ] [Giant-Man] Bound to address [/0:0:0:0:0:0:0:0:9300]
[2014-10-10 17:22:09,432][INFO ][transport ] [Giant-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.185.210.54:9300]}
[2014-10-10 17:22:09,454][TRACE][discovery ] [Giant-Man] waiting for 30s for the initial state to be set by the discovery
[2014-10-10 17:22:09,460][TRACE][discovery.zen.ping.multicast] [Giant-Man] [1] sending ping request
[2014-10-10 17:22:10,961][TRACE][discovery.zen.ping.multicast] [Giant-Man] [1] sending ping request
[2014-10-10 17:22:12,464][TRACE][discovery.zen ] [Giant-Man] full ping responses: {none}
[2014-10-10 17:22:12,464][DEBUG][discovery.zen ] [Giant-Man] filtered ping responses: (filter_client[true], filter_data[false]) {none}
[2014-10-10 17:22:12,468][DEBUG][cluster.service ] [Giant-Man] processing [zen-disco-join (elected_as_master)]: execute
[2014-10-10 17:22:12,469][DEBUG][cluster.service ] [Giant-Man] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[2014-10-10 17:22:12,470][INFO ][cluster.service ] [Giant-Man] new_master [Giant-Man][jBBOiyjGS_aGNhomzdcvoQ][ip-10-185-210-54][inet[/10.185.210.54:9300]], reason: zen-disco-join (elected_as_master)
[2014-10-10 17:22:12,503][DEBUG][transport.netty ] [Giant-Man] connected to node [[Giant-Man][jBBOiyjGS_aGNhomzdcvoQ][ip-10-185-210-54][inet[/10.185.210.54:9300]]]
[2014-10-10 17:22:12,503][DEBUG][cluster.service ] [Giant-Man] publishing cluster state version 1
[2014-10-10 17:22:12,504][DEBUG][cluster.service ] [Giant-Man] set local cluster state to version 1
[2014-10-10 17:22:12,506][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: execute
[2014-10-10 17:22:12,506][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2014-10-10 17:22:12,506][TRACE][discovery ] [Giant-Man] initial state set from discovery
[2014-10-10 17:22:12,506][DEBUG][cluster.service ] [Giant-Man] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state (version: 1)
[2014-10-10 17:22:12,507][INFO ][discovery ] [Giant-Man] elasticsearch/jBBOiyjGS_aGNhomzdcvoQ
[2014-10-10 17:22:12,520][DEBUG][cluster.service ] [Giant-Man] processing [local-gateway-elected-state]: execute
[2014-10-10 17:22:12,531][DEBUG][cluster.service ] [Giant-Man] cluster state updated, version [2], source [local-gateway-elected-state]
[2014-10-10 17:22:12,531][DEBUG][cluster.service ] [Giant-Man] publishing cluster state version 2
[2014-10-10 17:22:12,532][DEBUG][cluster.service ] [Giant-Man] set local cluster state to version 2
[2014-10-10 17:22:12,535][INFO ][http ] [Giant-Man] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.185.210.54:9200]}
[2014-10-10 17:22:12,537][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: execute
[2014-10-10 17:22:12,537][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2014-10-10 17:22:12,546][INFO ][gateway ] [Giant-Man] recovered [0] indices into cluster_state
[2014-10-10 17:22:12,546][DEBUG][cluster.service ] [Giant-Man] processing [local-gateway-elected-state]: done applying updated cluster_state (version: 2)
[2014-10-10 17:22:12,546][DEBUG][cluster.service ] [Giant-Man] processing [updating local node id]: execute
[2014-10-10 17:22:12,546][DEBUG][cluster.service ] [Giant-Man] cluster state updated, version [3], source [updating local node id]
[2014-10-10 17:22:12,547][DEBUG][cluster.service ] [Giant-Man] publishing cluster state version 3
[2014-10-10 17:22:12,547][DEBUG][cluster.service ] [Giant-Man] set local cluster state to version 3
[2014-10-10 17:22:12,547][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: execute
[2014-10-10 17:22:12,547][DEBUG][river.cluster ] [Giant-Man] processing [reroute_rivers_node_changed]: no change in cluster_state
[2014-10-10 17:22:12,547][INFO ][node ] [Giant-Man] started
[2014-10-10 17:22:12,548][DEBUG][cluster.service ] [Giant-Man] processing [updating local node id]: done applying updated cluster_state (version: 3)
[2014-10-10 17:22:22,505][DEBUG][cluster.service ] [Giant-Man] processing [routing-table-updater]: execute
[2014-10-10 17:22:22,505][DEBUG][cluster.service ] [Giant-Man] processing [routing-table-updater]: no change in cluster_state

#######################################Cluster node info
############################################################
{

name: Hammer and Anvil
transport_address: inet[/10.101.176.236:9300]
host: ip-10-101-176-236
ip: 10.101.176.236
version: 1.1.0
build: 2181e11
http_address: inet[/10.101.176.236:9200]
settings: {
    path: {
        data: /var/lib/elasticsearch
        work: /tmp/elasticsearch
        home: /usr/share/elasticsearch
        conf: /etc/elasticsearch
        logs: /var/log/elasticsearch
    }
    pidfile: /var/run/elasticsearch.pid
    cluster: {
        name: elasticsearch
    }
    config: /etc/elasticsearch/elasticsearch.yml
    name: Hammer and Anvil
}
os: {
    refresh_interval: 1000
    available_processors: 2
    cpu: {
        vendor: Intel
        model: Xeon
        mhz: 2500
        total_cores: 2
        total_sockets: 2
        cores_per_socket: 32
        cache_size_in_bytes: 25600
    }
    mem: {
        total_in_bytes: 7878475776
    }
    swap: {
        total_in_bytes: 0
    }
}
process: {
    refresh_interval: 1000
    id: 3109
    max_file_descriptors: 65535
    mlockall: false
}
jvm: {
    pid: 3109
    version: 1.7.0_55
    vm_name: OpenJDK 64-Bit Server VM
    vm_version: 24.51-b03
    vm_vendor: Oracle Corporation
    start_time: 1412961257768
    mem: {
        heap_init_in_bytes: 268435456
        heap_max_in_bytes: 1056309248
        non_heap_init_in_bytes: 24313856
        non_heap_max_in_bytes: 224395264
        direct_max_in_bytes: 1056309248
    }
    gc_collectors: [
        ParNew
        ConcurrentMarkSweep
    ]
    memory_pools: [
        Code Cache
        Par Eden Space
        Par Survivor Space
        CMS Old Gen
        CMS Perm Gen
    ]
}
thread_pool: {
    generic: {
        type: cached
        keep_alive: 30s
    }
    index: {
        type: fixed
        min: 2
        max: 2
        queue_size: 200
    }
    get: {
        type: fixed
        min: 2
        max: 2
        queue_size: 1k
    }
    snapshot: {
        type: scaling
        min: 1
        max: 1
        keep_alive: 5m
    }
    merge: {
        type: scaling
        min: 1
        max: 1
        keep_alive: 5m
    }
    suggest: {
        type: fixed
        min: 2
        max: 2
        queue_size: 1k
    }
    bulk: {
        type: fixed
        min: 2
        max: 2
        queue_size: 50
    }
    optimize: {
        type: fixed
        min: 1
        max: 1
    }
    warmer: {
        type: scaling
        min: 1
        max: 1
        keep_alive: 5m
    }
    flush: {
        type: scaling
        min: 1
        max: 1
        keep_alive: 5m
    }
    search: {
        type: fixed
        min: 6
        max: 6
        queue_size: 1k
    }
    percolate: {
        type: fixed
        min: 2
        max: 2
        queue_size: 1k
    }
    management: {
        type: scaling
        min: 1
        max: 5
        keep_alive: 5m
    }
    refresh: {
        type: scaling
        min: 1
        max: 1
        keep_alive: 5m
    }
}
network: {
    refresh_interval: 5000
    primary_interface: {
        address: 10.101.176.236
        name: eth0
        mac_address: 22:00:0B:36:04:47
    }
}
transport: {
    bound_address: inet[/0:0:0:0:0:0:0:0:9300]
    publish_address: inet[/10.101.176.236:9300]
}
http: {
    bound_address: inet[/0:0:0:0:0:0:0:0:9200]
    publish_address: inet[/10.101.176.236:9200]
    max_content_length_in_bytes: 104857600
}
plugins: [
    {
        name: cloud-aws
        version: 2.1.0
        description: Cloud AWS Plugin
        jvm: true
        site: false
    }
    {
        name: mapper-attachments
        version: 2.0.0
        description: Adds the attachment type allowing to parse difference attachment formats
        jvm: true
        site: false
    }
    {
        name: head
        version: NA
        description: No description found.
        url: /_plugin/head/
        jvm: false
        site: true
    }
    {
        name: bigdesk
        version: NA
        description: No description found.
        url: /_plugin/bigdesk/
        jvm: false
        site: true
    }
]

}


I might be wrong, but maybe you should add a space after each ":" character in the yml file.

It sounds like multicast is not disabled and that ec2 discovery is not used.

Some lines should not be needed at all:

Multicast disable
Unicast list of nodes
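
For example, taking two lines from your own config (just a sketch of what I mean):

# without a space after the colon, the setting is not picked up
discovery.type:ec2
discovery.ec2.groups:ES2

# with the space, it is read as a key/value pair
discovery.type: ec2
discovery.ec2.groups: ES2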

HTH

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


Hi David,

Thank you for your quick response. That was a great guess about the space
after ":". It was indeed part of the problem, so I'm now a step further. It
seems that it's trying to establish the connection, but there are plenty of
exceptions stating that the network is unreachable. Why would I get this
exception if I can telnet between the nodes on 9300?

[2014-10-10 20:22:12,184][WARN ][transport.netty ] [Joey Bailey] exception caught on transport layer [[id: 0x5541474b]], closing connection
java.net.SocketException: Network is unreachable
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.elasticsearch.common.netty.channel.Channels.connect(Channels.java:634)
at org.elasticsearch.common.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:705)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:404)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2014-10-10 20:22:12,185][WARN ][transport.netty ] [Joey Bailey] exception caught on transport layer [[id: 0x9e80cd79]], closing connection


Not sure, but it may be related to public/private IPs.
Maybe the debug logs will give you more insight?
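
One thing worth checking, just a guess based on the config you posted: network.publish_host is set to 255.255.255.255, which is a broadcast address that the other node cannot connect back to. Something along these lines might behave better:

# either drop network.publish_host entirely, or point it at this node's own
# private IP (10.185.210.54 / 10.101.176.236 in your case)
network.publish_host: 10.185.210.54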

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs


Hi David,

Thank you for your advice. It really helped me to solve the issue and make
it work.
At the end I had to leave these two:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.185.210.54[9300-9400]","10.101.176.236[9300-9400]"]

and to remove:
network.publish_host: 255.255.255.255

And it finally worked. What turned out to be the biggest problem is what you
mentioned at the beginning: missing spaces after ":", missing spaces at the
beginning of lines and some extra spaces after #. I thought that ":" was just a
delimiter and didn't have to be followed by a space. The strange thing is that
when I have such problems in elasticsearch.yml there are no logs indicating the
problem: Elasticsearch either doesn't log anything and can't start, or it just
ignores the wrong properties.
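
For reference, a minimal sketch of those lines with the YAML spacing that elasticsearch.yml expects (same example IPs as above; flat keys start at the beginning of the line and every ":" is followed by a space):

cluster.name: elasticsearch
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.185.210.54[9300-9400]", "10.101.176.236[9300-9400]"]

With the broken spacing (e.g. discovery.zen.ping.multicast.enabled:false) the setting is either silently ignored or the node refuses to start without a clear error.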

Thanks,
Zoran

On Friday, 10 October 2014 14:11:00 UTC-7, David Pilato wrote:

Not sure but may be related to public/private IP.
Maybe debug logs will give you more insights?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

Le 10 oct. 2014 à 22:40, Zoran Jeremic zoran....@gmail.com a écrit :

Hi David,

Thank you for your quick response. That was a great guess about the space
after ":". It was really the thing that was causing the problem, so I'm now a
step further. It seems that it's trying to establish the connection, but there
are plenty of exceptions stating that Network is unreachable. Why this
exception if I can telnet between nodes on 9300?

[2014-10-10 20:22:12,184][WARN ][transport.netty ] [Joey Bailey] exception caught on transport layer [[id: 0x5541474b]], closing connection
java.net.SocketException: Network is unreachable
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
at org.elasticsearch.common.netty.channel.Channels.connect(Channels.java:634)
at org.elasticsearch.common.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:705)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:404)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2014-10-10 20:22:12,185][WARN ][transport.netty ] [Joey Bailey] exception caught on transport layer [[id: 0x9e80cd79]], closing connection


Zoran, good to hear it is working now.

It should work pretty well with ec2 auto discovery - unicast is a good
starting point but unless you are statically assigning them via cloud
formation (or manually?), it may not be worth the trouble (and it stops you
from dynamically scaling your cluster)

  • make sure you have the ec2 plugin installed.

  • if you use iam profiles, you don't need a key specified in the config
    (this will override the key from the profile). Also make sure you
    manually test that your profile is applied properly (the AWS CLI is a good
    agnostic tool for this).

  • reduce the zen discovery timeout - it seems that it will always start with
    zen then fail over to ec2, and it can take 30 secs or so to time out... (maybe
    it was my bad config; I used to have zen when I was moving from unicast to
    ec2 disco... I don't remember finding an option to disable zen disco).

  • the default logs should show you enough info to debug any of this.

  • your master config shows master false... You want the master with
    master=true and data=false... Obviously you want more than one master (if you
    don't have too much load, start with all nodes available as data and master,
    then separate functionality as needed). Don't forget to set the minimum
    expected # nodes to n-master/2+1 to prevent split brain scenarios (a minimal
    sketch follows this list).
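
A minimal sketch of that in elasticsearch.yml terms, assuming the simple starting point of a small cluster where every node is master-eligible and holds data (three master-eligible nodes in this example):

node.master: true
node.data: true
discovery.zen.minimum_master_nodes: 2
# 2 = (3 master-eligible nodes / 2) + 1, rounded down - protects against split brain

With dedicated masters you would instead run a couple of small nodes with node.master: true and node.data: false, and set node.master: false on the data nodes.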

Hi Norberto,

Thank you for your advice. This is really helpful, since I have never used
Elasticsearch in a cluster before and have never gone live with a large number
of users. My previous experience was with ES on a single node and a very small
number of users, so I'm still concerned about how this will work. The main
problem is that I don't know how many users I could expect, so I should be
ready to expand the cluster if necessary.

So far, I have created a cluster of 3 m3.large instances with 3 indexes (5
shards and 2 replicas).
I couldn't manage to connect it with ec2 autodiscovery. The only option
that worked for me is having one node that the other nodes refer to as a
unicast host. I think it might work if I have one node that will always be on.

You were right about having the keys in the config. I didn't need them. Can I
also remove them from my Java application? I guess they could be removed if the
launch configuration contains an IAM instance profile.
I also decreased the zen discovery timeout to 3s.

  • your master config shows master false... You want the master with
    master=true and data=false... Obviously you want more than one master (if you
    don't have too much load start with all nodes available as data and master,
    then separate functionality as needed). Don't forget to set the minimum
    expected # nodes to n-master/2+1 to prevent split brain scenarios.

I've set all 3 nodes as master and data, but I'm not sure that I understand
what the advantage is of having nodes that are not master nodes. I know these
nodes will not be elected as master, but what is the idea behind that, and what
would I get if I set the master not to have data on it? Would it increase
performance?

It should work pretty well with ec2 auto discovery - unicast is a good
starting point but unless you are statically assigning them via cloud
formation (or manually?), it may not be worth the trouble (and it stops you
from dynamically scaling your cluster)

How will an ES node behave with Amazon auto-scaling, and could I use auto
scaling like this to meet high load? If I have already set 5 shards and 2
replicas on the previous 3 nodes, will these shards and replicas be moved to
the new nodes, and how long might that take? If that is what happens, I guess
it's not a good idea to auto-scale a new ES node when ES use is intense and
then turn it off later.
Sorry if these questions are too naive.

Thanks,
Zoran


Inline below ...

On Sun, Oct 12, 2014 at 5:28 AM, Zoran Jeremic zoran.jeremic@gmail.com
wrote:

Hi Norberto,

Thank you for your advices. This is really helpful, since I have never
used elasticsearch in the cluster before, and never had went live with a
number of users. My previous experience was on ES single node and very
small number of users, so I'm still concern how this will work. The main
problem is that I don't know how many users I could expect, so I should be
ready to expand the cluster if it's necessary.

Sure - that's one of the nice things about ES , and AWS - you can keep
tuning as you go...

So far, I created a cluster of 3 m3.large instances having 3 indexes (5
shards and 2 replicas).
I couldn't manage to connect it with ec2 autodiscovery. The only option
that worked for me is having one node that will be referred from other
nodes as unicast host. I think it might work if I have one node that will
always been on.

build for failure.

You were right about having a keys in config. I didn't need it. Can I also
remove this from my java application? I guess it could be removed if launch
configuration contains IAM instance profile.

I don't know why your app needs AWS credentials, so I cannot really answer
that - but, in general, if the AWS library you use supports IAM profiles
then you should be able to remove hardcoded creds. YMMV.

I also decreased zen discovery timeout to 3s.

  • your master config shows master false... You want the master with
    master =true and data = false... Obviously you want more than one master (
    if you don't have too much load start with all nodes available as data and
    master, then separate functionality as needed). Don't forget to set the
    minimum expected # nodes to n-master/2+1 to prevent split brain scenarios.
    I've set all 3 nodes as master and data, but I'm not sure that I
    understand what is the advantage of having nodes that are not master nodes.
    I know these nodes will not be elected as master, but what is the idea for
    that, and what would I get if I set master not to have data on it? Would it
    increase performance?

TL;DR - scalability and performance: there are certain operations which need
to be performed by the master node in a timely manner. If your node is already
too busy handling searches, 'master operations' will suffer (and your whole
cluster will slow down).

It is much cheaper to run separate, smaller master (and load balancer)
nodes, separate from your data nodes, than to scale your data nodes up + out
to handle all the operations.
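
As a rough sketch of that split in elasticsearch.yml terms (illustrative only, following the node.master / node.data settings discussed above):

# dedicated master node
node.master: true
node.data: false

# 'load balancer' / client node
node.master: false
node.data: false

# data node
node.master: false
node.data: true

The client nodes only route requests and hold no shards, so they (and the dedicated masters) can run on much smaller instances than the data nodes.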

It should work pretty well with ec2 auto discovery - unicast is a good
starting point but unless you are statically assigning them via cloud
formation (or manually?), it may not be worth the trouble (and it stops you
from dynamically scaling your cluster)
How will ES node behave in Amazon auto-scale and could it be used like I'm
using auto scaling to meet high load? If I already have set 5 shards and 2
replicas on previous 3 nodes, will these shards and replicas be moved to
new nodes, and how long it might take for this? If this is what is going
on, I guess it's not good idea to auto-scale new ES node when I have a high
intensity of ES use, and then to turn it off later.

yeah, that's definitely not something that will always work with
autoscaling.

  • You can use autoscaling to ensure the minimum # of nodes is defined (ie,
    automatic rebuild of killed node).
  • if you know you have, say, 8 hours with 50% more traffic, you can
    increase the number of nodes some time before peak, increase # of replicas
    .... after the peak, reduce replica # and remove nodes... Not autoscaling
    per se, but building from the get go without hardcoded hostnames will help
    you do things like this.

btw, you also want to play with routing awareness, so your replicas are
distributed across different AZs.

AND beware of cost of inter-AZ traffic :slight_smile: ( yes, it conflicts with the 'AZ
routing awareness')
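
A sketch of what the routing awareness part looks like, assuming the cloud-aws plugin is installed and cloud.node.auto_attributes is enabled (as in the configs earlier in the thread) so that each node gets an aws_availability_zone attribute:

cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone

With that attribute in the awareness list, Elasticsearch tries to allocate a shard's primary and replicas in different availability zones.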

Sorry if these questions are too naive.

:slight_smile: not at all!

good luck


--
Norberto 'Beto' Meijome


Hi Norberto,

Thank you so much for the great advice.

As I'm starting tomorrow with real users, I'll keep the configuration I have
at the moment for now (all nodes have master=true, data=true). I hope it will
work.
For zone availability, I had to go with everything in one zone. The main
reason was the problem of connecting the ELB-controlled application instances
with the backend instances (MySQL, MongoDB and Elasticsearch). It's not
possible to add a rule to the backend instances using port + ELB security
group if the instances are in different zones, so I had to keep everything in
one zone. The other reason was, as you mentioned, price.

Thanks,
Zoran

On Sunday, 12 October 2014 16:50:34 UTC-7, Norberto Meijome wrote:

Inline below ...

On Sun, Oct 12, 2014 at 5:28 AM, Zoran Jeremic <zoran....@gmail.com
<javascript:>> wrote:

Hi Norberto,

Thank you for your advices. This is really helpful, since I have never
used elasticsearch in the cluster before, and never had went live with a
number of users. My previous experience was on ES single node and very
small number of users, so I'm still concern how this will work. The main
problem is that I don't know how many users I could expect, so I should be
ready to expand the cluster if it's necessary.

Sure - that's one of the nice things about ES , and AWS - you can keep
tuning as you go...

So far, I created a cluster of 3 m3.large instances having 3 indexes (5
shards and 2 replicas).
I couldn't manage to connect it with ec2 autodiscovery. The only option
that worked for me is having one node that will be referred from other
nodes as unicast host. I think it might work if I have one node that will
always been on.

build for failure.

You were right about having a keys in config. I didn't need it. Can I also
remove this from my java application? I guess it could be removed if launch
configuration contains IAM instance profile.

I don't know why your app needs AWS credentials, so I cannot really answer
that - but, in general, if the AWS library you use supports IAM profiles
then you should be able to remove hardcoded creds. YMMV.

I also decreased zen discovery timeout to 3s.

  • your master config shows master false... You want the master with
    master =true and data = false... Obviously you want more than one master (
    if you don't have too much load start with all nodes available as data and
    master, then separate functionality as needed). Don't forget to set the
    minimum expected # nodes to n-master/2+1 to prevent split brain scenarios.
    I've set all 3 nodes as master and data, but I'm not sure that I
    understand what is the advantage of having nodes that are not master nodes.
    I know these nodes will not be elected as master, but what is the idea for
    that, and what would I get if I set master not to have data on it? Would it
    increase performance?

TL;DR - scalability, performance : There are certain operations which need
to be performed by master node in a timely . If your node is already too
busy handling searches, 'master operations' will suffer( and your whole
cluster will slow down ).

It is much cheaper to run separate, smaller master (and load balancer )
nodes , separate from your data nodes, than to scale up + out your data
nodes to handle all the operations.

Elasticsearch Platform — Find real-time answers at scale | Elastic

It should work pretty well with ec2 auto discovery - unicast is a good
starting point but unless you are statically assigning them via cloud
formation (or manually?), it may not be worth the trouble (and it stops you
from dynamically scaling your cluster)
How will ES node behave in Amazon auto-scale and could it be used like I'm
using auto scaling to meet high load? If I already have set 5 shards and 2
replicas on previous 3 nodes, will these shards and replicas be moved to
new nodes, and how long it might take for this? If this is what is going
on, I guess it's not good idea to auto-scale new ES node when I have a high
intensity of ES use, and then to turn it off later.

yeah, that's definitely not something that will always work with
autoscaling.

  • You can use autoscaling to ensure the minimum # of nodes is defined
    (ie, automatic rebuild of killed node).
  • if you know you have, say, 8 hours with 50% more traffic, you can
    increase the number of nodes some time before peak, increase # of replicas
    .... after the peak, reduce replica # and remove nodes... Not autoscaling
    per se, but building from the get go without hardcoded hostnames will help
    you do things like this.

btw, you also want to play with routing awareness, so your replicas are
distributed across different AZ.

AND beware of cost of inter-AZ traffic :slight_smile: ( yes, it conflicts with the 'AZ
routing awareness')

Sorry if these questions are too naive.

:slight_smile: not at all!

good luck

Thanks,
Zoran

On Friday, 10 October 2014 20:43:02 UTC-7, Norberto Meijome wrote:

Zoran, good to hear it is working now.

It should work pretty well with ec2 auto discovery - unicast is a good
starting point but unless you are statically assigning them via cloud
formation (or manually?), it may not be worth the trouble (and it stops you
from dynamically scaling your cluster)

  • make sure u have the ec2 plugin installed.

  • if you use iam profiles, you don't need a key specified in the config
    (this will override the key from the Profile). Also make sure you
    manually test your profile is applied properly ( AWS CLI is a good
    agnostic tool for this).

  • reduce the zen discovery timeout - it seems that it will always start w
    zen then failover to ec2 and it can take 30secs or so to timeout... ( maybe
    it was my bad config, I used to have zen when I was moving from unicast to
    ec2 disco ...I don't remember finding an option to disabling zen disco).

  • the default logs should show you enough info to debug any of this.

  • your master config shows master false... You want the master with master
    =true and data = false... Obviously you want more than one master ( if you
    don't have too much load start with all nodes available as data and master,
    then separate functionality as needed). Don't forget to set the minimum
    expected # nodes to n-master/2+1 to prevent split brain scenarios.
    On 11/10/2014 1:38 pm, "Zoran Jeremic" zoran....@gmail.com wrote:

Hi David,

Thank you for your advices. It really helped me to solve the issue and
make it works.
At the end I had to leave these two:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.185.210.54[9300-9400]","
10.101.176.236[9300-9400]"]

and to remove:
network.publish_host: 255.255.255.255

And it got work finally. What turned to be the biggest problem is what you
mentioned at the beginning, missing spaces after ":", missing spaces at the
beginning of line and some extra spaces after #. I thought that : is
delimiter, and it doesn't have to be followed by space. Strange thing is
that if I have such problems in elasticsearch.yml, there is no logs that
indicates that there is some problem. It doesn't log anything and can't
start elasticsearch, or just ignore wrong properties.

Thanks,
Zoran

On Friday, 10 October 2014 14:11:00 UTC-7, David Pilato wrote:

Not sure but may be related to public/private IP.
May be debug logs will give you more insights?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

Le 10 oct. 2014 à 22:40, Zoran Jeremic zoran....@gmail.com a écrit :

Hi David,

Thank you for your quick response. That was great guess about the space
after ":". It was really something that made a problem, so I'm now a step
forward. It seems that it's trying to establish the connection, but there
are a plenty of exceptions stating that Nework is unreachable. Why this
exception if I can telnet between nodes on 9300?

[2014-10-10 20:22:12,184][WARN ][transport.netty          ] [Joey Bailey]
exception caught on transport layer [[id: 0x5541474b]], closing connection
java.net.SocketException: Network is unreachable
    at sun.nio.ch.Net.connect0(Native Method)
    at sun.nio.ch.Net.connect(Net.java:465)
    at sun.nio.ch.Net.connect(Net.java:457)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
    at org.elasticsearch.common.netty.channel.Channels.connect(Channels.java:634)
    at org.elasticsearch.common.netty.channel.AbstractChannel.connect(AbstractChannel.java:207)
    at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229)
    at org.elasticsearch.common.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182)
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:705)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:404)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
[2014-10-10 20:22:12,185][WARN ][transport.netty          ] [Joey Bailey]
exception caught on transport layer [[id: 0x9e80cd79]], closing connection
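(One thing I'm starting to suspect, in case it matters: with

network.publish_host: 255.255.255.255

each node advertises the broadcast address to its peer, so the peer tries to
open its transport connection to 255.255.255.255:[9300-9400] instead of the
private IP I can telnet to - which would explain "Network is unreachable"
even though telnet on 9300 works.)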

On Friday, 10 October 2014 12:21:18 UTC-7, David Pilato wrote:

I might be wrong, but maybe you should add a space after each ":" char in the
yml file.

It sounds like multicast is not disabled and that ec2 discovery is not
used.

Some lines should not be needed (EC2 discovery replaces them):

the multicast disable setting
the unicast list of nodes

HTH

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs



I am pretty sure you can open the ports for the security group the ELB
belongs to, regardless of the AZ (AZ, not region) - unless you are using
network ACLs.
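Something like this, for example (a hypothetical CloudFormation-style sketch;
the group IDs are placeholders): the backend group references the app/ELB
security group directly, and that kind of rule is not tied to an AZ:

BackendFromAppIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-backend0000000           # placeholder: backend (ES/MySQL/MongoDB) group
    IpProtocol: tcp
    FromPort: 9200
    ToPort: 9200
    SourceSecurityGroupId: sg-apptier0000000   # placeholder: group attached to the ELB / app instances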

Anyway, not really ES... PM me if you want to continue the AWS discussion :slight_smile:

On 16/10/2014 3:37 pm, "Zoran Jeremic" zoran.jeremic@gmail.com wrote:

For zone availability, I had to go with everything in one zone. The main
reason was the problem of connecting the ELB-controlled application instances
to the backend instances (MySQL, MongoDB and Elasticsearch). It's not
possible to add a rule to the backend instances that allows a port + the ELB
security group if the instances are in different zones, so I had to keep
everything in one zone.
