Unassigned_shards problem

So...
I decided to create a three-node cluster (two data nodes and one arbiter).
All three nodes are using EC2 discovery.

But when I call /_cluster/health on the master node I get the following response:

{
  "cluster_name": "SCluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 0,
  "active_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 48
}

I can see two troubling values:

  • active_primary_shards: 0
  • unassigned_shards: 48

Another troubling issue is that both the other data node and the arbiter respond with

  • status: "green"

which makes me believe that they are not really connected to the same cluster.
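For reference, the health check I'm running is just an HTTP GET against the cluster health endpoint on each node, something like this (assuming the default HTTP port 9200):

curl 'http://localhost:9200/_cluster/health?pretty'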


Hi Daniel,

It looks like your nodes do not discover each other - number_of_nodes should be 3 for you.

I'd look at the logs for clues about why discovery fails.

Cheers,
Boaz


OK... that's embarrassing... I can't locate the log file in "/var/log/elasticsearch", even though it's configured in elasticsearch.yml.


Do you use the RPM/Debian packages, or is it an install from a tarball/zip? Can you share the elasticsearch.yml configuration? Also, settings can potentially be overridden using command-line parameters (but I assume you don't do that).
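For example, something like this when starting from a tarball install would override settings on the command line (a hypothetical example - adjust the values to your setup):

bin/elasticsearch -Des.cluster.name=SCluster -Des.path.logs=/var/log/elasticsearch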


I installed from tar.

My elasticsearch.yml looks like this:

cluster.name: SCluster
node.name: "SCNode1Master"

node.master: true
index.number_of_shards: 8
index.number_of_replicas: 2

path:
  logs: /var/log/elasticsearch

cloud:
  aws:
    access_key: XXXXXX
    secret_key: XXXXXXXXX

discovery:
  type: ec2
  ec2:
    groups: GROUP_NAME

cloud.node.auto_attributes: true


Hi Daniel,

Just double checking - do you have free space on the log volume, /var/log/elasticsearch? Also check for a logs folder inside the directory where you untarred ES.
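For example (a quick sketch - the install path below is a placeholder for wherever you untarred the distribution):

df -h /var/log/elasticsearch              # free space on the configured log volume
ls -l /path/to/elasticsearch-0.90.1/logs  # default logs dir inside the untarred directory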

Cheers,
Boaz


OK :)
I found the logs.
I think maybe cluster.service is trying to find the other nodes on port 9300...

So, here it is:

[2013-11-25 21:56:08,072][INFO ][node            ] [SCNode1Master] {0.90.1}[926]: initializing ...
[2013-11-25 21:56:08,196][INFO ][plugins         ] [SCNode1Master] loaded [cloud-aws], sites
[2013-11-25 21:56:13,667][INFO ][node            ] [SCNode1Master] {0.90.1}[926]: initialized
[2013-11-25 21:56:13,667][INFO ][node            ] [SCNode1Master] {0.90.1}[926]: starting ...
[2013-11-25 21:56:13,849][INFO ][transport       ] [SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/:9300]}
[2013-11-25 21:56:16,962][INFO ][cluster.service ] [SCNode1Master] new_master [SCNode1Master][<SOME_KEY>][inet[/ter=true}, reason: zen-disco-join (elected_as_master)
[2013-11-25 21:56:17,014][INFO ][discovery       ] [SCNode1Master] SCluster/<SOME_KEY>
[2013-11-25 21:56:17,053][INFO ][http            ] [SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/:9200]}
[2013-11-25 21:56:17,053][INFO ][node            ] [SCNode1Master] {0.90.1}[926]: started
[2013-11-25 21:56:17,114][INFO ][gateway         ] [SCNode1Master] recovered [2] indices into cluster_state


9300 is the port ES uses for inter-node transport comms, so that's fine.

Browse through the logs on each of your nodes and see if you can find
references to the others joining the master.
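For example, something like this on each node (a rough sketch - the file name assumes the default <cluster-name>.log under your configured path.logs, and the exact log wording can vary):

grep -i 'cluster.service' /var/log/elasticsearch/SCluster.log | grep -iE 'added|new_master|detected_master'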

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


Nope...
Both the arbiter and the second node are flagged with "new_master"

[2013-11-25 22:49:39,212][INFO ][cluster.service ] [Node3] new_master [Node3][<SOME_KEY>][inet[/:9300]], reason: zen-disco-join (elected_as_master)

[2013-11-25 22:49:44,079][INFO ][cluster.service ] [Arbiter] new_master [Arbiter][<SOME_KEY>][inet[/:9300]], reason: zen-disco-join (elected_as_master)


Then that's the issue - each node has elected itself as master, so they are not forming one cluster.

I don't have experience with EC2 discovery, but you might want to try unicast discovery to rule that out as the issue - http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html
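For example, in elasticsearch.yml on each node (a rough sketch - the hosts below are placeholders for your instances' private IPs):

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1:9300", "10.0.0.2:9300", "10.0.0.3:9300"]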

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


OK.
So I used unicast, and let's say I can live with that...

My cluster is acting all weird again...

Still a red status, and unassigned_shards is 48:

{
  "cluster_name": "SCluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_primary_shards": 8,
  "active_shards": 24,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 48
}

In my log I can see the entry:
"dangling index, exists on local file system, but not in cluster metadata, scheduling to delete in [2h], auto import to cluster state [YES]"

and I can't create it on the next node because of an "IndexAlreadyExistsException"...

So... what does the red status mean??


It's red because some primary shards are still unassigned.

Given your previous issues, you might want to reload the data into new indices and start afresh?
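For example (a sketch, assuming the default port; the index name is a placeholder):

curl 'http://localhost:9200/_cluster/health?level=indices&pretty'   # shows which indices hold the unassigned primaries
curl -XDELETE 'http://localhost:9200/my_index'                      # drop a broken index before reloading it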

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


I've deleted all of my data and now I have "only" 24 unassigned_shards...


Hi Daniel,

The nodes need to be able to communicate with each other on port 9300. Did you set up a security group for the servers that allows access on port 9300 between them?
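A quick way to check from one node (a sketch - the address below is a placeholder for another node's private IP):

nc -zv 10.0.0.2 9300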

I think you have a configuration issue. You don't need to use unicast discovery on EC2 - you can use the discovery mechanism supplied by the AWS plugin instead. Let's first find the cause of that, and then look at the unassigned shards, as they will probably be caused by the same thing.

Where did you find the logs in the end?

Cheers,
Boaz

On Tuesday, November 26, 2013 12:30:30 AM UTC+1, DanielR wrote:

I've deleted all of my data and now i have "only" 24 unassigned_shards..

On Tuesday, November 26, 2013 1:27:46 AM UTC+2, Mark Walkom wrote:

It's red because you will have unassigned primaries.

Given your previous issues you might want to reload the data into new
indexes and start afresh?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

On 26 November 2013 10:22, DanielR danielr...@gmail.com wrote:

ok.
so I used unicast and let's say I can live with this..

my cluster is acting all weird again...

still a red status and unassigned_shards is 48

{

  • cluster_name: "SCluster",
  • status: "red",
  • timed_out: false,
  • number_of_nodes: 3,
  • number_of_data_nodes: 3,
  • active_primary_shards: 8,
  • active_shards: 24,
  • relocating_shards: 0,
  • initializing_shards: 0,
  • unassigned_shards: 48

}

in my log I can see an entry
"dangling index, exists on local file system, but not in cluster
metadata, scheduling to delete in [2h], auto import to cluster state [YES]"

and I can't create it on the next node because
"IndexAlreadyExistsException"...

so... what does the red status mean??

On Tuesday, November 26, 2013 12:59:19 AM UTC+2, Mark Walkom wrote:

Then that's the issue.

I don't have experience with EC2 discovery, but you might want to try
unicast discovery to rule that out as the issue -
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

On 26 November 2013 09:57, DanielR danielr...@gmail.com wrote:

Nope...
Both the arbiter and the second node are flagged with "new_master"

[2013-11-25 22:49:39,212][INFO ][cluster.service ] [Node3]
new_master [Node3][<SOME_KEY>][inet[/:9300]], reason:
zen-disco-join (elected_as_master)

[2013-11-25 22:49:44,079][INFO ][cluster.service ] [Arbiter]
new_master [Arbiter][<SOME_KEY>][inet[/:9300]], reason:
zen-disco-join (elected_as_master)

On Tuesday, November 26, 2013 12:53:59 AM UTC+2, Mark Walkom wrote:

9300 is the port ES uses for intercluster comms, so that's fine.

Browse through the logs on each of your nodes and see if you can find
references to the others joining the master.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com

On 26 November 2013 09:51, DanielR danielr...@gmail.com wrote:

ok :)
found the logs
I think maybe cluster.service is trying to find the other nodes on
port 9300...

so.. here it is:

[2013-11-25 21:56:08,072][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: initializing ...
[2013-11-25 21:56:08,196][INFO ][plugins ]
[SCNode1Master] loaded [cloud-aws], sites
[2013-11-25 21:56:13,667][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: initialized
[2013-11-25 21:56:13,667][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: starting ...
[2013-11-25 21:56:13,849][INFO ][transport ]
[SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9300]},
publish_address {inet[/:9300]}
[2013-11-25 21:56:16,962][INFO ][cluster.service ]
[SCNode1Master] new_master [SCNode1Master][<SOME_KEY>][inet[/ter=true},
reason: zen-disco-join (elected_as_master)
[2013-11-25 21:56:17,014][INFO ][discovery ]
[SCNode1Master] SCluster/<SOME_KEY>
[2013-11-25 21:56:17,053][INFO ][http ]
[SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9200]},
publish_address {inet[/:9200]}
[2013-11-25 21:56:17,053][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: started
[2013-11-25 21:56:17,114][INFO ][gateway ]
[SCNode1Master] recovered [2] indices into cluster_state
~

On Monday, November 25, 2013 3:07:35 PM UTC+2, Boaz Leskes wrote:

Hi Daniel,

Just double checking - you do have space on the logs path,
/var/log/elasticsearch? Also check for a logs folder in the directory where
you untarred ES.

Cheers,
Boaz

On Friday, November 22, 2013 4:47:23 PM UTC+1, DanielR wrote:

I installed from tar.

my configuration.yml looks like this:

cluster.name: SCluster
node.name: "SCNode1Master"

node.master: true
index.number_of_shards: 8

index.number_of_replicas: 2
path:
logs: /var/log/elasticsearch

cloud:
aws:
access_key: XXXXXX
secret_key: XXXXXXXXX
discovery:
type: ec2
ec2:
groups: GROUP_NAME

cloud.node.auto_attributes: true

On Friday, November 22, 2013 3:53:16 PM UTC+2, Boaz Leskes wrote:

Do you use the rpm/debian packages or is it an install from a tarball/zip?
Can you share the elasticsearch.yml configuration? Also, this can
potentially be overridden using command line parameters (but I assume
you don't do that).


The logs were found in the es_home directory, like you suggested.
And I did open the ports (9200-9300) on the security group..

Unicast is using the same port, no?

Sent from my iPhone.
דניאל
0546350772
http://il.linkedin.com/in/danielziv


Hi Daniel,

ES will use 9300 for internal cross-node communication (actually 9300-9400,
but that's only relevant if you run multiple nodes on a single server).
9200 is used for HTTP requests to the API.

Given the fact that the logging settings were not picked up I think the
source of your problem is the settings file. I'd check it for correct
indentation / structure.

Cheers,
Boaz
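
For illustration, here is roughly how the blocks from the configuration quoted
earlier would look with consistent two-space indentation (a sketch only - the
flat dotted form, e.g. path.logs: /var/log/elasticsearch, is equally valid and
sidesteps indentation mistakes):

cluster.name: SCluster
node.name: "SCNode1Master"
node.master: true

index.number_of_shards: 8
index.number_of_replicas: 2

path:
  logs: /var/log/elasticsearch

cloud:
  aws:
    access_key: XXXXXX
    secret_key: XXXXXXXXX
  node:
    auto_attributes: true

discovery:
  type: ec2
  ec2:
    groups: GROUP_NAME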


Sounds great! But is there something I can use to verify my config file?

Also, every other setting entry seems to work just fine..

My main concern is the shard allocation. Even with the unicast
discovery, they should be allocated.

Sent from my iPhone.
דניאל
0546350772
http://il.linkedin.com/in/danielziv
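
A note on why they may not all be allocated, assuming the arbiter runs with
node.data: false and the index settings quoted earlier (8 shards, 2 replicas):

copies per shard  = 1 primary + 2 replicas = 3
data nodes        = 2
assignable copies = 2   (two copies of the same shard never share a node)

So even once discovery works, one copy of every shard stays unassigned and the
cluster can reach yellow at best with 2 replicas spread over two data nodes.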


On ec2, multicast won't work, so you need unicast.
If you use the ec2 plugin (you DO want to use it ;) ), disable zen
discovery; it seems the ec2 plugin only kicks in after zen has given up
(maybe just in the versions I run, but it seems pretty consistent).
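
Roughly, the two approaches being discussed look like this in elasticsearch.yml
(the host addresses are placeholders and the setting names follow the 0.90.x
docs and the cloud-aws plugin, so treat this as a sketch rather than a drop-in
config):

# plain zen unicast - no plugin, list the other nodes yourself
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.1:9300", "10.0.0.2:9300"]

# or: EC2 discovery via the cloud-aws plugin, which builds the host list
# from the instances in the given security group
discovery.type: ec2
discovery.ec2.groups: GROUP_NAME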

Hi Daniel,

You can post your elasticsearch.yml as a gist and I'll have a look.

As to the unassigned shards, what are your index settings? How many shards
and how many replicas?

Cheers,
B
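
As a rough cross-check against the settings quoted earlier (8 shards and 2
replicas per index, plus "recovered [2] indices into cluster_state" in the
startup log), the numbers line up:

8 primaries + (8 x 2) replicas  = 24 shards per index
2 indices x 24 shards per index = 48 shards

which matches the unassigned_shards: 48 reported while only one node was in
the cluster.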

On Tue, Nov 26, 2013 at 11:33 AM, Daniel Rasta danielrastaziv@gmail.comwrote:

Sounds great! But is there something I can use to verify my config file?

Also, every other setting entry seems to work just fine..

My main consideration is the shards allocation. Even with the unicast
discovery, they should be allocated.

Sent from my iPhone.
דניאל
0546350772
http://il.linkedin.com/in/danielziv

On 26 בנוב 2013, at 10:28, Boaz Leskes b.leskes@gmail.com wrote:

Hi Daniel,

ES will use 9300 for internal cross node communication (actually 9300-9400
but that's only relevant if you run multiple nodes on a single server).
9200 is used for HTTP request to the API.

Given the fact that the logging settings were not picked up I think the
source of your problem is the settings file. I'd check it for correct
indentation / structure.

Cheers,
Boaz

On Tue, Nov 26, 2013 at 7:29 AM, Daniel Rasta danielrastaziv@gmail.comwrote:

The logs were found on the es_home directory, like you suggested.
And i did opened the ports (9200-9300) on the security group..

Unicast is using the same port, no?

Sent from my iPhone.
דניאל
0546350772
http://il.linkedin.com/in/danielziv

On 26 בנוב 2013, at 08:26, Boaz Leskes b.leskes@gmail.com wrote:

Hi Daniel,

The nodes need to be able to communicate with each other on port 9300 .
Did you set up a security group for the servers that allows access on port
9300 between them?

I think you have a configuration issue. You don't need to use unicast
discovery on ec2 but rather the discovery mechanism supplied by the aws
plugin. Let's first find the cause of that and then look at the unassigned
shards as they will probably be caused by the same thing.

Where did you find the logs in the end?

Cheers,
Boaz

On Tuesday, November 26, 2013 12:30:30 AM UTC+1, DanielR wrote:

I've deleted all of my data and now i have "only" 24 unassigned_shards..

On Tuesday, November 26, 2013 1:27:46 AM UTC+2, Mark Walkom wrote:

It's red because you will have unassigned primaries.

Given your previous issues you might want to reload the data into new
indexes and start afresh?

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.comhttp://www.google.com/url?q=http%3A%2F%2Fwww.campaignmonitor.com&sa=D&sntz=1&usg=AFQjCNFv30c-WBiP6sfBmxXaWBP5YBZg1Q

On 26 November 2013 10:22, DanielR danielr...@gmail.com wrote:

ok.
so i used unicast and let's say i can live with this..

my cluster is acting all weird again...

still a red status and unassigned_shards is 48

{

  • cluster_name: "SCluster",
  • status: "red",
  • timed_out: false,
  • number_of_nodes: 3,
  • number_of_data_nodes: 3,
  • active_primary_shards: 8,
  • active_shards: 24,
  • relocating_shards: 0,
  • initializing_shards: 0,
  • unassigned_shards: 48

}

in my log i can see an entry
"dangling index, exists on local file system, but not in cluster
metadata, scheduling to delete in [2h], auto import to cluster state [YES]"

and i can't create it on the next node because "
IndexAlreadyExistsException"...

so... does the red status means??

On Tuesday, November 26, 2013 12:59:19 AM UTC+2, Mark Walkom wrote:

Then that's the issue.

I don't have experience with EC2 discovery, but you might want to try
unicast discovery to rule that out as the issue -
Elasticsearch Platform — Find real-time answers at scale | Elastic
reference/current/modules-discovery-zen.htmlhttp://www.google.com/url?q=http%3A%2F%2Fwww.elasticsearch.org%2Fguide%2Fen%2Felasticsearch%2Freference%2Fcurrent%2Fmodules-discovery-zen.html&sa=D&sntz=1&usg=AFQjCNG9-58KP15vouPfc1b49es6XbmwBQ

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.comhttp://www.google.com/url?q=http%3A%2F%2Fwww.campaignmonitor.com&sa=D&sntz=1&usg=AFQjCNFv30c-WBiP6sfBmxXaWBP5YBZg1Q

On 26 November 2013 09:57, DanielR danielr...@gmail.com wrote:

Nope...
Both the arbiter and the second node are flagged with "new_master"

[2013-11-25 22:49:39,212][INFO ][cluster.service ] [Node3]
new_master [Node3][<SOME_KEY>][inet[/:9300]], reason:
zen-disco-join (elected_as_master)

[2013-11-25 22:49:44,079][INFO ][cluster.service ]
[Arbiter] new_master [Arbiter][<SOME_KEY>][inet[/:9300]],
reason: zen-disco-join (elected_as_master)

On Tuesday, November 26, 2013 12:53:59 AM UTC+2, Mark Walkom wrote:

9300 is the port ES uses for intercluster comms, so that's fine.

Browse through the logs on each of your nodes and see if you can
find references to the others joining the master.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.comhttp://www.google.com/url?q=http%3A%2F%2Fwww.campaignmonitor.com&sa=D&sntz=1&usg=AFQjCNFv30c-WBiP6sfBmxXaWBP5YBZg1Q

On 26 November 2013 09:51, DanielR danielr...@gmail.com wrote:

ok :slight_smile:
found the logs
I think maybe cluster.service is trying to find the other nodes on
port 9300...

so.. here it is:

[2013-11-25 21:56:08,072][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: initializing ...
[2013-11-25 21:56:08,196][INFO ][plugins ]
[SCNode1Master] loaded [cloud-aws], sites
[2013-11-25 21:56:13,667][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: initialized
[2013-11-25 21:56:13,667][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: starting ...
[2013-11-25 21:56:13,849][INFO ][transport ]
[SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9300]},
publish_address {inet[/:9300]}
[2013-11-25 21:56:16,962][INFO ][cluster.service ]
[SCNode1Master] new_master [SCNode1Master][<SOME_KEY>][inet[/ter=true},
reason: zen-disco-join (elected_as_master)
[2013-11-25 21:56:17,014][INFO ][discovery ]
[SCNode1Master] SCluster/<SOME_KEY>
[2013-11-25 21:56:17,053][INFO ][http ]
[SCNode1Master] bound_address {inet[/0:0:0:0:0:0:0:0:9200]},
publish_address {inet[/:9200]}
[2013-11-25 21:56:17,053][INFO ][node ]
[SCNode1Master] {0.90.1}[926]: started
[2013-11-25 21:56:17,114][INFO ][gateway ]
[SCNode1Master] recovered [2] indices into cluster_state
~
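That log shows the node electing itself master without ever seeing the other two. To find out what the ec2 discovery is actually pinging, the discovery logger can be turned up; a sketch, assuming the stock logging.yml of a 0.90.x tar install:

# logging.yml - raise the discovery log level under the existing "logger:"
# section, then restart the node
logger:
  discovery: TRACE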

On Monday, November 25, 2013 3:07:35 PM UTC+2, Boaz Leskes wrote:

Hi Daniel,

Just double checking - you do have space on the logs:
/var/log/elasticsearch? Also check for a logs folder where you untarred ES
into.

Cheers,
Boaz

On Friday, November 22, 2013 4:47:23 PM UTC+1, DanielR wrote:

I installed from tar.

my elasticsearch.yml looks like this:

cluster.name: SCluster
node.name: "SCNode1Master"

node.master: true
index.number_of_shards: 8
index.number_of_replicas: 2

path:
  logs: /var/log/elasticsearch

cloud:
  aws:
    access_key: XXXXXX
    secret_key: XXXXXXXXX

discovery:
  type: ec2
  ec2:
    groups: GROUP_NAME

cloud.node.auto_attributes: true
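Whether that file is being picked up at all can be checked from the API side; a sketch, assuming the 0.90.x node-info endpoint accepts the settings flag:

# dump the settings each node actually started with (path.logs included)
curl -s 'http://localhost:9200/_cluster/nodes?settings=true&pretty=true'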

On Friday, November 22, 2013 3:53:16 PM UTC+2, Boaz Leskes wrote:

Do you use the rpm/debian packages or is it an install from a
tarball/zip? Can you share the elasticsearch.yml configuration? Also,
this can potentially be overridden using command line parameters (but I
assume you don't do that).

On Fri, Nov 22, 2013 at 1:59 PM, DanielR danielr...@gmail.com wrote:

ok.. that's embarrassing.. I can't locate the log file in
"/var/log/elasticsearch" even though it's configured in the elasticsearch.yml

On Friday, November 22, 2013 2:39:42 PM UTC+2, Boaz Leskes
wrote:

Hi Daniel,

It looks like your nodes do not discover each other - the
number of nodes should be 3 for you.

I'd look at the logs for clues of why it fails.

Cheers,
Boaz

On Friday, November 22, 2013 1:35:05 PM UTC+1, DanielR wrote:

so..
I decided to create a three-node cluster (two data nodes
and one arbiter)
All three nodes are using ec2 discovery

but when I'm calling /_cluster/health on the master node I get the following response:

{

  • cluster_name: "SCluster",
  • status: "red",
  • timed_out: false,
  • number_of_nodes: 1,
  • number_of_data_nodes: 1,
  • active_primary_shards: 0,
  • active_shards: 0,
  • relocating_shards: 0,
  • initializing_shards: 0,
  • unassigned_shards: 48

}

I can detect two troubling parameters:

  • active_primary_shards: 0
  • AND
  • unassigned_shards: 48

another troubling issue is that the other data node and the
arbiter are responding with

  • status: "green"

  • which makes me believe that they are not really
    connected to the same cluster..

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CAKzwz0ri6rZp29Gcn60LORhvyWr9cfcwCDSnMuz8qqvOoAX3zQ%40mail.gmail.com.
For more options, visit https://groups.google.com/groups/opt_out.

Well.. My index has 8 shards and 2 replicas.

And the config looks like that:

cluster.name: SCluster
node.name: "SCNode1Master"

node.master: true
index.number_of_shards: 8
index.number_of_replicas: 2

path:
  logs: /var/log/elasticsearch

cloud:
  aws:
    access_key: XXXXXX
    secret_key: XXXXXXXXX

discovery:
  type: ec2
  ec2:
    groups: GROUP_NAME

cloud.node.auto_attributes: true
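For reference, 8 primaries with 2 replicas is 8 x (1 + 2) = 24 shards per index, and of those two numbers only the replica count can be changed after index creation. A sketch of the update-settings call, with host and index name as placeholders:

# number_of_replicas is a dynamic per-index setting; number_of_shards is not
curl -XPUT 'http://localhost:9200/myindex/_settings' -d '
{
  "index": { "number_of_replicas": 1 }
}'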

10X!

Sent from my iPhone.
Daniel
0546350772
http://il.linkedin.com/in/danielziv

On 27 Nov 2013, at 10:02, Boaz Leskes b.leskes@gmail.com wrote:

cluster.name: SCluster
node.name: "SCNode1Master"

node.master: true
index.number_of_shards: 8
index.number_of_replicas: 2

path:
  logs: /var/log/elasticsearch

cloud:
  aws:
    access_key: XXXXXX
    secret_key: XXXXXXXXX

discovery:
  type: ec2
  ec2:
    groups: GROUP_NAME

cloud.node.auto_attributes: true

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/3793539901095203409%40unknownmsgid.
For more options, visit https://groups.google.com/groups/opt_out.