New install of elasticsearch (on Lion), status red, unassigned_shards, nothing in the logs

Hi -- I have a new install of elasticsearch on my dev machine after
upgrading to OS X Lion.
Now, when I try to start elasticsearch and check the cluster health, I get
this:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 7,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 6
}

It stays at these values indefinitely. I've looked in the logs, but there's
no hint of any errors in there.
Can anyone offer any advice as to how to investigate this further?
It's the basic elasticsearch install started up with bin/elasticsearch, so
it should be working, as far as I can tell.
I guess this could be a Java issue, with my machine having been upgraded to
Lion? But I'm rather out of my depth with Java.
Incidentally, I tried my previous install of ES (0.17.5) to see if that
still worked; it also had unassigned shards, but its status was typically
yellow.
Thanks,
Doug.
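
A minimal sketch of how a health report like the one above is usually
fetched on a 0.18-era install, assuming the default HTTP port 9200 on
localhost:

$ curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'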

Is that only your machine? There seem to be 2 nodes there, is that what
there should be? It's not a Java issue; it seems more like a change in the
data location, or maybe something got deleted? Need more info on the number
of nodes in the cluster, and please gist the cluster state.
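
A minimal sketch of pulling the cluster state to gist, again assuming the
default port 9200; the nodes section of the output should show the address
of each node in the cluster:

$ curl -XGET 'http://localhost:9200/_cluster/state?pretty=true'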

Hi -- it's a new install of ES, configured to use the data and log
directories (new for this install) /var/data/elasticsearch-0.18.4/ and
/var/log/elasticsearch-0.18.4/.
It also reported this same error when I first uncompressed the tarball and
ran it with the defaults.
I have had another version of ES on this machine (under Snow Leopard),
which used /var/data/elasticsearch/ and /var/log/elasticsearch/, but I
thought that the installs were self-contained, and therefore shouldn't
interfere with each other?
Maybe I could try a totally fresh install -- could anyone advise me on what
files to delete to achieve this? I thought it would just be the
elasticsearch directory, and any that I'd specified separately.
Thanks,
Doug.

Hi -- I deleted all the data and log files and restarted my machine, and we
seem to be green now.
I'm re-indexing all my items, and it seems to be running slower than it
did, but I'll keep an eye on that to make sure it really is.
Thanks for the sanity check! :)
Doug.

Something was still strange in your setup, since it reported that there
were two instances running, and based on your feedback I think you expect
only one to run? Just do ps -ef | grep elasticsearch and check how many
instances you have...
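
For example, a quick way to count the running instances without also
matching the grep process itself (the bracket is just a common shell idiom,
nothing elasticsearch-specific):

$ ps -ef | grep '[e]lasticsearch' | wc -l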

Yeah, something's up -- it's being very slow.
I get this with ps:

$ ps aux | grep elastic
douglivesey 340 0.1 3.8 3765632 318788 s000 S+ 12:45pm
1:05.07 /usr/bin/java -Xms256m -Xmx1g -Xss128k -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
-Delasticsearch -Des.path.home=/Users/douglivesey/bin/elasticsearch-0.18.4
-Des-foreground=yes -cp
:/Users/douglivesey/bin/elasticsearch-0.18.4/lib/:/Users/douglivesey/bin/elasticsearch-0.18.4/lib/sigar/
org.elasticsearch.bootstrap.Elasticsearch

And the health report is:

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 10,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}

Which is green, but (as you say) seems to think that I have 2 nodes.
Is there something I can do in the config to stop this? I've not explicitly
set any node counts.

Check the logs, see where the second node is coming from (it will show you
the IP for it). Maybe someone else is running ES in your network. You can
disable multicast discovery in the configuration.
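
A minimal sketch of what disabling multicast might look like in
config/elasticsearch.yml on 0.18.x; the unicast host list is an assumption
for a single-machine setup, and the node needs a restart to pick the
change up:

$ cat >> config/elasticsearch.yml <<'EOF'
# assumption: keep discovery local to this one machine
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
EOF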

Hi -- only just spotted this reply, sorry -- I was at choir practice! :)
Now I'm at home, and the only computer around is mine. I've just started
the service with the -f option, and this is the output I get:

[2011-11-29 00:19:41,576][INFO ][node            ] [Storm, Susan] {0.18.4}[1806]: initializing ...
[2011-11-29 00:19:41,589][INFO ][plugins         ] [Storm, Susan] loaded , sites
[2011-11-29 00:19:44,291][INFO ][node            ] [Storm, Susan] {0.18.4}[1806]: initialized
[2011-11-29 00:19:44,303][INFO ][node            ] [Storm, Susan] {0.18.4}[1806]: starting ...
[2011-11-29 00:19:44,413][INFO ][transport       ] [Storm, Susan] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/192.168.0.2:9300]}
[2011-11-29 00:19:47,512][INFO ][cluster.service ] [Storm, Susan] new_master [Storm, Susan][v8UXExqiQxesxSoj8StQTg][inet[/192.168.0.2:9300]], reason: zen-disco-join (elected_as_master)
[2011-11-29 00:19:47,565][INFO ][discovery       ] [Storm, Susan] elasticsearch/v8UXExqiQxesxSoj8StQTg
[2011-11-29 00:19:47,674][INFO ][http            ] [Storm, Susan] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/192.168.0.2:9200]}
[2011-11-29 00:19:47,675][INFO ][node            ] [Storm, Susan] {0.18.4}[1806]: started
[2011-11-29 00:19:48,425][INFO ][gateway         ] [Storm, Susan] recovered [2] indices into cluster_state

When I check the health of my cluster, I get this:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 10
}

Could it be expecting another server there?
I'm afraid I'm quite at a loss as to how to investigate this further, so I
really appreciate your help, thanks.
Doug.

Now you have a single node, not two (at home), so I think earlier you were
joining another node started on the network. The reason you have 10
unassigned shards and a yellow state is that your indices were created with
number_of_replicas set to 1; since you have just a single node, it makes no
sense to allocate both shard copies on the same node.
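
A minimal sketch of dropping replicas to zero for an existing index on a
single-node box, assuming the default port; "my_index" is a hypothetical
index name, not one from this thread:

$ curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index" : { "number_of_replicas" : 0 }
}'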

Right, so is there something I should change in my config?
Maybe get rid of the replicas? I don't have a network at home -- just my
computer and the wifi router, so I guess that's why shards aren't getting
assigned.

So I went through the config file and set the number of shards to 1 and the
number of replicas to 0, and that seems to have fixed it.
I use quite a large local index, so I may set the number of shards back up
to 10 or so, but for now that's working nicely.
I also changed the name of the node to "Susan Sarandon" because of that
vampire film she was in with Catherine Deneuve, but I doubt that's had any
effect on this issue. ;)
Cheers for all your help,
Doug.
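
A minimal sketch of what those config-file defaults might look like in
config/elasticsearch.yml; these index.* values only apply to indices
created after the change, and the node.name line is included only because
the post mentions renaming the node:

$ cat >> config/elasticsearch.yml <<'EOF'
# defaults applied to newly created indices on this single-node dev box
index.number_of_shards: 1
index.number_of_replicas: 0
# fix the node name instead of a randomly assigned one
node.name: "Susan Sarandon"
EOF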
