How to resolve Elasticsearch status red

Hi all,

I got a "No Active Record" exception on some indexes, while others work.
The health check shows status=red, and there are unassigned_shards.

{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 54,
"active_shards" : 54,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 66
}

What should I do to correct this situation? Could someone suggest some
recommended reading?

Thank you.

Yuhan
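(Editor's aside, not from the thread: a health dump like the one above can be checked mechanically. A minimal POSIX-shell sketch; the JSON is inlined here as sample data, where in practice it would come from `curl -s localhost:9200/_cluster/health` on an assumed host/port.)

```shell
# Sample health output, as posted above; normally fetched from the cluster.
health='{"cluster_name" : "elasticsearch","status" : "red","active_primary_shards" : 54,"unassigned_shards" : 66}'

# Extract a top-level scalar field from the one-line JSON (crude but dependency-free).
field() {
  echo "$health" | sed -n "s/.*\"$1\"[ ]*:[ ]*\"\{0,1\}\([^,\"}]*\)\"\{0,1\}.*/\1/p"
}

status=$(field status)
unassigned=$(field unassigned_shards)
echo "status=$status unassigned=$unassigned"
if [ "$status" = "red" ]; then
  # Red means at least one PRIMARY shard is unassigned; requests routed
  # to it will fail, which matches the exceptions described in the thread.
  echo "at least one primary shard is unassigned"
fi
```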


You should check the master node's log (this node's, since there's only one)
for exceptions or other information about why the shards are not being assigned.

-Drew
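(A sketch of Drew's suggestion: scan the node's log for allocation failures. The log path and the sample lines below are assumptions for illustration, not taken from the thread; a typical package install logs to /var/log/elasticsearch/<cluster_name>.log.)

```shell
# Write a few illustrative log lines to a temp file; in practice you would
# point grep at the real log file instead.
log=$(mktemp)
cat > "$log" <<'EOF'
[2012-08-05 19:20:01,123][WARN ][cluster.action.shard] [Node1] sending failed shard for [twitter][2]
[2012-08-05 19:20:01,456][WARN ][indices.cluster] [Node1] [twitter][2] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [twitter][2] recovery failed
EOF

# Surface anything that might explain why shards stay unassigned.
matches=$(grep -icE 'exception|failed' "$log")
grep -iE 'exception|failed' "$log"
rm -f "$log"
```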

My situation is similar; I have not been able to resolve this or find any
solution online yet.
My cluster health:
{
"cluster_name" : "elasticsearch",
"status" : "red",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 22,
"active_shards" : 22,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 58
}
I can do searches, but if I try an XPUT it fails. This did work at one
time.

curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elastic Search"
}'
{"error":"UnavailableShardsException[[twitter][2] [2] shardIt, [0] active :
Timeout waiting for [1m], request: index {[twitter][tweet][1], source[{\n
\"user\" : \"kimchy\",\n \"post_date\" : \"2009-11-15T14:12:12\",\n
\"message\" : \"trying out Elastic Search\"\n}]}]","status":503}

What also seems odd is that my failed PUTs don't show up in the log, but
when I shut down and start up the server, those activities do show up.
(I deleted the old log to start over and try everything again.)


My opinion is that you created some indexes, some of them with no replicas.
You started more than one node on your LAN, then shut one node down.

ES cannot report green or yellow health because some of your shards (from the indexes with no replicas) are no longer visible to it.

Is that what happened?

David
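(David's theory can be sanity-checked with arithmetic; a sketch, where the only inputs taken from the thread are browe's health numbers, and the one-replica-per-shard figure is an assumption matching the era's defaults:)

```shell
# From browe's health output:
active_primaries=22
unassigned=58

# On a one-node cluster every replica is necessarily unassigned, but that
# alone only makes the cluster YELLOW. Red means primaries themselves are gone.
# If each active primary had exactly one (unassignable) replica, that accounts for:
replica_only=$active_primaries

# Anything beyond that must be unassigned primaries (and their replicas),
# which is what turns the cluster red:
unexplained=$((unassigned - replica_only))
echo "unassigned shards not explained by replicas of active primaries: $unexplained"
```

So 36 of the 58 unassigned shards cannot be explained by replicas alone, consistent with David's point that primary data is missing.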


I'm not sure, because I really just started ES on one server with the default
configuration and began putting test data into it. I haven't tried to add
replicas or change the configuration. I have created some indexes, including
some I would like to delete, but I can't PUT any commands, even to delete
data. I don't think there has ever been more than one node, and I don't even
know how to start more than one; I only restart the ES server using the init
commands, as a service. I have 58 unassigned shards but don't know how to
correct them. I don't really need any of the data either; I just need to be
green again so I can start over. But I would like to figure out what happened
so I don't do it again when I do need the data.


So, stop your node and delete the data dir.
You will restart with a clean node.

Are you sure that nobody else started a node on your LAN?

You should at least change the cluster name if you have coworkers.

David
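(David's "start clean" procedure, written out as a sketch. The paths and service name are assumptions, as a package install commonly uses /var/lib/elasticsearch for data. The destructive commands are only echoed here so they can be reviewed first; removing the data dir really does destroy the indexes, so only do this when the data is disposable.)

```shell
# Assumed locations; adjust to your install before removing the echo step.
DATA_DIR=/var/lib/elasticsearch

cmds="service elasticsearch stop
rm -rf $DATA_DIR
service elasticsearch start"
echo "$cmds"

# And, per David's advice, give the cluster a non-default name so a
# coworker's node with the default name cannot join by accident.
# (Written to a temp file here; in practice this line goes in elasticsearch.yml.)
sample=$(mktemp)
printf 'cluster.name: myteam-es-dev\n' > "$sample"
cname=$(cat "$sample")
echo "$cname"
rm -f "$sample"
```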


Yeah, deleting the data made it green, so I can start over. But I certainly
couldn't do that if I needed the data the next time this happens. It's a
little scary to keep moving forward if that is the way to fix issues. Thank
you for the help; I don't think I really learned anything, though.


Let me say that I have never run into such an issue since I started using ES in production a year ago.


David,
The fact that you didn't encounter a certain problem doesn't mean it's not
there. ES definitely has a problem with not loading all the shards after a
cluster/server restart; restarting enough times, in the right order, usually
resolves it. I've already reported it multiple times. Here is the most
recent report:
https://groups.google.com/forum/?fromgroups#!searchin/elasticsearch/from:eran/elasticsearch/jwlyJQ7gg4s/0v-0e0hv7PoJ
By the way, it took probably 30 restarts to get the above cluster to load
all the shards.

-eran


Hi eran,

I cannot visit your link; it redirects to the forum index page. Can you
check it? I'd like to read your reported issue.

As for the "right order required for cluster restart", I think I hit a
similar problem before:

https://groups.google.com/d/msg/elasticsearch/JNh39-Ccrjo/UZYDgRint00J

In my situation, I also just restart each node one by one and hope the
cluster recovers by itself, magically.

Thanks,
Wing
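(Wing's one-node-at-a-time restart can be sketched as a loop with a recovery wait between nodes. The hostnames are made up, and the commands are echoed rather than executed so the sketch is reviewable; `wait_for_status` on the health endpoint is what makes the script block until the cluster has recovered enough to touch the next node.)

```shell
# Rolling restart sketch: one node at a time, waiting for recovery in between.
out=$(for node in es-node1 es-node2 es-node3; do
  echo "ssh $node service elasticsearch restart"
  echo "curl -s 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=5m'"
done)
echo "$out"
restarts=$(echo "$out" | grep -c "restart")
```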


I know this is an old thread, but I have to jump in:

I just had this happen on my production servers: the status was red after a system shutdown on one of my nodes, and I ended up being forced to delete 6 GB of data to get the status to turn green again. Very frustrating :(


You shouldn't need to do that, but given that you haven't provided any
details about your cluster or the problem you saw, it's impossible to give
advice.

clint

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

I just stumbled on the same issue. I am evaluating ES and currently using it
to index logs.
I started a new node by mistake with the same cluster name, so it formed a
cluster. I killed the new node, but now the original node, which indexes the
logs, has status=red.

Is there a way I can fix this without deleting all that data?

Thanks,
-Utkarsh


Can you provide details about how you killed your node? If you killed the
process with SIGKILL, you may have damaged your data. You don't give much
information about the state of your logs, your files, or your cluster, so
it's hard to give advice.

Jörg


I killed the extra node using SIGKILL.
I am running just one Elasticsearch node, for logstash. What kind of
information would help?

I got this error in the logstash error log:

{:message=>"Failed to index an event, will retry",
 :exception=>org.elasticsearch.action.UnavailableShardsException:
 [logstash-2013.03.29][0] [2] shardIt, [0] active : Timeout waiting for [1m],
 request: index {...request..}}

My original ES node has a bunch of indexes like logstash-2013.03.25, and the
new node created an index (say, myindex) which I don't really need; I was
just playing around with the other node. I didn't expect it to automatically
discover the first node (which is cool!) just because it had the same
default cluster name.

Thanks,
-Utkarsh


This message is not from the Elasticsearch cluster, it's from logstash I
assume.

Jörg

Am 02.04.13 22:26, schrieb Utkarsh Sengar:

I got this error in logstash error log:
{:message=>"Failed to index an event, will retry",
:exception=>org.elasticsearch.action.UnavailableShardsException:
[logstash-2013.03.29][0] [2] shardIt, [0] active : Timeout waiting for
[1m],
request: index {...request..}


Hi,

Your case sounds similar to some I faced several times.

By mistake started a new node, so at a certain instance there is one extra
node, and ES automatically starts moving and balancing data between the
nodes.
By the time I notice the extra node, some shards have already moved there,
and just shutting down the node might result in some shards being not
available, thus making the cluster red.

Two solutions to the problem (which I follow):

  1. Keep the new node up; increase the replica count so each shard has a copy
    on at least one node other than this extra one; now shut down this node,
    then readjust the replica count.
  2. Bring the node up after assigning a certain tag value to it; issue a
    command to exclude shards from this tag; in some time the shards will move
    off this node; then shut the node down.

Not sure whether the same problem has occurred in your case, just thought
of sharing in case it helps.
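The two options above can be sketched with the settings APIs. This is only a sketch: "myindex" and "extra-node" are placeholder names, and option 2 assumes node-name based allocation filtering is available in your version.

```shell
# Option 1: raise the replica count so every shard gets a copy on another node
# ("myindex" is a placeholder; adjust or repeat for each index)
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index" : { "number_of_replicas" : 1 }
}'

# Option 2: tell the cluster to move shards off the extra node
# ("extra-node" is a placeholder for that node's name)
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._name" : "extra-node"
  }
}'
```

In either case, wait for the cluster to report green again before shutting the extra node down.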

  - Sujoy.

On Wednesday, April 3, 2013 12:43:43 AM UTC+5:30, utkar...@gmail.com wrote:

I just stumbled on the same issue. I am evaluating and currently using ES
to index logs.
I started a new node by mistake with the same name so it formed a cluster.
I killed the new node but now the original node which indexes logs has
status=red

Is there a way I can fix this without deleting all that data?

Thanks,
-Utkarsh

On Monday, March 18, 2013 2:20:42 AM UTC-7, Clinton Gormley wrote:

On Sun, 2013-03-17 at 14:39 -0700, inZania wrote:

I know this is an old thread but I have to jump in:

I just had this happen on my production servers, where the status was red
due to a system shutdown on one of my nodes, and I ended up being forced to
delete 6GB of data in order to get the status to turn green again. Very
frustrating :(

You shouldn't need to do this, but given that you haven't provided any
details about your cluster, or the problem that you saw, it's impossible
to provide advice

clint


SO... there is a clean way to resolve this. Although I must say the
Elasticsearch documentation is very, very confusing (all these buzzwords
like cluster and zen discovery boggle my mind!)

Now, suppose you have 2 instances, one on port 9200 and the other on 9201,
and you want ALL the shards to be on 9200.

First, run this command to disable allocation on the 9201 instance. You can
change persistent to transient if you don't want this change to be
permanent. I'd keep it persistent so this doesn't ever happen again.

curl -XPUT localhost:9201/_cluster/settings -d '{
"persistent" : {
"cluster.routing.allocation.disable_allocation" : true
}
}'

Next, run the command to MOVE all the shards on the 9201 instance to 9200.

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
"commands" : [ {
"move" : {
"index" : "<index name>", "shard" : <shard number>,
"from_node" : "<ID of 9201 node>", "to_node" : "<ID of 9200 node>"
}
} ]
}'

You need to run this command for every shard on the 9201 instance (the one
you want to get rid of).

That's it!
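To fill in the from_node/to_node IDs for the move commands above, the nodes info API can be consulted; a minimal sketch:

```shell
# Node IDs are the keys under "nodes" in this response; match each ID to an
# instance by its transport_address / http_address (9200 vs 9201)
curl -s 'localhost:9200/_nodes?pretty'
```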

On Wednesday, April 3, 2013 2:36:34 AM UTC-4, Sujoy Sett wrote:

Hi,

Your case sounds similar to some I faced several times.

By mistake started a new node, so at a certain instance there is one extra
node, and ES automatically starts moving and balancing data between the
nodes.
By the time I notice the extra node, some shards have already moved there,
and just shutting down the node might result in some shards being not
available, thus making the cluster red.

Two solutions to the problem (which I follow):

  1. Keep the new node up; increase the replica count so each shard has a copy
    on at least one node other than this extra one; now shut down this node,
    then readjust the replica count.
  2. Bring the node up after assigning a certain tag value to it; issue a
    command to exclude shards from this tag; in some time the shards will move
    off this node; then shut the node down.

Not sure whether the same problem has occurred in your case, just thought
of sharing in case it helps.

  - Sujoy.

On Wednesday, April 3, 2013 12:43:43 AM UTC+5:30, utkar...@gmail.com wrote:

I just stumbled on the same issue. I am evaluating and currently using ES
to index logs.
I started a new node by mistake with the same name so it formed a
cluster. I killed the new node but now the original node which indexes logs
has status=red

Is there a way I can fix this without deleting all that data?

Thanks,
-Utkarsh

On Monday, March 18, 2013 2:20:42 AM UTC-7, Clinton Gormley wrote:

On Sun, 2013-03-17 at 14:39 -0700, inZania wrote:

I know this is an old thread but I have to jump in:

I just had this happen on my production servers, where the status was red
due to a system shutdown on one of my nodes, and I ended up being forced to
delete 6GB of data in order to get the status to turn green again. Very
frustrating :(

You shouldn't need to do this, but given that you haven't provided any
details about your cluster, or the problem that you saw, it's impossible
to provide advice

clint


Sujoy - if you are bringing up only 1 new node and you have primary and replica shards configured, bringing down one node should not turn the cluster state RED.

You may need to configure your ES nodes not to store a primary and its replica on the same host, with the following setting in the yml file. This may be the issue in your case:
cluster.routing.allocation.same_shard.host: true
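After applying the setting and restarting the nodes, it's worth re-checking cluster health; a quick sketch:

```shell
# Status should move from red to yellow/green as shards get assigned
curl -s 'localhost:9200/_cluster/health?pretty'
```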