I have a single ES AWS instance that I've been logging to as part of a
standard ELK stack. It has been running for about 4 weeks. I had
replication disabled.
Today, I decided to stop and start the instance so that I could increase its
memory size. When it came back up, only about half of the 5 shards for each
index were assigned... in some cases two, and in some cases three.
After fooling around a bunch, I looked on the disk, where I found that each
of my indexes was stored within two high-level directories,
'/data1/elasticsearch/es-vpc3/nodes/0' and
'/data1/elasticsearch/es-vpc3/nodes/1'. It is the shards stored in the '0'
directory that were being assigned. The shards stored in '1' were not. This
indicated to me that I must have been running two copies of ES on the one
AWS instance without knowing it. So I figured, 'what the heck', and I started a
second copy of ES. Sure enough, my other shards were assigned to the second
instance!
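A quick way to see how many node processes the cluster actually recognizes, and which shards are unassigned, is the _cat API. Here is a minimal sketch in Python; it assumes ES is reachable on localhost:9200, so adjust the host and port for your setup:

    # check_cluster.py: minimal sketch; assumes ES listens on localhost:9200
    import urllib.request

    BASE = "http://localhost:9200"

    def cat(endpoint):
        # The _cat APIs return plain-text tables that are easy to eyeball.
        with urllib.request.urlopen(BASE + "/_cat/" + endpoint + "?v") as resp:
            return resp.read().decode()

    print(cat("nodes"))   # one line per node process the cluster knows about
    print(cat("shards"))  # per-shard state (STARTED vs UNASSIGNED) and its node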
Here's the problem though. I've been using the HEAD plugin to view my
cluster. Prior to the reboot, the display represented the cluster as a
single ES instance, with all of the shards shown together in a single
row. Now I get two rows, one for each instance of ES, and the shards in
each row match what I saw in the corresponding "nodes/N" directory.
So something is clearly different than it was before. It appears that,
prior to the reboot, I was not running two distinct instances of ES. So
what was I doing? Why did my indexes get split across two "nodes/N"
directories, and why upon reboot did only the "nodes/0" shards get assigned?
Can someone tell me what is going on here... what was different about my
setup before and after the reboot? Surely just giving the machine more
memory couldn't have caused this, right?
When shards are evenly distributed, they won't move again.
What you should do is start the two nodes, set replicas to 1, then kill node 2 and set replicas back to 0.
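In case it helps, the replica changes described above are just index settings updates. A rough sketch of the sequence in Python (the host and the "logstash-*" index pattern are placeholders; wait for the cluster to go green after step 1 before stopping node 2):

    # set_replicas.py: sketch of the recovery sequence described above.
    # Assumes ES on localhost:9200; "logstash-*" is a placeholder index pattern.
    import json
    import urllib.request

    BASE = "http://localhost:9200"

    def set_replicas(index_pattern, count):
        # PUT /<index>/_settings with index.number_of_replicas
        body = json.dumps({"index": {"number_of_replicas": count}}).encode()
        req = urllib.request.Request(
            BASE + "/" + index_pattern + "/_settings",
            data=body,
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode())

    # 1. With both nodes running, add a replica so every shard exists on both nodes.
    set_replicas("logstash-*", 1)
    # 2. After the cluster goes green, stop node 2, then drop back to zero replicas.
    # set_replicas("logstash-*", 0)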
I don't understand what you're saying, or maybe you don't understand what happened. Before the restart, my shards weren't distributed at all. All 5 shards for each index were on one node, and replication was set to 0. I restarted the instance, and all of that was still true, but half of the shards got assigned back to the one and only node and the other half remained unassigned. I'd like to understand why a reboot led to a different state, a state that left the system unusable.
I played with the reroute API a bit. That doesn’t help because I lose the shard data if I force the unassigned shards back onto the one node…they end up empty if I do that.
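For context, a forced allocation through the reroute API looks roughly like the sketch below (ES 1.x-style "allocate" command; the index name, shard number, and node name are placeholders). Forcing an unassigned primary with allow_primary starts it empty on the target node, which would explain the data loss described above.

    # reroute_sketch.py: sketch of a forced allocation via the reroute API.
    # ES 1.x-style "allocate" command; all names below are placeholders.
    import json
    import urllib.request

    BASE = "http://localhost:9200"

    command = {
        "commands": [{
            "allocate": {
                "index": "my-index",  # placeholder index name
                "shard": 2,           # placeholder shard number
                "node": "node-0",     # placeholder node name
                # allow_primary forces an UNASSIGNED primary to start empty on
                # this node; any data the shard held elsewhere is not recovered.
                "allow_primary": True,
            }
        }]
    }

    req = urllib.request.Request(
        BASE + "/_cluster/reroute",
        data=json.dumps(command).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())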
Maybe you’re only telling me how to recover given that creating a second node on the same instance has gotten all my data back online. If so, thanks for that. I’ll give what you’re saying a try to see if that helps. My bigger concern is why I have to go through this, mainly because I’m worried it will be required every time I restart this instance. I need to understand the issue here.
Steve
I don't know if this helps at all, but the latest index that got created for the new day, now that I'm running two nodes, got all of its shards assigned to just one of the two nodes.
Steve