Understanding the gateway in an EC2 environment

Hi

I am ready to deploy an Elasticsearch cluster into the prod environment.
Before doing that, I want to make sure I really don't need the S3 gateway.

I set up 6 Elasticsearch instances with 12 shards and 1 replica. When one
instance was terminated by an EC2 system check, the data on its local
storage was gone, but search results were fine because of the replicas.
After the new Elasticsearch instance warmed up, I saw the following
cluster status:

"active_shards" : 24,
"relocating_shards" : 2,

I didn't find any data on the new instance, so how can all 24 shards be active? Also, where are the relocating shards? When will the replica shards be recovered onto the new instance?

Do I really not need the S3 gateway?

Thank you

Best, Jae

--

Hi Jae,

The cluster will only re-allocate two shards (configurable) at a time.
Which nodes are appearing in the cluster state? Unicast or multicast?
I find the head plugin very useful to visualize the distribution of
shards.
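
If recovery feels slow, that concurrency is a dynamic cluster setting; a
sketch using the cluster update-settings API (the value 4 is just an
example, and localhost:9200 is assumed):

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient" : {
    "cluster.routing.allocation.cluster_concurrent_rebalance" : 4
  }
}'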

You do not need the S3 gateway. In fact, it is not recommended.
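
The local gateway (the default) persists each node's shards and its copy
of the cluster state to the node's own data path, so a replacement node
recovers over the network from its peers. A minimal elasticsearch.yml
sketch; the recover_after/expected values are just illustrative for a
6-node cluster:

gateway.type: local
gateway.recover_after_nodes: 4
gateway.expected_nodes: 6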

Cheers,

Ivan


--

The reason you see 24 active shards even though a node went down is that Elasticsearch automatically re-allocates the shards that lived on the failed node across the rest of the cluster.
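
You can watch that recovery converge by polling cluster health until
relocating_shards and initializing_shards drop back to 0; a simple loop,
assuming a node on localhost:9200:

while true; do
  curl -s 'http://localhost:9200/_cluster/health?pretty=true' \
    | grep -E '"(status|active_shards|relocating_shards|initializing_shards)"'
  sleep 5
done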


--

Thank you so much!

Elasticsearch is awesome!


--