Elasticsearch index getting reset


(aj) #1

Hi Guys,

I have a single-node Elasticsearch instance (version 0.90) running on a
single machine (8 GB RAM, dual-core CPU) with RHEL 5.6 and Java 1.6.0.

After indexing close to 2 million documents, it runs fine for a few
hours and then restarts on its own, wiping out the index in the process. I
then need to reindex all the documents.

Any ideas on why this happens? The maximum file descriptor limit is set to
32k, and the number of open file descriptors at any time does not even come
close, so it can't be that.

Here are the modifications I made to the default elasticsearch.yml file:

index.number_of_shards: 5
index.cache.field.type: soft
index.fielddata.cache: soft
index.cache.field.expire: 5m
indices.fielddata.cache.size: 10%
indices.fielddata.cache.expire: 5m
index.store.type: mmapfs
bootstrap.mlockall: true
discovery.zen.ping.multicast.enabled: false
action.disable_delete_all_indices: true
script.disable_dynamic: true

I use the elasticsearch service wrapper to start and stop the instance. In
the elasticsearch.conf file, I have set the heap size to 2 GB:

set.default.ES_HEAP_SIZE=2048

How do I go about diagnosing the issue?
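For reference, a first diagnostic pass might look like the sketch below. It assumes the node listens on the default localhost:9200; the inlined health response is only a sample of the shape the API returns, used here so the parsing step is runnable:

```shell
# Hypothetical first checks (default localhost:9200 assumed):
#   curl -s 'localhost:9200/_cluster/health?pretty'
#   curl -s 'localhost:9200/items_index/_count?pretty'
#   curl -s 'localhost:9200/items_index/_stats?pretty'
# Pulling the "status" field out of a sample health response:
resp='{"cluster_name":"elasticsearch","status":"yellow","number_of_nodes":1}'
status=$(printf '%s' "$resp" | sed 's/.*"status":"\([a-z]*\)".*/\1/')
echo "$status"   # a single node with 5 shards and default replicas shows yellow
```

Watching the doc count from _count right after a restart would show whether the index contents survived.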

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Lukáš Vlček) #2

Hi,

Did you check the ES log files?
I would also consider switching to Java 7.

Regards,
Lukáš
On 14 Nov 2013 at 0:59, "ajoy" ajoy.sojan@quartzy.com wrote:



(aj) #3

The ES log files don't have anything informative.

-- after it crapped out
[2013-11-14 11:22:12,597][DEBUG][action.search.type ] [WEB2] All
shards failed for phase: [query]

-- after I restarted
[2013-11-14 11:28:39,670][INFO ][cluster.metadata ] [WEB2]
[[items_index]] remove_mapping [item]
[2013-11-14 11:29:06,102][DEBUG][action.index ] [WEB2] Sending
mapping updated to master: index [items_index] type [item]
[2013-11-14 11:29:06,107][INFO ][cluster.metadata ] [WEB2]
[items_index] update_mapping [item] (dynamic)

Switching to Java 7 is the next option.

But this kind of behavior is worrisome.
It should either fail and crash with a dump, or stop responding.
It shouldn't just wipe out the index and pretend like nothing happened.

On Wednesday, November 13, 2013 11:02:37 PM UTC-8, Lukáš Vlček wrote:



(Ivan Brusic) #4

Are you sure the index was wiped out? Was it deleted or corrupted? Was the
service restarted or was another one started at the same time? It might be
the latter, which would cause another data sub-directory to be created.
How many directories do you have under ${path.data}/${cluster.name}/nodes?
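To make that check concrete, here is a sketch that builds a throwaway directory layout in place of the real path.data (the paths and cluster name are illustrative; substitute your actual data path and cluster name when running it for real):

```shell
# Build a fake layout with two node dirs, as would happen if a second
# instance had started against the same data path, then count them.
DATA=$(mktemp -d)
CLUSTER=elasticsearch
mkdir -p "$DATA/$CLUSTER/nodes/0" "$DATA/$CLUSTER/nodes/1"
count=$(ls -d "$DATA/$CLUSTER/nodes"/*/ | wc -l | tr -d ' ')
echo "$count"   # prints 2: more than one dir here means a second local node ran
rm -rf "$DATA"
```

On a healthy single-node setup you would expect exactly one directory, `0/`.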

Cheers,

Ivan

On Thu, Nov 14, 2013 at 9:08 AM, ajoy ajoy.sojan@quartzy.com wrote:



(aj) #5

I am not sure what is happening, but two things are clearly different:

  1. The docs count comes down from 2 million to fewer than 100.
  2. The store size comes down to around 100 KB (from 690 MB).

I am running the service as a console application, so I am sure it was not
stopped at any point.
I have one folder under ${path.data}/${cluster.name}/nodes: '0/'

I had added the following to the config file:

action.disable_delete_all_indices: true
script.disable_dynamic: true

In the most recent occurrence, I observed that the store size remained at
690 MB, but the docs count went down to 80. It is almost as if the index is
getting reset somehow.
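For what it's worth, watching the count endpoint alongside the index stats makes a drop like this easy to spot (localhost:9200 is assumed; the inlined response below is only a sample of the _count shape so the extraction step is runnable):

```shell
# Live checks (localhost:9200 assumed):
#   curl -s 'localhost:9200/items_index/_count'
#   curl -s 'localhost:9200/items_index/_stats'
# Extracting the count from a sample _count response:
resp='{"count":80,"_shards":{"total":5,"successful":5,"failed":0}}'
count=$(printf '%s' "$resp" | sed 's/{"count":\([0-9]*\).*/\1/')
echo "$count"
```

A store size that stays large while the count collapses would be consistent with the documents being removed logically rather than the files being deleted.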

On Friday, November 15, 2013 7:41:53 AM UTC-8, Ivan Brusic wrote:


