Upgrading ES from 1.7.1 to 2.x Issue

Hi all,

yesterday I started a rather hasty upgrade from ES 1.7.1 to 2.1.
As required, I did a full cluster restart.

During bootup ES threw some WARN messages telling me that no master is known for my indices, plus other errors that made it clear I went into this major release upgrade really unprepared.
As I was not able to fix it, I rolled back to 1.7.1.

Unfortunately ES has now lost all my indices. The data is still present on the file system.
So how can I get this data back into my ES?

Any ideas or suggestions?

Cheers,

Ricardo

PS: I know what I've done was extremely stupid; I just hope someone knows some steps to retrieve the data.

One of the warnings thrown:

[2015-11-30 14:10:12,463][INFO ][env ] [node-name] using [1] data paths, mounts [[/data/elasticsearch (/dev/mapper/vg_1-datavol)]], net usable_space [350.2gb], net total_space [1.4tb], types [ext4]
[2015-11-30 14:10:15,592][ERROR][gateway.local.state.shards] [node-name] failed to read local state (started shards), exiting...
org.elasticsearch.ElasticsearchException: unexpected field in shard state [index_uuid] ......

You cannot roll back once you have started the upgrade: 2.x rewrites the on-disk state, and that "unexpected field in shard state [index_uuid]" error in your log is 1.7.1 failing to read it.

You should try to make it work with 2.1, and if that is not possible I'm afraid you will have to reindex.

But maybe you took a snapshot before starting the upgrade process, so you could restore from that?
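
For reference, if a snapshot repository had been registered before the upgrade, a restore would look roughly like this (repository and snapshot names here are just placeholders, not from your setup):

curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_before_upgrade/_restore?pretty'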

Hi, thanks for your fast reply.

Yeah.. as I said, I did a really poor job when "planning" the upgrade, so no snapshot.

I just split a single node out of the cluster to try the reindex.

There is still the issue that even ES 2.1 does not see any indices.
Is there any possibility to get the data back into ES, repair those broken indices, or anything else?

thanks in advance,

Ricardo

But what is the issue with 2.1? Do you have logs?

The issue is that there are no indices inside ES.
I have about 60 indices containing 1.3 TB of data on my file system which do not show up in any way.

Currently I assume that the data is simply wrecked and I'll get my head cut off in the next few hours ^^'
As the data does not show up, there are no logs for those indices.
It's just like "Hey, startup is fine but I've got no indices, feed me!"

cheers,

Ricardo

Did you check the cluster name? Maybe elasticsearch is looking in another dir here?
Maybe path.home is not set correctly?

the settings are:

path.data: /data/elasticsearch

and that's exactly where the data is placed.

cluster.name: adviqo-restore

and below /data/elasticsearch is the directory adviqo-restore with the correct file permissions
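
The index folders are also still there under the node directory; listing them (path as in the default single-node layout):

ls /data/elasticsearch/adviqo-restore/nodes/0/indices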

And when you start elasticsearch, you don't see anything in the logs? Maybe you could paste them here?

Also, maybe change the logging level to DEBUG?
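
For example, in config/logging.yml (assuming the default 2.x layout) you can raise the root logger level by changing:

es.logger.level: DEBUG

and then restarting the node.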

After changing the logging to DEBUG:

[index_1] dangling index directory detected, but no state found

Is it the index you are missing? Can you print your cluster state?
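
For context, that message generally means elasticsearch can see an index directory on disk but cannot read any metadata for it; in the 1.x/2.x on-disk layout that metadata lives in the _state folder of the index, e.g. (using index_1 from your log line):

ls /data/elasticsearch/adviqo-restore/nodes/0/indices/index_1/_state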

curl 'localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "adviqo-restore",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 4,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 50.0
}

I'm missing about 70 indices; every one of them produces this log entry :frowning:

I'm kind of clueless.

The indices that do show up are from Marvel and Kibana, which were newly created.

I meant curl -XGET 'http://localhost:9200/_cluster/state?pretty'

Can you print the first 30 lines of your logs (DEBUG)?
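
For example (adjust the path to your log directory; by default the file is named after the cluster):

head -n 30 /path/to/logs/adviqo-restore.log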

{
"cluster_name" : "adviqo-restore",
"version" : 6,
"state_uuid" : "pq6ykO_ITjCz3jaxg1XbGw",
"master_node" : "3PRpIe82SmmOqbhb84FexA",
"blocks" : { },
"nodes" : {
"3PRpIe82SmmOqbhb84FexA" : {
"name" : "Matsu'o Tsurayaba",
"transport_address" : "127.0.0.1:9300",
"attributes" : { }
}
}, ......

==> adviqo-restore.log <==
[2015-12-01 12:54:41,036][DEBUG][index.cache.bitset ] [Adri Nital] [.marvel-es-2015.12.01] clearing all bitsets because [close]
[2015-12-01 12:54:41,036][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] clearing index field data (reason [shutdown])
[2015-12-01 12:54:41,036][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] closing analysis service (reason [shutdown])
[2015-12-01 12:54:41,036][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] closing mapper service (reason [shutdown])
[2015-12-01 12:54:41,037][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] closing index query parser service (reason [shutdown])
[2015-12-01 12:54:41,038][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] closing index service (reason [shutdown])
[2015-12-01 12:54:41,038][DEBUG][indices ] [Adri Nital] [.marvel-es-2015.12.01] closed... (reason [shutdown])
[2015-12-01 12:54:41,038][INFO ][node ] [Adri Nital] stopped
[2015-12-01 12:54:41,039][INFO ][node ] [Adri Nital] closing ...
[2015-12-01 12:54:41,049][INFO ][node ] [Adri Nital] closed
[2015-12-01 12:54:42,519][DEBUG][bootstrap ] unable to install syscall filter
java.lang.UnsupportedOperationException: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:344)
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:546)
at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:183)
at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99)
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:99)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

[2015-12-01 12:54:42,520][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
[2015-12-01 12:54:42,524][DEBUG][bootstrap                ] java.class.path: /opt/elasticsearch/lib/elasticsearch-2.1.0.jar:/opt/elasticsearch/lib/jackson-dataformat-yaml-2.6.2.jar:/opt/elasticsearch/lib/hppc-0.7.1.jar:/opt/elasticsearch/lib/lucene-analyzers-common-5.3.1.jar:/opt/elasticsearch/lib/t-digest-3.0.jar:/opt/elasticsearch/lib/log4j-1.2.17.jar:/opt/elasticsearch/lib/jts-1.13.jar:/opt/elasticsearch/lib/lucene-highlighter-5.3.1.jar:/opt/elasticsearch/lib/joda-time-2.8.2.jar:/opt/elasticsearch/lib/lucene-memory-5.3.1.jar:/opt/elasticsearch/lib/lucene-backward-codecs-5.3.1.jar:/opt/elasticsearch/lib/lucene-core-5.3.1.jar:/opt/elasticsearch/lib/lucene-expressions-5.3.1.jar:/opt/elasticsearch/lib/jackson-dataformat-smile-2.6.2.jar:/opt/elasticsearch/lib/joda-convert-1.2.jar:/opt/elasticsearch/lib/antlr-runtime-3.5.jar:/opt/elasticsearch/lib/groovy-all-2.4.4-indy.jar:/opt/elasticsearch/lib/lucene-spatial3d-5.3.1.jar:/opt/elasticsearch/lib/spatial4j-0.5.jar:/opt/elasticsearch/lib/jna-4.1.0.jar:/opt/elasticsearch/lib/jackson-dataformat-cbor-2.6.2.jar:/opt/elasticsearch/lib/snakeyaml-1.15.jar:/opt/elasticsearch/lib/lucene-spatial-5.3.1.jar:/opt/elasticsearch/lib/lucene-queryparser-5.3.1.jar:/opt/elasticsearch/lib/lucene-grouping-5.3.1.jar:/opt/elasticsearch/lib/jsr166e-1.1.0.jar:/opt/elasticsearch/lib/asm-commons-4.1.jar:/opt/elasticsearch/lib/commons-cli-1.3.1.jar:/opt/elasticsearch/lib/guava-18.0.jar:/opt/elasticsearch/lib/jackson-core-2.6.2.jar:/opt/elasticsearch/lib/compiler-0.8.13.jar:/opt/elasticsearch/lib/netty-3.10.5.Final.jar:/opt/elasticsearch/lib/HdrHistogram-2.1.6.jar:/opt/elasticsearch/lib/compress-lzf-1.0.2.jar:/opt/elasticsearch/lib/lucene-join-5.3.1.jar:/opt/elasticsearch/lib/lucene-misc-5.3.1.jar:/opt/elasticsearch/lib/asm-4.1.jar:/opt/elasticsearch/lib/elasticsearch-2.1.0.jar:/opt/elasticsearch/lib/apache-log4j-extras-1.2.17.jar:/opt/elasticsearch/lib/lucene-sandbox-5.3.1.jar:/opt/elasticsearch/lib/lucene-queries-5.3.1.jar:/opt/elasticsearch/lib/lucene-suggest-5.3.1.ja
[2015-12-01 12:54:42,887][DEBUG][env                      ] [Batragon] using node location [[NodePath{path=/data/elasticsearch/adviqo-restore/nodes/0, spins=true}]], local_node_id [0]
[2015-12-01 12:54:42,889][DEBUG][env                      ] [Batragon] node data locations details:
 -> /data/elasticsearch/adviqo-restore/nodes/0, free_space [432.7gb], usable_space [357.8gb], total_space [1.4tb], spins? [possibly], mount [/data/elasticsearch (/dev/mapper/vg_graylogn1502-data_elasticsearch_vol)], type [ext4]
[2015-12-01 12:54:42,905][DEBUG][threadpool               ] [Batragon] creating thread_pool [generic], type [cached], keep_alive [30s]
[2015-12-01 12:54:42,910][DEBUG][threadpool               ] [Batragon] creating thread_pool [index], type [fixed], size [16], queue_size [200]
[2015-12-01 12:54:42,911][DEBUG][threadpool               ] [Batragon] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [32], keep_alive [5m]
[2015-12-01 12:54:42,911][DEBUG][threadpool               ] [Batragon] creating thread_pool [get], type [fixed], size [16], queue_size [1k]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [snapshot], type [scaling], min [1], size [5], keep_alive [5m]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [suggest], type [fixed], size [16], queue_size [1k]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [bulk], type [fixed], size [16], queue_size [50]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [warmer], type [scaling], min [1], size [5], keep_alive [5m]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [flush], type [scaling], min [1], size [5], keep_alive [5m]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [search], type [fixed], size [25], queue_size [1k]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [32], keep_alive [5m]
[2015-12-01 12:54:42,912][DEBUG][threadpool               ] [Batragon] creating thread_pool [listener], type [fixed], size [8], queue_size [null]
[2015-12-01 12:54:42,913][DEBUG][threadpool               ] [Batragon] creating thread_pool [percolate], type [fixed], size [16], queue_size [1k]
[2015-12-01 12:54:42,913][DEBUG][threadpool               ] [Batragon] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
[2015-12-01 12:54:42,913][DEBUG][threadpool               ] [Batragon] creating thread_pool [refresh], type [scaling], min [1], size [8], keep_alive [5m]

Can you upload the full logs please somewhere?

I removed your post as that website looks like a SPAM site or whatever. Lots of annoying ads pop up.
Please use something like gist.github.com for example.

Oh, sorry for that..

Here is a new one: https://gist.github.com/anonymous/c2032485eebc0615a7b2

Thanks again for your time.

cheers,

Ricardo