The ElasticSearch directory layout

Can anyone provide insight into the files that ElasticSearch creates on the
filesystem? Outside of the Lucene files, what is the significance of the
state, global, and other various files? These files constitute the local
gateway, but I was hoping for a better rundown than scanning the code myself!

I am trying to analyze an unresolved split-brain scenario where a
single-node restart did not fix the problem. (
https://github.com/elasticsearch/elasticsearch/issues/2488 &
http://elasticsearch-users.115913.n3.nabble.com/Nodes-fail-to-join-cluster-potential-split-brain-scenario-td4030489.html
)

Since a restart did not resolve the issue, the incorrect master state was
evidently stored in the gateway on disk. I am hoping to leverage the
existing classes and write a simple utility class to parse the state into a
human-readable format (a read-only sanity check).

The problem was "fixed" by deleting the data directory and having the shards
replicate themselves once the nodes with the incorrect masters/cluster state
could finally be restarted. Would it be possible to delete only the state,
or would doing so invalidate the data as well? A few pointers would be
helpful so that I do not have to read all the code involved.

Cheers,

Ivan


Hej Ivan, good catch, I hope I can include the parsing of Elasticsearch
files into my skywalker plugin for 0.90 ...

My rough understanding is: if the index is busted, folders at the shard
level may be gone or deleted, e.g.

data/<cluster-name>/nodes/<node-id>/indices/<index-name>/<shard-id>/

and if the files cannot be read, ES will ask the gateway to react with a
recovery of the shard.
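
For reference, the layout on disk looks roughly like this for a 0.19/0.20
node with the local gateway (cluster, node, index, and shard names below are
placeholders, and the file-name patterns are from memory):

    data/
      <cluster-name>/
        nodes/
          0/                       (node ordinal)
            node.lock              (only present at runtime)
            _state/                global cluster metadata (global-<version>)
            indices/
              <index-name>/
                _state/            index metadata (state-<version>)
                0/                 (shard id)
                  _state/          shard state (state-<version>)
                  index/           the Lucene files
                  translog/        the transaction log

The _state directories at the node, index, and shard level are the local
gateway files; index/ and translog/ belong to the Lucene shard and its
transaction log.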

There are also lock files, which are only present at runtime, and checksum files.

The _state files hold redundant copies of the cluster state, so if these
files are gone, ES will look in other places in the hope of finding the
missing state information. I still have to explore the code to find out
what these "other places" are and how an ES node recovers and reconciles
cluster states read from disk. Right now I would assume the master node's
cluster state simply overrides conflicting information...

Cheers,

Jörg

On 04.03.13 18:01, Ivan Brusic wrote:

I am hoping to leverage the existing classes and write a simple utility
class to parse the state into a human-readable format (a read-only sanity
check).


On Mon, Mar 4, 2013 at 12:55 PM, Jörg Prante <joergprante@gmail.com> wrote:

Hej Ivan, good catch, I hope I can include the parsing of Elasticsearch
files into my skywalker plugin for 0.90 ...

I'm not sure that having two clusters in a red state is a good catch. :-)

My rough understanding is: if the index is busted, folders at the shard
level may be gone or deleted, e.g.

The issue was not that the index was busted (it was not), but that the
cluster state was incorrect. That state was persisted to disk, and I would
love more insight into exactly where it lives. The issue went away after
deleting an index, so it appears that the index-level (not global) state was
the incorrect one.

data/<cluster-name>/nodes/<node-id>/indices/<index-name>/<shard-id>/

and if the files cannot be read, ES will ask the gateway to react with a
recovery of the shard.

I understand the organization of the Lucene shards; it is the gateway files
I am looking for info on. If I leave the Lucene files and translogs intact
but delete the state, will the gateway recover only the state, or will it
invalidate the shard? I am assuming the latter.

There are also lock files, which are only present at runtime, and checksum files.

The _state files hold redundant copies of the cluster state, so if these
files are gone, ES will look in other places in the hope of finding the
missing state information. I still have to explore the code to find out what
these "other places" are and how an ES node recovers and reconciles cluster
states read from disk. Right now I would assume the master node's cluster
state simply overrides conflicting information...

The issue is that there are two master nodes in a split-brain cluster. I
figured out the Java classes for most of the state files (IndexMetaData,
MetaData), but I failed to find where the master state is persisted. I only
spent a few minutes looking.

I guess what I am looking for are two things:

  1. Where is the master/routing state persisted?

  2. What classes are responsible for reading the various files? I figured
    most of this part out, but I was hoping for a more authoritative answer.
    This second part would be a great addition to the skywalker plugin, but
    this is data at the file-system level and not discoverable via the API.
    During one cluster meltdown, the node would not even start correctly; I
    am not sure the site plugins still worked.

Cheers,

Jörg

Cheers,

Ivan

On 04.03.13 18:01, Ivan Brusic wrote:

I am hoping to leverage the existing classes and write a simple utility
class to parse the state into a human-readable format (a read-only sanity
check).


It all depends on the gateway; the cluster state is composed of state
information about indices, settings, and so on.

For the local gateway, I think I found it: it's the loadState() method in
org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.

In earlier ES versions (pre 0.19), the cluster state lived somewhere at the
top level of the data directory. kimchy moved the cluster state files down
to the shard level, so shards are more self-contained (for easy rsync'ing),
and "dangling index" support was also added. That is, you can drop file
archives into a node directory and ES will pick up the found indexes
automagically.

I had to copy/paste the state-reading code, so I can now locate the files
quite nicely with the help of org.elasticsearch.env.NodeEnvironment. I plan
to implement a small check that tests whether the cluster state on disk is
OK or not.
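
Outside of the ES classes, a plain recursive walk already finds the
candidate files; a rough standalone sketch (the data path below is just an
example, and this does not use NodeEnvironment):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    // Rough sketch: recursively collect every file that sits inside a
    // "_state" directory under an Elasticsearch data path.
    public class StateFileLister {

        public static List<File> findStateFiles(File dir) {
            List<File> found = new ArrayList<File>();
            collect(dir, found);
            return found;
        }

        private static void collect(File dir, List<File> found) {
            File[] children = dir.listFiles();
            if (children == null) {
                return; // not a directory, or not readable
            }
            for (File child : children) {
                if (child.isDirectory()) {
                    collect(child, found);
                } else if ("_state".equals(child.getParentFile().getName())) {
                    found.add(child);
                }
            }
        }

        public static void main(String[] args) {
            // example: java StateFileLister /var/lib/elasticsearch/data
            for (File stateFile : findStateFiles(new File(args[0]))) {
                System.out.println(stateFile.getAbsolutePath());
            }
        }
    }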

The format is SMILE (binary JSON). I'm wondering a bit, because I can't find
any checksum protection or similar for the state files on disk; it all
depends on a byte stream being written to and read from disk successfully.
This may fail in extreme low-memory situations. But since ES can somehow
recover from a corrupt state file, I wonder whether a cluster state repair
tool is necessary at all.
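
So the read-only sanity check could be as small as decoding one of those
files from SMILE to JSON. A minimal sketch, assuming the file really is
plain SMILE with no header or checksum, and using the external
jackson-databind and jackson-dataformat-smile jars (not the ES classes):

    import java.io.File;

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.fasterxml.jackson.dataformat.smile.SmileFactory;

    // Minimal sketch: dump a single _state file (SMILE) as pretty-printed JSON.
    public class StateFileDumper {

        public static void main(String[] args) throws Exception {
            // example: .../nodes/0/_state/global-42 or .../indices/<index>/_state/state-7
            File stateFile = new File(args[0]);

            ObjectMapper smileMapper = new ObjectMapper(new SmileFactory());
            ObjectMapper jsonMapper = new ObjectMapper();

            JsonNode state = smileMapper.readTree(stateFile);
            System.out.println(
                    jsonMapper.writerWithDefaultPrettyPrinter().writeValueAsString(state));
        }
    }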

Jörg

On 05.03.2013 02:23, Ivan Brusic wrote:

The issue is that there are two master nodes in a split-brain cluster. I
figured out the Java classes for most of the state files
(IndexMetaData, MetaData), but I failed to find where the master state
is persisted. I only spent a few minutes looking.

I guess what I am looking for are two things:

  1. Where is the master/routing state persisted?

  2. What classes are responsible for reading the various files? I
    figured most of this part out, but I was hoping for a more
    authoritative answer. This second part would be a great addition to
    the skywalker plugin, but this is data at the file-system level and
    not discoverable via the API. During one cluster meltdown, the node
    would not even start correctly; I am not sure the site plugins still worked.
