Indices lost after system bounce - IndexMissingException

Hello folks,
I'm running Mac OS X 10.6.4 and we are using Elasticsearch 0.9.0 ({elasticsearch/0.9.0}[211]).

We are programmatically building 2 indexes with (sub) types via the Groovy
client (the server has been started separately).
Everything is hunky dory while the system is running: I can shut the
server down in its terminal window with Ctrl-C or by sending the
shutdown-all curl command, and restart it using elasticsearch -f.
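For reference, the shutdown-all call I'm using is the nodes shutdown API,
roughly like this (from memory, assuming a single local node on the default
HTTP port):

curl -XPOST 'http://localhost:9200/_cluster/nodes/_shutdown'
bin/elasticsearch -f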

The problem is that when I bounce my system (a full machine restart) and
then restart the server, the indexes are missing.
I believe the gateway is set to fs, but the store is set to memory.

After the bounce I do see some files in my work directory:
/usr/local/elasticsearch-0.9.0/work/Ticketfly-1/nodes/0/indices/backstage/1/index
GIDEON-KAPLANs-MacBook-Pro:index gideon$ ls
_0.cfs  _0_1.del  segments.gen  segments_2

But it looks like the indices directory is missing from the gateway fs
store (/tmp/elasticsearch/data/cluster) after a system bounce. The
directory below is recreated when I rebuild the indexes programmatically:
/tmp/elasticsearch/data/cluster/Ticketfly-1/indices

This is before rebuilding:
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ cat metadata-0
{
  "meta-data" : {
    "indices" : {
    }
  }
}
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ pwd
/tmp/elasticsearch/data/cluster/Ticketfly-1/metadata

This is after (metadata-5 is the only file there):
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ cat metadata-5
{
  "meta-data" : {
    "indices" : {
      "inventory" : {
        "settings" : {
          "index.number_of_shards" : "5",
          "index.number_of_replicas" : "4"
        },
        "mappings" : [ {
.....

I do see them on the file system, though. Maybe I'm just doing something
wrong, or is there a way to tell it to rediscover the existing indexes on
startup, or not to delete them on shutdown (which is what I think it's
doing)?

Does one have to set the store to fs in the yml file?

Sorry if this is a dumb question; I'm a bit new to this tech.

my yml file:

name: SearchDevelopmentInstance

cluster:
  name: Ticketfly-1

node:
  data: true

http:
  enabled: true

network:
  #bind_host: 0.0.0.0
  #publish_host: eth1
  host: 127.0.0.1

gateway:
  type: fs
  fs:
    location: /tmp/elasticsearch/data/cluster

index:
  number_of_shards: 5
  number_of_replicas: 4
  analysis:
    analyzer:
      standard:
        type: standard

store:
  type: memory
  memory:
    cache_size: 100m
    buffer_size: 10k

#transport:
tcp:
  port: 9300

Startup after bounce:

GIDEON-KAPLANs-MacBook-Pro:bin gideon$ elasticsearch -f
[11:41:56,616][INFO ][node] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: initializing ...
[11:41:56,635][INFO ][plugins] [SearchDevelopmentInstance] loaded []
[11:41:58,224][DEBUG][cache.memory] [SearchDevelopmentInstance] using bytebuffer cache with buffer_size [100kb], cache_size [200mb], direct [true], warm_cache [false]
[11:41:58,272][DEBUG][threadpool.cached] [SearchDevelopmentInstance] Initializing cached thread pool with keep_alive[1m], scheduled_size[20]
[11:41:58,317][DEBUG][discovery.zen.ping.multicast] [SearchDevelopmentInstance] using group [224.2.2.4], with port [54328], ttl [3], and address [null]
[11:41:58,323][DEBUG][discovery.zen.ping.unicast] [SearchDevelopmentInstance] using initial hosts []
[11:41:58,333][DEBUG][discovery.zen] [SearchDevelopmentInstance] using initial_ping_timeout [3s]
[11:41:58,335][DEBUG][discovery.zen.fd] [SearchDevelopmentInstance] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[11:41:58,342][DEBUG][discovery.zen.fd] [SearchDevelopmentInstance] [node] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[11:41:58,410][DEBUG][env] [SearchDevelopmentInstance] using node location [/usr/local/elasticsearch-0.9.0/work/Ticketfly-1/nodes/0]
[11:41:58,504][DEBUG][monitor.memory.alpha] [SearchDevelopmentInstance] interval [500ms], upper_memory_threshold [0.95], lower_memory_threshold [0.8], translog_number_of_operations_threshold [5000]
[11:41:58,536][DEBUG][monitor.network] [SearchDevelopmentInstance] net_info
    host [GIDEON-KAPLANs-MacBook-Pro.local]
    en1 display_name [en1]
        address [/10.59.15.40] [/fe80:0:0:0:cabc:c8ff:fed9:a56%5]
        mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
    en0 display_name [en0]
        address [/10.56.10.139] [/fe80:0:0:0:cabc:c8ff:fe96:9069%4]
        mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
    lo0 display_name [lo0]
        address [/127.0.0.1] [/fe80:0:0:0:0:0:0:1%1] [/0:0:0:0:0:0:0:1]
        mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]
[11:41:58,626][DEBUG][indices.recovery.throttler] [SearchDevelopmentInstance] concurrent_recoveries [4], concurrent_streams [4] interval [100ms]
[11:41:58,631][DEBUG][indices.memory] [SearchDevelopmentInstance] using index_buffer_size [406.2mb], with min_shard_index_buffer_size [4mb]
[11:41:58,636][DEBUG][gateway.fs] [SearchDevelopmentInstance] Latest metadata found at index [-1]
[11:41:58,637][INFO ][node] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: initialized
[11:41:58,637][INFO ][node] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: starting ...
[11:41:58,698][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[11:41:58,740][DEBUG][transport.netty] [SearchDevelopmentInstance] Bound to address [/127.0.0.1:9300]
[11:41:58,741][INFO ][transport] [SearchDevelopmentInstance] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
[11:42:01,865][DEBUG][discovery.zen] [SearchDevelopmentInstance] ping responses: {none}
[11:42:01,870][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [zen-disco-join (elected_as_master)]: execute
[11:42:01,871][DEBUG][cluster.service] [SearchDevelopmentInstance] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[11:42:01,872][INFO ][cluster.service] [SearchDevelopmentInstance] new_master [SearchDevelopmentInstance][3d1d41e1-4846-4024-8738-230fbd6927b0][inet[/127.0.0.1:9300]], reason: zen-disco-join (elected_as_master)
[11:42:02,014][DEBUG][transport.netty] [SearchDevelopmentInstance] Connected to node [[SearchDevelopmentInstance][3d1d41e1-4846-4024-8738-230fbd6927b0][inet[/127.0.0.1:9300]]]
[11:42:02,016][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[11:42:02,016][INFO ][discovery] [SearchDevelopmentInstance] Ticketfly-1/3d1d41e1-4846-4024-8738-230fbd6927b0
[11:42:02,016][DEBUG][gateway] [SearchDevelopmentInstance] reading state from gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1] ...
[11:42:02,017][DEBUG][gateway] [SearchDevelopmentInstance] read state from gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1], took 0s
[11:42:02,017][DEBUG][gateway] [SearchDevelopmentInstance] no state read from gateway
[11:42:02,018][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [gateway (marked as read, reason=no state)]: execute
[11:42:02,019][DEBUG][cluster.service] [SearchDevelopmentInstance] cluster state updated, version [2], source [gateway (marked as read, reason=no state)]
[11:42:02,020][DEBUG][gateway] [SearchDevelopmentInstance] writing to gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1] ...
[11:42:02,020][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [gateway (marked as read, reason=no state)]: done applying updated cluster_state
[11:42:02,023][INFO ][http] [SearchDevelopmentInstance] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[11:42:02,172][DEBUG][gateway] [SearchDevelopmentInstance] wrote to gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1], took 152ms
[11:42:02,236][INFO ][jmx] [SearchDevelopmentInstance] bound_address {service:jmx:rmi:///jndi/rmi://:9400/jmxrmi}, publish_address {service:jmx:rmi:///jndi/rmi://127.0.0.1:9400/jmxrmi}
[11:42:02,236][INFO ][node] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: started
[11:42:12,016][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [routing-table-updater]: execute
[11:42:12,018][DEBUG][cluster.service] [SearchDevelopmentInstance] processing [routing-table-updater]: no change in cluster_state

Query to get a document I know was there before shutdown:

GIDEON-KAPLANs-MacBook-Pro:~ gideon$ curl -XGET 'http://localhost:9200/backstage/event/11637'
{"error":"IndexMissingException[[backstage] missing]"}

Hi,

First, can you try using the latest version (0.12)? It will make this
simpler to support.

You have to use the shared gateway (fs) if you are going to store the
index in memory. Otherwise, I recommend starting with the local gateway
(which is the default out of the box in 0.12). The index local storage
(not the gateway) defaults to the file system, so you can just remove the
store settings from the index.

A few points:

  1. If you are going to store the index in memory, make sure you have
     enough of it.
  2. A file system index can be pretty fast; make sure you really need the
     index to be in memory for the extra juice.
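For example, on 0.12 with the local gateway, the config could shrink to
something like this (just a sketch, keeping your cluster name and network
settings; with no gateway or store sections you get the local gateway and
file system storage by default):

name: SearchDevelopmentInstance

cluster:
  name: Ticketfly-1

network:
  host: 127.0.0.1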

Also, post shutdown, can you check if the indices are listed in the
metadata file?

-shay.banon


Thanks Shay,
I will try the steps you mentioned and update with status (still wrapping
my head around the multiple plugins we use here, and I will need to update
some settings/configs to do so). I also need to get a better understanding
of the underlying Lucene, specifically gateways and stores.
From the config I posted, is it expected behavior that, with the store set
to memory, a "server" bounce results in the loss of the indexes? That's
the result I see when restarting my entire computer, not just
Elasticsearch (if I just bounce ES, the indexes come back).
Thanks for the speedy response,
G


If you have the index store in memory, then you have to have a shared
gateway configured (like the fs one). This will survive both an ES restart
and a machine restart (it's the same as far as ES is concerned), assuming
you are not deleting the data from the gateway.
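A quick way to verify: after a machine restart, once the index has
recovered from the gateway, the get that failed before should return the
document again, e.g.:

curl -XGET 'http://localhost:9200/backstage/event/11637'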

-shay.banon


Hmm,
that's not the behavior I'm seeing when doing a computer restart. It's
only durable if I bounce ES but leave the computer up; maybe that's
because it's configured to use a subdir of /tmp? But I do see the other
files persist.
I will try updating the plugin first with the same config, i.e. a
memory-based store, and see if that fixes it.
Maybe there's also some functionality that deletes the indices on shutdown
that I'm not aware of.
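One quick check I can do is list the gateway location right before and
right after a computer restart, to see whether the OS is clearing it:

ls /tmp/elasticsearch/data/cluster/Ticketfly-1
ls /tmp/elasticsearch/data/cluster/Ticketfly-1/metadata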


Hi again,
I think I found out why the indices were lost: I was setting the gateway
file location to a subdir of /tmp on the Mac, which apparently gets
deleted on some schedule.

Changing the yml config as follows seems to allow it to survive a computer
restart.

Old config:
gateway:
  type: fs
  fs:
    location: /tmp/elasticsearch/data/cluster

New config:
gateway:
  type: fs
  fs:
    location: /usr/local/elasticsearch-0.9.0/data/cluster

This is one article I found about /tmp being deleted on Mac OS X:
http://efreedom.com/Question/3-187071/Mac-OS-106-Often-Tmp-Deleted
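As a sanity check after a full computer restart, the metadata under the
new location should still list the indices (same layout as the /tmp paths
above):

ls /usr/local/elasticsearch-0.9.0/data/cluster/Ticketfly-1/metadata
cat /usr/local/elasticsearch-0.9.0/data/cluster/Ticketfly-1/metadata/metadata-*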
