Indices lost after system bounce - IndexMissingException

Hello folks,
I'm running Mac OS X 10.6 with Elasticsearch 0.9.0 (the startup banner reads {elasticsearch/0.9.0}[211]).

We are programmatically building two indexes with (sub)types via the Groovy client (the server has been started separately).
Everything is hunky dory while the system is running.
I can shut the server down in its terminal window with Ctrl-C or by sending the shutdown-all curl command, and restart it with elasticsearch -f.
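
For reference, the curl equivalent of what our Groovy client does when creating an index looks roughly like this (a sketch only; the index name and settings match my config below, mappings omitted, and I'm guessing at the exact 0.9.0 settings syntax from the docs):

curl -XPUT 'http://localhost:9200/backstage/' -d '{
    "index" : {
        "number_of_shards" : 5,
        "number_of_replicas" : 4
    }
}'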

The problem: when I bounce my system (a full reboot) and restart the server, the indexes are missing.
The gateway is set to fs, but the store is set to memory (my full yml is below).

After the bounce I do see Lucene segment files in my work directory:
/usr/local/elasticsearch-0.9.0/work/Ticketfly-1/nodes/0/indices/backstage/1/index
GIDEON-KAPLANs-MacBook-Pro:index gideon$ ls
_0.cfs _0_1.del segments.gen segments_2

But the indices directory seems to be missing from the gateway fs store (/tmp/elasticsearch/data/cluster) after the system bounce. The directory below is only recreated when I rebuild the indexes programmatically:
/tmp/elasticsearch/data/cluster/Ticketfly-1/indices

This is before rebuilding:
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ cat metadata-0
{
  "meta-data" : {
    "indices" : {
    }
  }
}
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ pwd
/tmp/elasticsearch/data/cluster/Ticketfly-1/metadata

This is after rebuilding (metadata-5 is the only file there):
GIDEON-KAPLANs-MacBook-Pro:metadata gideon$ cat metadata-5
{
  "meta-data" : {
    "indices" : {
      "inventory" : {
        "settings" : {
          "index.number_of_shards" : "5",
          "index.number_of_replicas" : "4"
        },
        "mappings" : [ {
        .....

I do see the index files on the file system, though, so maybe I'm just doing something wrong. Is there a way to tell Elasticsearch to rediscover the existing indexes on startup, or to not delete them on shutdown (which is what I think it's doing)?
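
To double-check what the master thinks exists after the bounce, I can also ask the running node for its cluster state (assuming the endpoint works the same in 0.9.0) and compare its indices section against the metadata files above:

curl -XGET 'http://localhost:9200/_cluster/state'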

Does one have to set the store to fs in the yml file? (My guess at that change is sketched after my config below.)

Sorry if this is a dumb question; I'm a bit new to this tech.

My yml file:

name: SearchDevelopmentInstance

cluster:
  name: Ticketfly-1

node:
  data: true

http:
  enabled: true

network:
  #bind_host: 0.0.0.0
  #publish_host: eth1
  host: 127.0.0.1

gateway:
  type: fs
  fs:
    location: /tmp/elasticsearch/data/cluster

index:
  number_of_shards: 5
  number_of_replicas: 4
  analysis:
    analyzer:
      standard:
        type: standard

store:
  type: memory
  memory:
    cache_size: 100m
    buffer_size: 10k

#transport:
tcp:
  port: 9300
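
If the in-memory store is what's throwing the data away on restart, my guess is that the fix is to change the store block to something like this (a sketch based on my reading of the docs, not tested yet), keeping the fs gateway as-is:

store:
  # persist shard data on disk instead of in a memory buffer
  type: fs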

Startup log after the bounce:
GIDEON-KAPLANs-MacBook-Pro:bin gideon$ elasticsearch -f
[11:41:56,616][INFO ][node ] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: initializing ...
[11:41:56,635][INFO ][plugins ] [SearchDevelopmentInstance] loaded []
[11:41:58,224][DEBUG][cache.memory ] [SearchDevelopmentInstance] using bytebuffer cache with buffer_size [100kb], cache_size [200mb], direct [true], warm_cache [false]
[11:41:58,272][DEBUG][threadpool.cached ] [SearchDevelopmentInstance] Initializing cached thread pool with keep_alive[1m], scheduled_size[20]
[11:41:58,317][DEBUG][discovery.zen.ping.multicast] [SearchDevelopmentInstance] using group [224.2.2.4], with port [54328], ttl [3], and address [null]
[11:41:58,323][DEBUG][discovery.zen.ping.unicast] [SearchDevelopmentInstance] using initial hosts []
[11:41:58,333][DEBUG][discovery.zen ] [SearchDevelopmentInstance] using initial_ping_timeout [3s]
[11:41:58,335][DEBUG][discovery.zen.fd ] [SearchDevelopmentInstance] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[11:41:58,342][DEBUG][discovery.zen.fd ] [SearchDevelopmentInstance] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[11:41:58,410][DEBUG][env ] [SearchDevelopmentInstance] using node location [/usr/local/elasticsearch-0.9.0/work/Ticketfly-1/nodes/0]
[11:41:58,504][DEBUG][monitor.memory.alpha ] [SearchDevelopmentInstance] interval [500ms], upper_memory_threshold [0.95], lower_memory_threshold [0.8], translog_number_of_operations_threshold [5000]
[11:41:58,536][DEBUG][monitor.network ] [SearchDevelopmentInstance] net_info
host [GIDEON-KAPLANs-MacBook-Pro.local]
en1 display_name [en1]
address [/10.59.15.40] [/fe80:0:0:0:cabc:c8ff:fed9:a56%5]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
en0 display_name [en0]
address [/10.56.10.139] [/fe80:0:0:0:cabc:c8ff:fe96:9069%4]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo0 display_name [lo0]
address [/127.0.0.1] [/fe80:0:0:0:0:0:0:1%1] [/0:0:0:0:0:0:0:1]
mtu [16384] multicast [true] ptp [false] loopback [true] up [true] virtual [false]

[11:41:58,626][DEBUG][indices.recovery.throttler] [SearchDevelopmentInstance] concurrent_recoveries [4], concurrent_streams [4] interval [100ms]
[11:41:58,631][DEBUG][indices.memory ] [SearchDevelopmentInstance] using index_buffer_size [406.2mb], with min_shard_index_buffer_size [4mb]
[11:41:58,636][DEBUG][gateway.fs ] [SearchDevelopmentInstance] Latest metadata found at index [-1]
[11:41:58,637][INFO ][node ] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: initialized
[11:41:58,637][INFO ][node ] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: starting ...
[11:41:58,698][DEBUG][netty.channel.socket.nio.NioProviderMetadata] Using the autodetected NIO constraint level: 0
[11:41:58,740][DEBUG][transport.netty ] [SearchDevelopmentInstance] Bound to address [/127.0.0.1:9300]
[11:41:58,741][INFO ][transport ] [SearchDevelopmentInstance] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
[11:42:01,865][DEBUG][discovery.zen ] [SearchDevelopmentInstance] ping responses: {none}
[11:42:01,870][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [zen-disco-join (elected_as_master)]: execute
[11:42:01,871][DEBUG][cluster.service ] [SearchDevelopmentInstance] cluster state updated, version [1], source [zen-disco-join (elected_as_master)]
[11:42:01,872][INFO ][cluster.service ] [SearchDevelopmentInstance] new_master [SearchDevelopmentInstance][3d1d41e1-4846-4024-8738-230fbd6927b0][inet[/127.0.0.1:9300]], reason: zen-disco-join (elected_as_master)
[11:42:02,014][DEBUG][transport.netty ] [SearchDevelopmentInstance] Connected to node [[SearchDevelopmentInstance][3d1d41e1-4846-4024-8738-230fbd6927b0][inet[/127.0.0.1:9300]]]
[11:42:02,016][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [zen-disco-join (elected_as_master)]: done applying updated cluster_state
[11:42:02,016][INFO ][discovery ] [SearchDevelopmentInstance] Ticketfly-1/3d1d41e1-4846-4024-8738-230fbd6927b0
[11:42:02,016][DEBUG][gateway ] [SearchDevelopmentInstance] reading state from gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1] ...
[11:42:02,017][DEBUG][gateway ] [SearchDevelopmentInstance] read state from gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1], took 0s
[11:42:02,017][DEBUG][gateway ] [SearchDevelopmentInstance] no state read from gateway
[11:42:02,018][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [gateway (marked as read, reason=no state)]: execute
[11:42:02,019][DEBUG][cluster.service ] [SearchDevelopmentInstance] cluster state updated, version [2], source [gateway (marked as read, reason=no state)]
[11:42:02,020][DEBUG][gateway ] [SearchDevelopmentInstance] writing to gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1] ...
[11:42:02,020][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [gateway (marked as read, reason=no state)]: done applying updated cluster_state
[11:42:02,023][INFO ][http ] [SearchDevelopmentInstance] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[11:42:02,172][DEBUG][gateway ] [SearchDevelopmentInstance] wrote to gateway fs:///tmp/elasticsearch/data/cluster/[Ticketfly-1], took 152ms
[11:42:02,236][INFO ][jmx ] [SearchDevelopmentInstance] bound_address {service:jmx:rmi:///jndi/rmi://:9400/jmxrmi}, publish_address {service:jmx:rmi:///jndi/rmi://127.0.0.1:9400/jmxrmi}
[11:42:02,236][INFO ][node ] [SearchDevelopmentInstance] {elasticsearch/0.9.0}[211]: started
[11:42:12,016][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [routing-table-updater]: execute
[11:42:12,018][DEBUG][cluster.service ] [SearchDevelopmentInstance] processing [routing-table-updater]: no change in cluster_state

A query for a document I know was there before the shutdown:

GIDEON-KAPLANs-MacBook-Pro:~ gideon$ curl -XGET 'http://localhost:9200/backstage/event/11637'
{"error":"IndexMissingException[[backstage] missing]"}GIDEON-KAPLANs-MacBook-Pro: