Getting UnavailableShardsException on PUT, and NoShardAvailableActionException on GET


(ryaker) #1

Yesterday evening our EC2 instance running Elasticsearch became
unreachable. Ping, SSH, and Elasticsearch were all unreachable even after
multiple reboots.
The solution:

  • Spin up a new EC2 instance.
  • Use Chef to provision it as an Elasticsearch server (same
    elasticsearch.yml).
  • Point our task server (which we use to build the indexes) at the new
    server, and rebuild our Food, SavedMeal, Recipe, and UserSet (Groups)
    indexes.
  • Point our two web servers (which search the index and add new
    documents) at the new server.
  • Open the site back up.

The first problem I ran into, and seem to have solved (although I'm not
sure how good the solution is), was "too many open files".
I am now running start-stop-daemon via sudo, which picks up the
limits.conf settings and has raised the limit from 1024.
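For reference, here is a minimal sketch of raising the open-file limit persistently via limits.conf (the user name and the value 65535 are assumptions; tune them to your setup):

```shell
# /etc/security/limits.conf -- raise the open-file limit for the ES user
# (user name and value are assumptions, not from this thread)
elasticsearch soft nofile 65535
elasticsearch hard nofile 65535

# Verify from a shell running as that user:
ulimit -n
```

Note that limits.conf is applied by PAM, so whether a daemon picks it up depends on how it is launched; running through sudo (as above) is one way to get a PAM session that applies it.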

Search is now snappy and consistent.

However, I am now unable to add documents to the index.

I'm getting:
UnavailableShardsException[[default][4] [2] shardIt, [0] active : Timeout
waiting for [1m], request: index {[default][Food][206621],
source[{"id":206621,"name":"French Vanilla Formula 1
(US)","description":"","slug":"french-vanilla-formula-1-us","media_id":null,"user_id":19645,"serving_size":"2
scoops","gram_weight":null,"calories":null,"calories_from_fat":10,"total_fat":1,"saturated_fat":null,"polyunsaturated_fat":null,"monounsaturated_fat":null,"cholesterol":null,"sodium":140,"potassium":210,"total_carbohydrate":13,"dietary_fiber":3,"sugars":9,"protein":9,"vitamin_a":25,"vitamin_c":25,"calcium":8,"iron":15,"vitamin_d":25,"vitamin_e":25,"vitamin_k":0,"thiamin":25,"riboflavin":25,"niacin":25,"vitamin_b6":25,"folic_acid":25,"vitamin_b12":25,"pantothenic_acid":25,"phosphorus":15,"magnesium":10,"zinc":25,"selenium":15,"copper":25,"manganese":null,"kcal":90,"kj":null,"alcohol":null,"caffeine":null,"rank":null,"counter":null,"expert_rank":null,"pantry_counter":null,"brand":"Herbalife","overall_rank":null,"verified":false,"source":"iChange","culture":"en_US","indexable":1}]}]

And if I try to fetch directly an indexed item that is returned properly
by search, I get:

{"error": "NoShardAvailableActionException[[default][0] No shard available
for [Food#16447]]","status": 500}

This is my elasticsearch.yml:

cluster:
  name: ichange_cluster
index:
  store:
    type: fs
cloud:
  aws:
    access_key: AKIAJ2XMXZ6C4EUYP36Q
    secret_key: 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
network:
  host: 0.0.0.0
gateway:
  type: local
  recover_after_nodes: 1
  recover_after_time: 5m
  expected_nodes: 1
path:
  logs: /var/log/elasticsearch
  data: /var/lib/elasticsearch
  work: /tmp/elasticsearch


(ryaker) #2

When I try a query:

curl 'http://ec2-23-23-23-246.compute-1.amazonaws.com:9200/_search?pretty=true' -d '{"query":{"term":{"name":"apple"}}}'

I get:

"_shards": {
  "total": 30,
  "successful": 25,
  "failed": 5,
  "failures": [
    {"index": "default", "shard": 0, "reason": "No active shards"},
    {"index": "default", "shard": 1, "reason": "No active shards"},
    {"index": "default", "shard": 2, "reason": "No active shards"},
    {"index": "default", "shard": 3, "reason": "No active shards"},
    {"index": "default", "shard": 4, "reason": "No active shards"}
  ]
},


(ryaker) #3

FYI: I'm not sure how to tell what version of Elasticsearch I'm running.

But all the issues I am seeing that also include failures with "reason":
"No active shards" are from within the last month.

This server was set up via Chef last night.
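(As an aside on the version question: the running version can be read from the root endpoint of the HTTP API. A quick sketch, with localhost standing in for your host:)

```shell
# The cluster root returns a JSON body that includes a "version" object
curl 'http://localhost:9200/?pretty=true'
# Look for "version" : { "number" : "..." } in the response
```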



(olof) #4

Just in case you didn't consider it: we can connect to your ES instance at
that URL. Although I doubt anyone here has malicious intent, you might want
to remove the link if there is anything sensitive in the index.

I connected with elasticsearch-head (that and bigdesk are good monitoring
tools, by the way) and it seems the index "default" is somehow known to the
cluster, but all the data is missing. There is no status to fetch from it,
but there is metadata (enough to say that there are 5 shards and a replica
count of 1). Is it present in the filesystem? It should be in the ES data
dir somewhere (data/[clustername]/nodes/0/indices/[indexname] for me in
0.19.4). You are running 0.16.2 according to the cluster info.

Have you tried restarting ES again? For me, that has sometimes helped it
pick up shards.
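(A concrete way to do that filesystem check, using the data path and cluster name from the elasticsearch.yml earlier in the thread; the node ordinal 0 assumes a single node:)

```shell
# List the on-disk indices for the first node of the cluster
ls /var/lib/elasticsearch/ichange_cluster/nodes/0/indices/
# Each index directory should contain one numbered folder per shard,
# each with an index/ and a translog/ subdirectory
```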



(ryaker) #5

Nothing sensitive, but I deleted the post with the link.

What should the data look like?

In data/[clustername]/nodes/0/indices/food I see folders 0-4; in each of
those is an index folder and a translog folder.

In data/[clustername]/nodes/0/indices/food/0/index I see:

0/indices/food/0/index:
total 7560
-rw-r--r-- 1 root root 14 2012-07-19 18:42 _55_1.del
-rw-r--r-- 1 root root 890461 2012-07-19 01:10 _55.fdt
-rw-r--r-- 1 root root 7364 2012-07-19 01:10 _55.fdx
-rw-r--r-- 1 root root 92 2012-07-19 01:10 _55.fnm
-rw-r--r-- 1 root root 48615 2012-07-19 01:10 _55.frq
-rw-r--r-- 1 root root 5524 2012-07-19 01:10 _55.nrm
-rw-r--r-- 1 root root 66703 2012-07-19 01:10 _55.prx
-rw-r--r-- 1 root root 876 2012-07-19 01:10 _55.tii
-rw-r--r-- 1 root root 61733 2012-07-19 01:10 _55.tis
-rw-r--r-- 1 root root 997 2012-08-07 20:31 _6i.fdt
-rw-r--r-- 1 root root 12 2012-08-07 20:31 _6i.fdx
-rw-r--r-- 1 root root 92 2012-08-07 20:31 _6i.fnm
-rw-r--r-- 1 root root 40 2012-08-07 20:31 _6i.frq
-rw-r--r-- 1 root root 10 2012-08-07 20:31 _6i.nrm
-rw-r--r-- 1 root root 79 2012-08-07 20:31 _6i.prx
-rw-r--r-- 1 root root 35 2012-08-07 20:31 _6i.tii
-rw-r--r-- 1 root root 321 2012-08-07 20:31 _6i.tis
-rw-r--r-- 1 root root 9 2012-08-08 10:12 _6j_1.del
-rw-r--r-- 1 root root 2048 2012-08-08 09:59 _6j.fdt
-rw-r--r-- 1 root root 20 2012-08-08 09:59 _6j.fdx
-rw-r--r-- 1 root root 103 2012-08-08 09:59 _6j.fnm
-rw-r--r-- 1 root root 46 2012-08-08 09:59 _6j.frq
-rw-r--r-- 1 root root 18 2012-08-08 09:59 _6j.nrm
-rw-r--r-- 1 root root 64 2012-08-08 09:59 _6j.prx
-rw-r--r-- 1 root root 35 2012-08-08 09:59 _6j.tii
-rw-r--r-- 1 root root 242 2012-08-08 09:59 _6j.tis
-rw-r--r-- 1 root root 14 2012-06-20 20:10 _a_1.del
-rw-r--r-- 1 root root 2988602 2012-06-20 18:40 _a.fdt
-rw-r--r-- 1 root root 22300 2012-06-20 18:40 _a.fdx
-rw-r--r-- 1 root root 92 2012-06-20 18:40 _a.fnm
-rw-r--r-- 1 root root 220586 2012-06-20 18:40 _a.frq
-rw-r--r-- 1 root root 16726 2012-06-20 18:40 _a.nrm
-rw-r--r-- 1 root root 220337 2012-06-20 18:40 _a.prx
-rw-r--r-- 1 root root 4066 2012-06-20 18:40 _a.tii
-rw-r--r-- 1 root root 303152 2012-06-20 18:40 _a.tis
-rw-r--r-- 1 root root 651 2012-08-08 10:12 _checksums-1344420773785
-rw-r--r-- 1 root root 2283797 2012-06-20 18:40 _l.fdt
-rw-r--r-- 1 root root 18860 2012-06-20 18:40 _l.fdx
-rw-r--r-- 1 root root 92 2012-06-20 18:40 _l.fnm
-rw-r--r-- 1 root root 128741 2012-06-20 18:40 _l.frq
-rw-r--r-- 1 root root 14146 2012-06-20 18:40 _l.nrm
-rw-r--r-- 1 root root 171440 2012-06-20 18:40 _l.prx
-rw-r--r-- 1 root root 1615 2012-06-20 18:40 _l.tii
-rw-r--r-- 1 root root 119432 2012-06-20 18:40 _l.tis
-rw-r--r-- 1 root root 1180 2012-08-08 10:12 segments_4c
-rw-r--r-- 1 root root 20 2012-08-08 10:12 segments.gen
-rw-r--r-- 1 root root 0 2012-08-08 10:43 write.lock

I've restarted the server multiple times.

I assume that 0.16.2 is older than 0.19.4; any idea how I'd upgrade?
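(A rough sketch of a single-node tarball upgrade; the download URL, init script name, and config path are assumptions that may differ on your system, so back up the data directory first and check the release notes for index-format compatibility between 0.16 and 0.19:)

```shell
# Stop the old node and back up its data directory before anything else
sudo /etc/init.d/elasticsearch stop                      # or however the daemon is managed
sudo cp -a /var/lib/elasticsearch /var/lib/elasticsearch.bak

# Unpack the new release and start it against the existing config,
# which points at the same data/log paths
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.19.4.tar.gz
tar xzf elasticsearch-0.19.4.tar.gz
./elasticsearch-0.19.4/bin/elasticsearch -Des.config=/etc/elasticsearch/elasticsearch.yml
```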



(ryaker) #6

How were you able to tell the version number of Elasticsearch?



(ryaker) #7

/_cluster/health?pretty=true

{
  "cluster_name": "ichange_cluster",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 25,
  "active_shards": 25,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 35
}

