This is a weird one, but hopefully it will make more sense to someone else.
I have the following setup:
Happy 4-node ES cluster. Data is streaming in via Logstash into several daily
rolling indices. In particular, I have one index for CPU/mem/disk data
called "hardware-YYYY.MM.DD". I have a Kibana panel that plots the disk
usage % figure from this index for certain hosts over time.
The problem is that for seemingly random indices, Kibana displays either nothing
or ~0 values for that day's data. The drop-off always occurs at index
rollover time, and querying any time range within a "bad" index returns
no results (in Kibana). The data looks the same, and the mappings for the
good and bad indices are identical. Not sure what my/Kibana's problem is.
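(For reference, this is roughly how I'm comparing the mappings - just one way to do it; the good/bad index names below are examples from the plot:

# Dump the mappings for a "good" and a "bad" daily index and diff them.
curl -s 'http://~host~:9200/hardware-2014.10.19/_mapping?pretty' > good_mapping.json
curl -s 'http://~host~:9200/hardware-2014.10.20/_mapping?pretty' > bad_mapping.json
# Apart from the index name in the top-level wrapper, nothing differs.
diff good_mapping.json bad_mapping.json

)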
Here's an image of a single host's disk usage over time as a simplified version
of the issue, covering a 5-day period. The plotted value is the max of a
"usage" field, which is mapped as a long in ES in all indices.
You can see an overall trend of slowly increasing disk usage (~30-36%).
The indices for 2014.10.15, 2014.10.17, and 2014.10.20 are all affected.
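If anyone wants to sanity-check the raw values outside Kibana, something along these lines should work against a single day's index (a rough sketch; the statistical facet is just one way to get min/max, and the facet name is arbitrary):

# Min/max/mean of "usage" for one host in one daily index, bypassing Kibana.
curl -s 'http://~host~:9200/hardware-2014.10.20/_search?pretty' -d '{
  "size": 0,
  "query": {
    "query_string": {
      "query": "log_type:hardware AND type:diskdata AND host:~host~"
    }
  },
  "facets": {
    "usage_stats": {
      "statistical": { "field": "usage" }
    }
  }
}'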
I have tried all kinds of panel settings and drilled down to a time range
with a single data point on either side of a good-to-bad index rollover,
and it still doesn't make a lot of sense.
Actually, I'll just go ahead and post a sample query for a very narrow
window encompassing a few data points in the good and bad indices (taken
from an Inspect on the Kibana panel):
http://~host~:9200/hardware-2014.10.19,hardware-2014.10.20/_search
{
  "facets": {
    "83": {
      "date_histogram": {
        "key_field": "@timestamp",
        "value_field": "usage",
        "interval": "5m"
      },
      "global": true,
      "facet_filter": {
        "fquery": {
          "query": {
            "filtered": {
              "query": {
                "query_string": {
                  "query": "log_type:hardware AND type:diskdata AND host:~host~"
                }
              },
              "filter": {
                "bool": {
                  "must": [
                    {
                      "range": {
                        "@timestamp": {
                          "from": 1413762778069,
                          "to": 1413763441563
                        }
                      }
                    }
                  ]
                }
              }
            }
          }
        }
      }
    }
  },
  "size": 0
}
The results for this are:
{"took":16,"timed_out":false,"_shards":{"total":10,"successful":10,"failed":0},"hits":{"total":91224,"max_score":0.0,"hits":[]},"facets":{"83":{"_type":"date_histogram","entries":[
{"time":1413762600000,"count":1,"min":35.0,"max":35.0,"total":35.0,"total_count":1,"mean":35.0},
{"time":1413762900000,"count":1,"min":35.0,"max":35.0,"total":35.0,"total_count":1,"mean":35.0},
{"time":1413763200000,"count":1,"min":1.73E-322,"max":1.73E-322,"total":1.73E-322,"total_count":1,"mean":1.73E-322}
]}}}
Note the last entry: values very close to zero (1.73E-322). If I drill into
the actual documents in ES, though, there is no difference between the good
and bad indices.
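(By "drill into" I mean pulling a few raw documents side by side from the good and bad indices, roughly like this - a minimal sketch, and the _source field list is just an example; the raw "usage" values look the same either way:

# Fetch recent raw docs from both indices; each hit's _index shows which
# daily index it came from, so the stored "usage" values can be compared.
curl -s 'http://~host~:9200/hardware-2014.10.19,hardware-2014.10.20/_search?pretty' -d '{
  "size": 5,
  "_source": ["@timestamp", "host", "usage"],
  "sort": [ { "@timestamp": { "order": "desc" } } ],
  "query": {
    "query_string": {
      "query": "log_type:hardware AND type:diskdata AND host:~host~"
    }
  }
}'

)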