Kibana not recognizing integer fields

My logstash index's mapping looks like this:

Clearly, time_taken is a long (integer) field. But Kibana does not recognize it as a number field and throws errors like "No Compatible Fields: The "logstash-*" index pattern does not contain any of the following field types: number" when I try a range aggregation.

If I choose a terms aggregation, it shows that every field except @time_stamp is a string field.

I had the same issue before. The problem was that I had set up the Kibana index with the wrong Elasticsearch mappings. Later, when I changed the mappings in ES, Kibana remained wrong. I don't know if that is your issue, but check in Kibana/Settings and see whether the types for that index are correct.

Hi, I have the same problem as the main post. I also checked Kibana/Settings, and it seems that the long field is actually in the conflict state. @RyanD1, would you please explain a bit more about what you did with the mappings while Kibana remained wrong? I think I have hit the same problem.

Refreshing the fields in the index pattern using the "Refresh" button should help. If not, maybe try deleting your index pattern and re-adding it?

Yeah, so when I first set up Kibana, the existing ES index mapping was wrong, so Kibana read the wrong settings and copied them. Later I changed the ES mapping, but Kibana wasn't "smart" enough to detect the change and sync up. What I did was simply delete and re-add the index pattern in Kibana so it would read the right mappings.

Thanks, mate! Sorry for the late reply. Let me go through your workaround; I will post my results.

Yes, clicking Refresh has worked for me every time I have tried it. Thanks :smile:

I tried to simply delete and re-add the index in Kibana, but it didn't work. I also tried to reload the fields (that is what is meant by "Refresh", right?), and that didn't work either. My situation is this:
I have two fields, let's call them "status" and "duration", and both should be integers, but both are in the conflict state. Is there any possibility that the events flowing in from the logstash-forwarders actually carry those fields with different value types (i.e. sometimes as a string and sometimes as an integer)? Or is it really related to the mapping? I'm a bit of a newbie, and the mapping was done by someone else who has left.

You can check the actual mapping of the index to see the real types of your fields:

GET YOUR_IP:YOUR_PORT/logstash-xxx/_mapping
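
If the same field is mapped with different types in different daily logstash indices, that mismatch is exactly what Kibana reports as the conflict state. Here is a sketch that compares one field's mapping across all logstash indices, assuming the Python elasticsearch client and the pre-7.x response layout with doc types; "duration" is just the field name from the post above:

from elasticsearch import Elasticsearch

es = Elasticsearch(["YOUR_IP:YOUR_PORT"])

# Ask every logstash index how it mapped the "duration" field.
resp = es.indices.get_field_mapping(index="logstash-*", fields="duration")

# Print the type each index assigned; if the types differ between indices,
# that difference is what Kibana shows as a conflict.
for index_name, data in resp.items():
    for doc_type in data.get("mappings", {}).values():
        entry = doc_type.get("duration", {})
        for field in entry.get("mapping", {}).values():
            print(index_name, field.get("type"))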

Thanks, mate! Curling that endpoint told me that the fields are of type "long".

I'm facing the same issue. I removed all indexes and verified that all files and traces of the files are gone.

Here is my mapping:

{"my-stuff":{"mappings":{"filedetails":{"properties":{"atime":{"type":"double"},"ctime":{"type":"double"},"gid":{"type":"long"},"mode":{"type":"long"},"mtime":{"type":"double"},"name":{"type":"string"},"size":{"type":"long"},"uid":{"type":"long"}}}}}}

Is some other index that is part of logstash affecting this?

I have been unable to force mappings on this; yes, I'm a noob.

Here is the code generating the JSON, in Python:

import json
import os

# 'es' (an Elasticsearch client) and 'INDEX' are defined elsewhere in the
# script, e.g. es = Elasticsearch(); INDEX = 'my-stuff'.

def insert_record(js):
    # Serialize the record and index it under the 'filedetails' doc type.
    doc = json.dumps(js, ensure_ascii=True)
    es.index(index=INDEX, doc_type='filedetails', body=doc)

def store_file(f):
    # Stat the file once and collect the fields to index.
    st = os.stat(f)
    js = {
        'name': f,
        'mode': st.st_mode,
        # 'nlink': st.st_nlink,
        'uid': st.st_uid,
        'gid': st.st_gid,
        'size': st.st_size,
        'atime': st.st_atime,
        'mtime': st.st_mtime,
        'ctime': st.st_ctime,
    }
    insert_record(js)

Any feedback about how to get this working, or a working index-creation JSON, would be welcome. Not sure why something this simple is causing the issue; long isn't exactly a foreign variable type these days.
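
One thing that does pin the types down is creating the index with an explicit mapping before the first document goes in, since otherwise Elasticsearch guesses each type from the first value it sees. A minimal sketch, assuming the same Python client, the es/INDEX globals from the script above, and pre-5.x type names ("string" was split into "text"/"keyword" in 5.x):

# Explicit mapping so Elasticsearch never has to guess the field types.
# Types mirror the mapping shown earlier; adjust for newer ES versions.
file_mapping = {
    "mappings": {
        "filedetails": {
            "properties": {
                "name":  {"type": "string"},
                "mode":  {"type": "long"},
                "uid":   {"type": "long"},
                "gid":   {"type": "long"},
                "size":  {"type": "long"},
                "atime": {"type": "double"},
                "mtime": {"type": "double"},
                "ctime": {"type": "double"},
            }
        }
    }
}

# Create the index only if it does not exist yet; mappings of existing
# fields cannot be changed in place, only set at creation time.
if not es.indices.exists(index=INDEX):
    es.indices.create(index=INDEX, body=file_mapping)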

I would like to know: does Kibana keep the old fields after a refresh? It seems it does, and this causes a conflict in my indices. Any idea how to handle this?

Does the integer have to be indexed to be aggregated in Kibana? My integer property has index: no.

As far as I understand, the conflict occurs when the same field has different types across the indices matched by the pattern. So even if you refresh the fields in Kibana, the field stays in the conflict state. Kibana does NOT keep old fields after a refresh; the field list comes straight from Elasticsearch. The only way I have found to deal with it so far is to reindex :frowning:
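
If your version of the Python client ships the reindex helper, the copy itself can be short. A sketch, assuming placeholder index names logstash-old and logstash-fixed, and that the target index was created beforehand with the corrected integer mapping:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["YOUR_IP:YOUR_PORT"])

# Scroll every document out of the old index and bulk-index it into the
# new one; the new index's mapping then determines the field types.
helpers.reindex(es, source_index="logstash-old", target_index="logstash-fixed")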

Yes, it has to be indexed to be aggregated.

Why does a numeric field need to be indexed in order to be aggregated in Kibana? When storing metrics, we don't need to index numeric fields, since the only way we access them is via aggregations, which don't care whether numeric fields are indexed or not.
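
For context, here is the kind of mapping in question, written as a Python index-creation call. This is a sketch with hypothetical names (metrics-example, duration); whether doc values still back aggregations on a non-indexed numeric field depends on the Elasticsearch version and is an assumption worth verifying. Kibana's own refusal is what the issue linked below discusses:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Hypothetical metrics mapping: the numeric field is not indexed for search,
# on the assumption that doc values still back aggregations on it.
# Pre-5.x syntax shown; on 5.x+ use "index": False.
metrics_mapping = {
    "mappings": {
        "metrics": {
            "properties": {
                "duration": {"type": "long", "index": "no", "doc_values": True}
            }
        }
    }
}

es.indices.create(index="metrics-example", body=metrics_mapping)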

Here is a related issue: https://github.com/elastic/kibana/issues/3650