We are facing a type mismatch issue between the Elastic Stack ingest pipeline and Kibana. Kibana is not picking the fields up as type number; it shows them as strings. Here is what we've tried:
Initially we used the ingest node grok processor to parse data from a CSV file with the pattern type DATA; the index field type in Kibana is shown as string.
We then changed the patterns to type NUMBER (for values such as 1.0 and 0.003), but then the data is not parsed at all.
We then changed the pattern to BASE64; the ingest processor node parses the data, but Kibana still shows the fields as type string. We tried a new ingest pipeline and new field names, but the values in Kibana are still shown as strings.
Any help or suggestions on why the ingest pipeline grok processor sometimes does not pick up NUMBER, and why Kibana then does not show the type as number, would be appreciated.
At which level does the reindex need to be done? Is it within the ingest grok processor?
Is Kibana picking up the default data type (which is string)? We tried deleting the index and re-creating it, but we still get the data type as string.
Are there any pointers or examples you can share? Pointers to specific documentation would also be helpful.
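For reference, one way to stop Elasticsearch from guessing string types is to create the index with an explicit mapping before indexing anything. A minimal sketch for 5.x, assuming a hypothetical index named myindex, a mapping type named doc, and the numeric fields Count and Average:

```
PUT myindex
{
  "mappings": {
    "doc": {
      "properties": {
        "Count":   { "type": "integer" },
        "Average": { "type": "float" }
      }
    }
  }
}
```

With an explicit mapping in place, the field type no longer depends on what the first indexed document happens to contain.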
That should work.
Once you understand what I wrote as an example, apply it to your documents and generate a correct mapping.
Then index your source (whatever it is) into this index, myindex, or use the reindex API to read from your bad index and write into myindex.
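The reindex step can be sketched like this (the index names oldindex and my_pipeline are placeholders; the dest.pipeline setting is only needed if the documents must be re-parsed through the ingest pipeline on the way in):

```
POST _reindex
{
  "source": { "index": "oldindex" },
  "dest":   { "index": "myindex", "pipeline": "my_pipeline" }
}
```

Because myindex already has the correct mapping, the reindexed values land with the numeric types.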
Note that if you had been sending the correct types in the JSON in the first place, that would be even better; i.e., index numeric values as JSON numbers ("Count": 3) rather than as strings ("Count": "3").
We have now removed the indices completely and added the following ingest grok pattern, which works fine as long as data is available:
{
  "description": "test ingest pipeline data",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "^%{DATA:abc.timestamp},?%{DATA:Count:int}?, ?%{DATA:Average:float}?"
        ]
      }
    }
  ]
}
However, whenever a field has no data, the pipeline throws an ingest error for the empty string. Do you know if there is a better way to ignore empty strings (other than the trailing '?')? Also, is there a way to turn on debugging for ingest pipelines?
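For debugging, the pipeline simulate API with verbose output shows the result of each processor step, including any grok failure. A sketch, where the pipeline body and the sample document are placeholders:

```
POST _ingest/pipeline/_simulate?verbose
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [ "^%{NUMBER:Count:int}$" ]
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "42" } }
  ]
}
```

Feeding it a sample line that fails in production usually pinpoints which part of the pattern does not match.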
The issue we are now facing is on the Kibana front, where the variables Try and ID are shown as text rather than numbers.
We did multiple tests after clearing out the indices and noticed that whenever the values are present, they are treated as numbers in Kibana; when the values are missing, we think the field is treated as an empty string, and Kibana then represents it as text. Is there a way these variables can always be treated as numbers?
In the grok processor we did try representing them as NUMBER rather than DATA, but it throws an exception; we think NUMBER always expects a value to be present.
We are running ELK version 5.6.0. Do you know if there is a way around this?
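One possible workaround (a sketch, not verified on 5.6.0): give Try and ID an explicit numeric mapping so the Kibana type no longer depends on the first document indexed, for example:

```
PUT myindex
{
  "mappings": {
    "doc": {
      "properties": {
        "Try": { "type": "integer" },
        "ID":  { "type": "integer" }
      }
    }
  }
}
```

Then, in the grok pattern, wrap each numeric capture in an optional non-capturing group, e.g. `(?:%{NUMBER:Try:int})?`, so that an empty CSV column simply leaves the field unset instead of making NUMBER fail or producing an empty string.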