What are the possible datatypes of the fields of an index, like keyword, ip, date_nanos, etc.?
I guess if a field's type is keyword then it is stored in doc_values, so we can sort on it?
regards
shini
Are you looking for this?
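(On the second part of the question: keyword fields do have doc_values enabled by default, so sorting and aggregating on them works without extra settings. A minimal sketch in Kibana Dev Tools, with a made-up index name test-index and field status:)

# keyword fields get doc_values by default
PUT test-index
{
  "mappings": {
    "properties": {
      "status": { "type": "keyword" }
    }
  }
}

# sorting on the keyword field works out of the box
GET test-index/_search
{
  "sort": [ { "status": "asc" } ]
}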
yes sir,
in the below mapping for "ip" : "172.17.18.19",
what does "type" : "text" indicate?
(This definition comes above the "fields" : { section.)
"ip" : {
"type" : "text",`
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
thanks and regards
shini
If you don't create a mapping for your fields, Elasticsearch will create the mapping the first time it indexes a field; when doing this it will try to guess the data type for that field based on its value.
The mapping you have means that Elasticsearch mapped that field as a string, and when doing this it maps the field first as a text field and then also as a keyword, using the fields object to create a sub-field with the suffix .keyword.
So, in your case you have a field named ip which is mapped as a text field, and a field named ip.keyword which is mapped as a keyword field.
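As a rough illustration (the index name my-logs is just a placeholder), the analyzed ip field is what full-text matching runs against, while the ip.keyword sub-field is what you would sort or aggregate on:

# match uses the analyzed text field; sort and aggs use the keyword sub-field
GET my-logs/_search
{
  "query": { "match": { "ip": "172.17.18.19" } },
  "sort": [ { "ip.keyword": "asc" } ],
  "aggs": {
    "ips": { "terms": { "field": "ip.keyword" } }
  }
}

Trying to sort or aggregate directly on ip would be rejected, because text fields don't have doc_values; Elasticsearch asks you to either enable fielddata or use the .keyword sub-field instead.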
sir,
So, does this mean that the value of ip is finally stored as the keyword type itself?
Because there is a difference between the value being stored as text and being stored as keyword, it will be helpful to know how it is finally stored.
As per the documentation:
you can index strings to both text and keyword fields. However, text field values are analyzed for full-text search while keyword strings are left as-is for filtering and sorting.
thanks and regards
shini
ip is indexed as a text field.
ip.keyword is indexed as a keyword field.
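In practice the difference shows up in how you query the two fields; a sketch, again with a placeholder index name:

# analyzed, full-text style matching against the text field
GET my-logs/_search
{
  "query": { "match": { "ip": "172.17.18.19" } }
}

# exact, unanalyzed matching against the keyword sub-field
GET my-logs/_search
{
  "query": { "term": { "ip.keyword": "172.17.18.19" } }
}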
ok sir...
then it will be better to modify the mappings to suit our needs after they get created automatically, for efficient management of indexes.
thanks and regards
shiny
Actually you want to create the index with the mapping before you start indexing data / documents.
The most common way to do this is with an index template that contains the correct mapping.
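A minimal sketch of such a template, assuming a composable index template (available from Elasticsearch 7.8 onwards) and placeholder names; note that an IP address is usually better mapped with the dedicated ip datatype rather than text/keyword:

# any new index matching my-logs-* will be created with this mapping
PUT _index_template/my-logs-template
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "ip":      { "type": "ip" },
        "message": { "type": "text" },
        "level":   { "type": "keyword" }
      }
    }
  }
}

With this in place, the mapping is applied before the first document is indexed, so no field is left to dynamic guessing.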
I was uploading a sample from my log file using Kibana and generating the index and pipeline, and then modifying them according to requirements to capture the actual log data from Filebeat.
thanks and regards
shini