The problem is that almost all the fields have been mapped as strings, and I would like some fields, such as "bytes", to be converted to int so I can make better graphs in Kibana.
Is this possible? How?
Right now I'm working from the other direction and reading the Elasticsearch documentation to understand basic concepts such as cluster, node, index, shard, replica, mapping, etc.
You can use Logstash's mutate filter to convert one or more fields from strings to integers, which will cause Elasticsearch to index those fields as integers. You can also use an Elasticsearch index template to force such a mapping by setting an explicit mapping that overrides ES's guesses.
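For example, a minimal mutate filter sketch (assuming the field is named "bytes", as in the question; the exact convert syntax varies slightly between Logstash versions):

    filter {
      mutate {
        # Convert the "bytes" field from a string to an integer before it is
        # sent to Elasticsearch, so ES indexes it as a numeric type.
        convert => { "bytes" => "integer" }
      }
    }

And a rough sketch of the index template approach (the template name and index pattern here are placeholders; adjust them to your own index naming):

    curl -XPUT 'localhost:9200/_template/bytes_as_integer' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "bytes": { "type": "integer" }
          }
        }
      }
    }'

Note that a template only affects indices created after it is added; existing indices keep their current mapping.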
If I used the river plugin to import CSV data into Elasticsearch, is there any way to convert a field's type from string to integer? Here is my mapping:
{
  "sdkmetrics_csv_data": {
    "mappings": {
      "csv_type": {
        "properties": {
          "column1": {
            "type": "string"
          },
          "column2": {
            "type": "string"
          },
          "column3": {
            "type": "string"
          },
          "imported_at": {
            "type": "date",
            "format": "dateOptionalTime"
          }
        }
      }
    }
  }
}
I tried to use the following POST request to change the existing type for column2:
Thanks for the help. Since I was only experimenting with indexing CSV files with the river-csv plugin, it was not critical to delete the index the plugin had created. So, for anyone who would like to use the river-csv plugin and preserve field types: before indexing the data in the CSV file, 1) create the index with the mapping API, specifying the type for each column, and 2) then point the river-csv plugin at the newly created index when indexing the CSV file. A sketch of step 1 follows.
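A minimal sketch of creating the index with explicit types before starting the river (the index and type names are taken from the mapping above; that column2 is the numeric column is an assumption for illustration):

    curl -XPUT 'localhost:9200/sdkmetrics_csv_data' -d '{
      "mappings": {
        "csv_type": {
          "properties": {
            "column1":     { "type": "string" },
            "column2":     { "type": "integer" },
            "column3":     { "type": "string" },
            "imported_at": { "type": "date", "format": "dateOptionalTime" }
          }
        }
      }
    }'

Once the index exists with this mapping and the river-csv plugin is pointed at it, incoming values for column2 should be indexed as integers (Elasticsearch coerces numeric strings by default).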