Changing field types

Hi,

I'm new to ELK and I am testing this solution for storing logs of servers and network assets.

One of the collected logs comes from an ASA-5520 and is being structured with these grok patterns: https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/firewalls

The problem is that almost all the fields have been mapped as strings, and I would like some fields, such as "bytes", to be converted to integers so I can make better graphs in Kibana.

Is it possible? How?

In the meantime I'm reading the Elasticsearch documentation to understand basic concepts such as cluster, node, index, shard, replica, mapping, etc.

Regards,
Emerson

You can use Logstash's mutate filter to convert one or more fields from strings to integers, which will cause Elasticsearch to index those fields as integers. You can also use an Elasticsearch index template to force such a mapping, by setting an explicit mapping that overrides ES's guesses.
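A minimal sketch of the mutate approach, assuming your grok pattern has already extracted a "bytes" field (the hash form of convert shown here works on Logstash 2.x and later; older releases use the array form):

    filter {
      mutate {
        # Cast the grok-extracted "bytes" field to an integer
        # before the event is shipped to Elasticsearch.
        convert => { "bytes" => "integer" }
      }
    }

And a sketch of the index template approach; the template name and the logstash-* index pattern are assumptions, and this uses the legacy _template syntax current at the time of this thread:

    curl -XPUT 'localhost:9200/_template/firewall_types' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "bytes": { "type": "integer" }
          }
        }
      }
    }'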

If I start using mutate at some point, will Elasticsearch re-index all previous occurrences of that field with the new type?

No, it only affects future messages.

If I used the river plugin to import CSV data into Elasticsearch, is there any way to convert a field's type from string to integer? Here is my mapping:
{
  "sdkmetrics_csv_data": {
    "mappings": {
      "csv_type": {
        "properties": {
          "column1": {
            "type": "string"
          },
          "column2": {
            "type": "string"
          },
          "column3": {
            "type": "string"
          },
          "imported_at": {
            "type": "date",
            "format": "dateOptionalTime"
          }
        }
      }
    }
  }
}

I tried POSTing the following to change the existing type of column2:

{
  "csv_type": {
    "properties": {
      "column2": { "type": "integer", "store": "yes" }
    }
  }
}

which generates this error response:
{
  "error": "MergeMappingException[Merge failed with failures {[mapper [column2] of different type, current_type [string], merged_type [integer]]}]",
  "status": 400
}

No. You need to reindex.

Thx a lot for the answer. Could you please be more specific about how I could reindex in such a case? For example, could I use the mapping API to do it? Or something else?

Some ideas here.
Some clients also have this feature IIRC.

There is a reindex plugin. I never tested it though.
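For anyone reading later: Elasticsearch 2.3 and up ship a built-in _reindex API. A minimal sketch, assuming a target index sdkmetrics_csv_data_v2 has already been created with column2 mapped as an integer:

    curl -XPOST 'localhost:9200/_reindex' -d '{
      "source": { "index": "sdkmetrics_csv_data" },
      "dest":   { "index": "sdkmetrics_csv_data_v2" }
    }'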

Thx for the help. Since I was just experimenting with indexing CSV files via the river-csv plugin, it was not critical to delete the index that plugin had created. So, for anyone who would like to use the river-csv plugin and preserve types: before indexing the data in the CSV file, 1) create the index using the mapping API, where you can specify the type of each column, and 2) then point the river-csv plugin at the newly created index. A sketch of step 1 is below.
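A minimal sketch of step 1, reusing the index name, type name, and fields from the mapping earlier in this thread, with column2 switched to integer (1.x-era string/date syntax):

    # Create the index with explicit types before starting the river
    curl -XPUT 'localhost:9200/sdkmetrics_csv_data' -d '{
      "mappings": {
        "csv_type": {
          "properties": {
            "column1":     { "type": "string" },
            "column2":     { "type": "integer", "store": "yes" },
            "column3":     { "type": "string" },
            "imported_at": { "type": "date", "format": "dateOptionalTime" }
          }
        }
      }
    }'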