Format bytes in templates pattern

Hi All,

Can someone explain how to add the bytes format to my index template? A new index is created each day, and I don't want to have to specify on each index that OUT_BYTES and IN_BYTES are byte fields. Here is my template. One other thing: all my new indices are created at 02:00. I checked the server time and it is correct — is there a solution for that?

Thank you in advance for your help

PUT _template/template1
{
  "index_patterns": ["data-*"],
  "order": 0,
  "settings": {
    "index.refresh_interval": "10s"
  },
  "mappings": {
    "_doc": {
      "dynamic_templates": [
        {
          "geo_fields": {
            "match": "*_IP_LOCATION",
            "mapping": {
              "type": "geo_point",
              "norms": false
            }
          }
        },
        {
          "ip_fields": {
            "match": "IPV4",
            "match_mapping_type": "string",
            "mapping": {
              "type": "ip",
              "norms": false
            }
          }
        },
        {
          "port_fields": {
            "match": "*PORT",
            "match_mapping_type": "long",
            "mapping": {
              "type": "integer",
              "norms": false
            }
          }
        },
        {
          "timestamp_fields": {
            "match": "*timestamp",
            "match_mapping_type": "string",
            "mapping": {
              "type": "date",
              "norms": false,
              "format": "D-M-YYYY, HH:mm:ss"
            }
          }
        },
        {
          "bytes_fields": {
            "match": "*BYTES",
            "match_mapping_type": "long",
            "mapping": {
              "type": "integer",
              "norms": false
            }
          }
        },
        {
          "vlan_fields": {
            "match": "VLAN",
            "match_mapping_type": "long",
            "mapping": {
              "type": "short",
              "norms": false
            }
          }
        },
        {
          "tos_fields": {
            "match": "TOS",
            "match_mapping_type": "long",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        },
        {
          "protocol_fields": {
            "match": "PROTOCOL",
            "match_mapping_type": "long",
            "mapping": {
              "type": "short",
              "norms": false
            }
          }
        },
        {
          "l7proto_fields": {
            "match": "L7_PROTO",
            "match_mapping_type": "long",
            "mapping": {
              "type": "short",
              "norms": false
            }
          }
        },
        {
          "pkts_fields": {
            "match": "PKTS",
            "match_mapping_type": "long",
            "mapping": {
              "type": "integer",
              "norms": false
            }
          }
        },
        {
          "ipprotocol_fields": {
            "match": "IP_PROTOCOL_VERSION",
            "match_mapping_type": "long",
            "mapping": {
              "type": "short",
              "norms": false
            }
          }
        },
        {
          "strings_as_keywords": {
            "match_mapping_type": "string",
            "unmatch": "IPV4",
            "mapping": {
              "type": "text",
              "norms": false,
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      ]
    }
  }
}

Hey,

I am unsure what you are referring to as a bytes field here. I think you do not mean the byte datatype, as that can only store numbers from -128 to 127.

If you just want to store a byte counter, why aren't you using a long?
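For reference, that would only require changing the `type` in the `bytes_fields` dynamic template from your post above; everything else stays the same:

```
{
  "bytes_fields": {
    "match": "*BYTES",
    "match_mapping_type": "long",
    "mapping": {
      "type": "long",
      "norms": false
    }
  }
}
```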

I have one suspicion: you may want an automatic conversion between a byte count like 1234 and a human-readable version like 12M, but that is something you have to do in your ingestion layer.
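As a sketch of what that could look like in the ingestion layer, an ingest pipeline with a script processor could write a human-readable companion field. The pipeline id `netflow-bytes` and the `IN_BYTES_HUMAN` target field are made up for illustration:

```
PUT _ingest/pipeline/netflow-bytes
{
  "description": "Sketch: add a human-readable companion field for IN_BYTES",
  "processors": [
    {
      "script": {
        "if": "ctx.IN_BYTES != null",
        "source": """
          // scale the raw counter down and pick a unit
          String[] units = new String[] {"b", "kb", "mb", "gb"};
          double v = ctx.IN_BYTES;
          int i = 0;
          while (v >= 1024 && i < units.length - 1) { v /= 1024; i++; }
          // round to one decimal place and store as a string
          ctx.IN_BYTES_HUMAN = (Math.round(v * 10) / 10.0) + " " + units[i];
        """
      }
    }
  ]
}
```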

--Alex

Hi Alexander,

Exactly. I receive netflow data, and I always have to change the field format manually to Bytes and set the pattern to 0,0 b.
Is that possible directly with a dynamic template mapping?

What about the long format for bytes? Can you suggest something else?

Thank you in advance

Oh, it seems this is rather a Kibana issue than an Elasticsearch one, since it is about formatters. You might want to bring this over to the Kibana forum (including the super useful screenshots).

If you want to do this at index time with Elasticsearch, you would need to write a second, string-based field, but I think you might be fine just doing this in Kibana for presentation purposes (note, I am everything but a Kibana wizard).
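For the Kibana side, the Bytes field formatter (configured under Management → Index Patterns by editing the field) with a numeral.js pattern like `0,0 b` is what produces the human-readable display. If I remember correctly, the UI stores this on the index pattern saved object in a `fieldFormatMap` attribute, roughly like the following — treat the exact shape as an assumption and prefer setting it through the UI:

```
"fieldFormatMap": "{\"IN_BYTES\": {\"id\": \"bytes\", \"params\": {\"pattern\": \"0,0 b\"}}, \"OUT_BYTES\": {\"id\": \"bytes\", \"params\": {\"pattern\": \"0,0 b\"}}}"
```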

Thank you very much Alexander.

I have been discovering Elasticsearch and Kibana for two months. It's very hard, but now everything is better (not best :wink: )

Another thing:
Our server is currently in development. For production we will use one Elasticsearch VM instance, but 2x RAID 5 datastores, to keep a redundant copy of the data separate from the Elasticsearch VM.

Is it possible to add data paths in elasticsearch.yml to hold the replicas?

I'm confused, because I think that adding 2x data paths splits the data and does not make it redundant.

What do you think is the best method?

If you are only using one instance, you cannot make use of any replicas. They are also useless on the same machine: when that machine goes down, you lose both copies.
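To illustrate the multiple-data-path point: entries under `path.data` in elasticsearch.yml spread shard data across disks, they do not mirror it. So a configuration like the following (with hypothetical mount points) stripes data rather than replicating it:

```
# elasticsearch.yml — sketch, mount points are made up
# Shards are distributed across these paths; losing one path loses data.
path.data:
  - /mnt/datastore1/elasticsearch
  - /mnt/datastore2/elasticsearch
```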

Hope this helps!

Not exactly, no? What if the data is on a different RAID 5 datastore than the Linux ES VM?
Could I then just reinstall ES in another VM and point it at the right data path?

Do you think it is better to have two separate Elasticsearch instances on 2x distinct Linux VMs?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.