Mapping type issues regarding migration from 1.6.0 to 2.2.0


(cto@TCS) #1

Hi,

We have an application that stores various device data in Elasticsearch 1.6 in our production environment.

In our system, each device has its own mapping type. A sample of the mappings is as follows:

Sample 1:
{
  "device1": {
    "properties": {
      "attribute": {
        "properties": {
          "speed": {
            "type": "long"
          },
          //.......
          //......
Sample 2:
{
  "device2": {
    "properties": {
      "attribute": {
        "properties": {
          "speed": {
            "type": "string"
          },
          //.......
          //......

Our system is very generic, and we do not control which fields appear under "attribute" or what their types are.
But this is creating a huge issue for the migration, because fields with the same name can no longer have different mappings across types in the same index. We have a huge amount of production data and we are simply not able to migrate it. :disappointed:

Please suggest some way in which we can go ahead with the migration.
Thanks in advance!


(Christian Dahlqvist) #2

Migrating to Elasticsearch 2.x will unfortunately require you to change this and reindex the data.

One way to allow flexible attributes without running into mapping conflicts is to introduce a naming convention that indicates the type of each field. This can be done through a suffix, e.g. 'speed_i' is indexed as an integer and 'speed_s' as an unanalysed string, and it naturally extends to other data types. The approach relies on dynamic mappings and generally requires a default treatment, e.g. an unanalysed string mapping, for all fields that do not follow the naming convention.
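As an illustration, here is a minimal sketch of what such dynamic templates could look like in 2.x. The "device" type name, the template names and the '_i' / '_s' suffixes are only example choices, not a prescribed scheme:

{
  "mappings": {
    "device": {
      "dynamic_templates": [
        {
          "integers_by_suffix": {
            "match": "*_i",
            "mapping": { "type": "integer" }
          }
        },
        {
          "strings_by_suffix": {
            "match": "*_s",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        },
        {
          "catch_all_strings": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ]
    }
  }
}

Templates are evaluated in order, so the catch-all entry at the end maps any string field that does not follow the convention as not_analyzed by default.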


(cto@TCS) #3

Thanks a lot for the reply :slight_smile:

