[Solved] Float recognized as text

Hi everyone!

I'm sending JSON data from Logstash, which is visible in Kibana:

{
          "modu" => "LORA",
          "data" => "",
           "ack" => false,
          "freq" => 868.1,
          "codr" => "4/5",
          "opts" => "02",
          "datr" => "SF7BW125",
      "@version" => "1",
          "host" => "192.168.69.241",
          "mhdr" => "8002000000011700",
          "seqn" => 23,
        "appeui" => "00-00-00-00-00-00-00-10",
     "timestamp" => "2017-03-23T12:58:49.727570Z",
       "headers" => {
                   "http_accept" => "text/plain",
                  "content_type" => "application/json",
                  "request_path" => "/Lora/Nemeus",
                  "http_version" => "HTTP/1.1",
               "http_connection" => "close",
        "http_transfer_encoding" => "",
                "request_method" => "PUT",
                     "http_host" => "192.168.69.112:8080",
                   "request_uri" => "/Lora/Nemeus",
                "content_length" => "8"
    },
          "rssi" => -15,
           "cls" => 0,
          "rfch" => 0,
          "tmst" => 2852202491,
           "adr" => false,
        "deveui" => "70-b3-d5-32-60-00-01-e6",
    "@timestamp" => 2017-03-23T12:59:51.044Z,
          "size" => 0,
          "port" => 162,
          "lsnr" => 10.0,
          "chan" => 0
}

Originally, the "lsnr" and "freq" fields were strings, but in the Logstash conf file I've added this conversion:

input { http { } }
filter {
  mutate {
    convert => {
      "lsnr" => "float"
      "freq" => "float"
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

So in the Logstash logs, as shown above, the data are float values, but once they're sent to Elasticsearch, lsnr and freq show up as text again in Kibana. I've tried to rebuild the index pattern, without success.

I've also tried to change the type of those two fields under the Kibana index pattern -> controls -> format, but float is not offered.

I don't know why the parsing is correct in Logstash while in Kibana those two values are recognized as text again.

Thanks

Hi there, so I think the problem is that when Elasticsearch originally created your index, you sent it documents with those fields formatted as strings. So now ES will map all subsequent documents' fields using that string data type. According to the docs, you need to update this mapping by creating a new index with the correct mappings and reindexing all of your documents there. To do this with your documents, you'll just need to double-check that all documents now have lsnr and freq formatted as floats, and then create a new index with them.
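If it helps to confirm the diagnosis, you can inspect the mapping ES created. This is just a sketch: I'm assuming your index uses the default logstash-YYYY.MM.DD naming, since your Logstash output doesn't set an index name.

GET /logstash-2017.03.23/_mapping

# In ES 5.x, a dynamically mapped string field comes back roughly as
#   "lsnr": { "type": "text", "fields": { "keyword": { "type": "keyword" } } }
# whereas what you want is "type": "float".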

You mentioned you already tried this but it didn't work. Just to double-check: did you create an entirely new index? Are you sure none of the documents you're reindexing have lsnr and freq still formatted as strings? Sorry for the dumb questions, but off the top of my head those are two possibilities that could be the source of the problem.

Thanks,
CJ


Hi, thanks for your answer. There are no dumb questions; I'm a newbie with Kibana :slight_smile:

I'm not sure exactly what happened with the index pattern. I removed the current one, then used the default parameters to "configure an index pattern" based on the timestamp, and clicked on create.

All my old data are still here, including the first entries of lsnr and freq that were sent by Logstash as strings.

To create a new index, should I define the whole index mapping in the Dev Tools console? For now I've only entered this:

PUT /lora/nemeus/1
{
  "properties": {
    "lsnr": {
      "type": "float",
      "fielddata": true
    }
  }
}

Thanks,
Nabil

Hi Nabil, you have a couple of options here. You can use the ES API directly, e.g. with Dev Tools, or you can use Logstash.

The ES API option

  1. Create a new index with the correct field datatype mapping.
  2. Use the reindex API to reindex the data from the old index to the new index.
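For example, in Dev Tools (a sketch, not tested against your setup; lora_v2 is a placeholder name, and the type name logs matches what Logstash sends by default in 5.x):

PUT /lora_v2
{
  "mappings": {
    "logs": {
      "properties": {
        "lsnr": { "type": "float" },
        "freq": { "type": "float" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "lora" },
  "dest":   { "index": "lora_v2" }
}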

The Logstash option

To create a new index, I think you just need to specify a different index name in your Logstash configuration. You can then use Logstash to migrate the data from your existing index to the new one, converting the fields from string to float along the way, as sketched below.
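Here's a rough sketch of what that config could look like (hedged: I haven't tested this against your data, and lora_v2 is just a placeholder index name). It uses the elasticsearch input plugin to read the old documents back out:

input {
  # read all documents from the old index
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "lora"
  }
}

filter {
  # make sure the fields are floats before they reach the new index
  mutate {
    convert => {
      "lsnr" => "float"
      "freq" => "float"
    }
  }
}

output {
  # write to a brand-new index so ES creates fresh mappings
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "lora_v2"
  }
}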

Does all of this make sense?

Thanks,
CJ

Hi,

I've tried some things in the dev console and it looks like I've broken Kibana :confused:

Many of those commands returned errors, but the problem is that when Logstash receives a JSON document and sends it to Elasticsearch, I get this log from Elasticsearch:

[2017-03-29T10:15:59,560][DEBUG][o.e.a.b.TransportShardBulkAction] [Z2vP2Os] [logstash-2017.03.29][4] failed to execute bulk item (index) index {[logstash-2017.03.29][logs][AVsZIsBINGBSqYJy1mMl], source[{"modu":"LORA","data":"MDEyMzQ1Njc5OA==","ack":false,"freq":868.5,"codr":"4/5","opts":"","datr":"SF7BW125","@version":"1","host":"192.168.69.241","mhdr":"8063356502001900","seqn":25,"appeui":"00-00-00-00-00-00-00-10","timestamp":"2017-03-29T08:14:46.991463Z","headers":{"http_accept":"text/plain","content_type":"application/json","request_path":"/lora/nemeus","http_version":"HTTP/1.1","http_connection":"close","http_transfer_encoding":"","request_method":"PUT","http_host":"192.168.69.112:8080","request_uri":"/lora/nemeus","content_length":"8"},"rssi":-21,"cls":0,"rfch":0,"tmst":1796067979,"adr":false,"deveui":"36-30-38-35-02-65-35-63","@timestamp":"2017-03-29T08:15:59.551Z","size":16,"port":2,"lsnr":8.0,"chan":2}]}
org.elasticsearch.indices.TypeMissingException: type[logs] missing
        at org.elasticsearch.index.mapper.MapperService.documentMapperWithAutoCreate(MapperService.java:638) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.index.shard.IndexShard.docMapper(IndexShard.java:1631) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:510) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:196) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:201) ~[elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:348) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.index(TransportShardBulkAction.java:155) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.handleItem(TransportShardBulkAction.java:134) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:120) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onPrimaryShard(TransportShardBulkAction.java:73) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:76) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnPrimary(TransportWriteAction.java:49) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:914) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:884) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:327) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:262) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:864) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:861) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1652) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:873) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:92) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:279) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:258) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:250) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:610) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:596) [elasticsearch-5.2.2.jar:5.2.2]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-5.2.2.jar:5.2.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.IllegalStateException: trying to auto create mapping, but dynamic mapping is disabled
        ... 34 more

I don't know how to fix this problem.

Also, I would now like to try the Logstash option. To specify a different index name, should I add a setting inside the output/elasticsearch braces?

Thanks,

Nabil

and here are the commands I've typed in the console:
PUT /lora/nemeus/1
{
  "mappings": {
    "user": { 
      "_all":       { Preformatted text"enabled": false  }, 
      "properties": {
        "title":    { "type": "text"  }
      }
    }
  },
  "lsnr": { "type": "float" },
  "blogpost": { 
    "_all":       { "enabled": false  }, 
    "properties": { 
      "title":    { "type": "text"  }, 
      "body":     { "type": "text"  }, 
      "user_id":  { "type":   "keyword" },
        "lsnr":     { "type": "float" },
      "created":  {
        "type":   "date", 
        "format": "strict_date_optional_time||epoch_millis"
      }
    }
  }
}
PUT lora_index
{
  "mappings": {
    "_all":       { "enabled": true  },
    "lsnr": { "type": "float" }
  }
}
PUT /lora/nemeus/1
{
    "mappings": {
      "logstash-2017.03.28":{
        "properties": {
          "lsnr":{"type": "float"}
        }
      }
    }
}
PUT data/_settings
{
  "index.mapper.dynamic":true
}
POST _reindex
{
  "source": {
    "index": "lora",
    "type":"nemeus"
  },
  "dest": {
    "index": "new_lora",
    "op_type": "create"
  }
}

If you are just playing around and don't have a huge index already, I would just delete it:

DELETE /lora

and start from scratch.

You tried to put the mappings in a document (PUT /lora/nemeus/1 means: put something into index lora, with type nemeus and id 1).

You should PUT /lora to define the index mappings; check the link to the mappings documentation that CJ posted above.
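Something like this, for example (just a sketch for ES 5.x; the type name logs matches what your Logstash output is sending according to the error log):

PUT /lora
{
  "mappings": {
    "logs": {
      "properties": {
        "lsnr": { "type": "float" },
        "freq": { "type": "float" }
      }
    }
  }
}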


Hi,

Indeed, for now I don't have a huge index, so I've deleted it and tried to put the mapping as CJ's link proposed.

But I still don't receive any data in Kibana; when Logstash receives a packet it shows this log:

[2017-03-29T11:16:24,559][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"lora", :_type=>"logs", :_routing=>nil}, 2017-03-29T09:16:24.538Z 192.168.69.241 %{message}], :response=>{"index"=>{"_index"=>"lora", "_type"=>"logs", "_id"=>"AVsZWhBnNGBSqYJy1mM6", "status"=>404, "error"=>{"type"=>"type_missing_exception", "reason"=>"type[logs] missing", "index_uuid"=>"vVU8_Y0HSnioWXJzVxgz-g", "index"=>"lora", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"trying to auto create mapping, but dynamic mapping is disabled"}}}}}
{
          "modu" => "LORA",
          "data" => "yv5FZA==",
           "ack" => false,
          "freq" => 868.3,
          "codr" => "4/5",
          "opts" => "",
          "datr" => "SF7BW125",
      "@version" => "1",
          "host" => "192.168.69.241",
          "mhdr" => "8063356502003300",
          "seqn" => 51,
        "appeui" => "00-00-00-00-00-00-00-10",
     "timestamp" => "2017-03-29T09:15:10.433284Z",
       "headers" => {
                   "http_accept" => "text/plain",
                  "content_type" => "application/json",
                  "request_path" => "/lora/nemeus",
                  "http_version" => "HTTP/1.1",
               "http_connection" => "close",
        "http_transfer_encoding" => "",
                "request_method" => "PUT",
                     "http_host" => "192.168.69.112:8080",
                   "request_uri" => "/lora/nemeus",
                "content_length" => "8"
    },
          "rssi" => -10,
           "cls" => 0,
          "rfch" => 0,
          "tmst" => 532526243,
           "adr" => false,
        "deveui" => "36-30-38-35-02-65-35-63",
    "@timestamp" => 2017-03-29T09:16:24.538Z,
          "size" => 8,
          "port" => 2,
          "lsnr" => 9.5,
          "chan" => 1
}

It says dynamic mapping is disabled, but I've enabled it from the Kibana console:

PUT data/_settings
{
  "index.mapper.dynamic": true
}

thanks,
Nabil

Can you also paste your mapping? GET /lora/nemeus/_mapping

I get this:

{
  "lora": {
    "mappings": {
      "nemeus": {
        "properties": {
          "lsnr": {
            "type": "float"
          }
        }
      }
    }
  }
}

This looks OK. By the way, wrap any code in a code block by putting three backticks ``` on a new line before and after the code, like this:

```
my code
```

Now, type[logs] missing: this is one of the problems. You have a mapping defined for type nemeus, but your Logstash config tries to index into a type called logs.
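One way to line those up (just a sketch; document_type is an option of the elasticsearch output plugin) is to tell Logstash to index into the type you actually mapped:

output {
  elasticsearch {
    hosts         => ["localhost:9200"]
    index         => "lora"
    document_type => "nemeus"
  }
}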

And yeah, it seems your dynamic mapping is disabled. That is enabled by default, so you must have disabled it somehow...

Can you GET /lora/_settings?

Here is what I get:

{
  "lora": {
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "lora",
        "mapper": {
          "dynamic": "false"
        },
        "creation_date": "1490779988687",
        "number_of_replicas": "1",
        "uuid": "pIRtfsbQQFGmRdXbEgVkYQ",
        "version": {
          "created": "5020299"
        }
      }
    }
  }
}

Indeed, the dynamic mapping seems disabled. I've re-enabled it:

PUT /lora/_settings
{
  "index.mapper.dynamic":true 
}

I'm still getting the same warning message from Logstash and Elasticsearch even after setting dynamic mapping to true. If I do a GET:

{
  "lora": {
    "settings": {
      "index": {
        "number_of_shards": "5",
        "provided_name": "lora",
        "mapper": {
          "dynamic": "true"
        },
        "creation_date": "1490788044439",
        "number_of_replicas": "1",
        "uuid": "IGUUFfkFSrCLFOu-D6I2pg",
        "version": {
          "created": "5020299"
        }
      }
    }
  }
}

What if you go with a different index altogether? Try changing your index name to lora1 in Logstash.
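In your Logstash config that would just be the index option on the elasticsearch output (a sketch, assuming the same localhost setup you showed earlier):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "lora1"
  }
}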

I get the same errors when trying with a different index name...

I think I broke something in the whole configuration, and I don't know how to reset everything...

This simple example:

is leading me to the same kind of exception I have above on Elasticsearch. When I write "hello world" to Logstash, here is the response I get:

[2017-03-30T10:08:06,343][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.03.30", :_type=>"logs", :_routing=>nil}, 2017-03-30T08:08:06.329Z fabrice-HP-EliteDesk-800-G1-USDT hello world], :response=>{"index"=>{"_index"=>"logstash-2017.03.30", "_type"=>"logs", "_id"=>"AVseQePCuCTn4U0vFdU-", "status"=>404, "error"=>{"type"=>"type_missing_exception", "reason"=>"type[logs] missing", "index_uuid"=>"-8jC5r4jT4ixmd0Eqe2PIw", "index"=>"logstash-2017.03.30", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"trying to auto create mapping, but dynamic mapping is disabled"}}}}}
{
    "@timestamp" => 2017-03-30T08:08:06.329Z,
      "@version" => "1",
          "host" => "fabrice-HP-EliteDesk-800-G1-USDT",
       "message" => "hello world"
}

If this is your dev machine, I would suggest just reinstalling; that might be the fastest way to a solution.


Indeed, for now it was just a dev machine, so I've reinstalled the whole thing, and Kibana finally recognized "freq" and "lsnr" as floats. Thanks :slight_smile:
