Scientific notation in incoming data

Dear colleagues,
Do you know the maximum possible number of fields for an index?
In my case I have over 200 fields configured in fluentd, but in the Kibana index pattern I only see Fields (45). Is there some limitation on the number of fields?
I am also using dynamic mapping:

 "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "ID" : {
          "match" : "ID",
          "mapping" : {
            "type" : "string", "index" : "not_analyzed"
          }
        }
      },
      {
        "Name" : {
          "match" : "Name",
          "mapping" : {
            "type" : "string", "index" : "not_analyzed"
          }
        }
      },
      {
        "ip" : {
          "match" : "ip",
          "mapping" : {
            "type" : "ip", "index" : "not_analyzed"
          }
        }
      },
      {
        "string_fields" : {
          "match" : "*",
          "mapping" : {
            "type" : "float", "index" : "not_analyzed"
          }
        }
      }],
      "properties" : {
        "@timestamp" : { "type" : "date" }
      }
    }
  }

Hi Dmitriy,

In Kibana, on the Index Patterns page (under Settings), have you tried refreshing your index pattern? That should pick up any fields that are present in your Elasticsearch indices (for the chosen pattern) that are not yet known to Kibana.

Cheers,

Shaunak

Yes, I did, but it is still the same 45.
Is it an Elasticsearch limitation? Or Kibana...?

Both Elasticsearch and Kibana should be able to handle 200 fields, no problem. Can you do a GET <index name> Elasticsearch API call, where <index name> is a specific index that should contain the 200 fields, and paste the output here?
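For example (assuming Elasticsearch is listening on the default localhost:9200; <index name> stays a placeholder for your actual index):

    curl -XGET 'http://localhost:9200/<index name>?pretty'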

OK, but what is the max number of fields?

Below is the case with a fluentd config like this:
keys Time,ID,Name,ip,val_1,val_2,val_3,val_4,.....,val_112

    "dynamic_templates": [
      {
        "ID": {
          "mapping": {
            "index": "not_analyzed",
            "type": "string"
          },
          "match": "ID"
        }
      },
      {
        "Name": {
          "mapping": {
            "index": "not_analyzed",
            "type": "string"
          },
          "match": "Name"
        }
      },
      {
        "ip": {
          "mapping": {
            "index": "not_analyzed",
            "type": "ip"
          },
          "match": "ip"
        }
      },
      {
        "string_fields": {
          "mapping": {
            "index": "not_analyzed",
            "type": "float"
          },
          "match": "*"
        }
      }
    ],
    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "ID": {
        "type": "string",
        "index": "not_analyzed"
      },
      "Name": {
        "type": "string",
        "index": "not_analyzed"
      },
      "ip": {
        "type": "ip"
      },
      "val_1": {
        "type": "float"
      },
      "val_10": {
        "type": "float"
      },
      "val_100": {
        "type": "float"
      },
      "val_101": {
        "type": "float"
      },
      "val_102": {
        "type": "float"
      },
      "val_103": {
        "type": "float"
      },
      "val_104": {
        "type": "float"
      },
      "val_105": {
        "type": "float"
      },
      "val_106": {
        "type": "float"
      },
      "val_107": {
        "type": "float"
      },
      "val_108": {
        "type": "float"
      },
      "val_109": {
        "type": "float"
      },
      "val_11": {
        "type": "float"
      },
      "val_110": {
        "type": "float"
      },
      "val_111": {
        "type": "float"
      },
      "val_112": {
        "type": "float"

But in the case of a fluentd config like this:

keys Time,ID,Name,ip,MemUsageAvg(%),MemUsageMin(%),MemUsageMax(%),MemUsageTot(%),MemUsageCnt(count),CpuUsageAvg(%),CpuUsageMin(%),CpuUsageMax(%).......
Elasticsearch stays empty. Why?

I'm concerned about it, too.

I need to keep 1501 fields in one index.
Here is an example row of the data to be imported via fluentd:

head -1 2016-09-01.csv
2016-09-01,121212,ATS,1.1.1.1, 0.0, 0.0, 518400.0, 0.0, 84.61, 88.71, 2266186.8200000008, 25920.0, 0.0, 7.0, 19.0, 680.0, 231982.0, 25920.0, 0.0, 32.72, 43.64, 1036976.8099999999, 25920.0, 0.0, 6.81, 0.0, 117110.0, 9.2340136E7, 281574.0, 172800.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.31, 0.0, 10263.0, 5.4668023E7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 84.0, 117110.0, 8640.0, 0.0, 16.0, 10263.0, 8640.0, 1000.0, 0.0, 0.0, 0.0, 0.0, 83.1, 21.14, 8640.0, 8640.0, 1.1806565E7, 101250.0, 8.209024E7, 0.0, 0.0, 1.18539071E8, 101250.0, 7.0656471E7, 0.0, 0.0, 65383.0, 657481.0, 0.0, 18098.44, 0.0, 51515.62, 6650451.0, 6.3647587E7, 0.0, 7497565.0, 1.12485174E8, 397260.0, 0.0, 0.0, 0.0, 7.991577E8, 44749.0, 75331.0, 1305765.0, 466560.0, 466560.0, 0.0, 13392.0, 0.0, 46883.2, 0.0, 73264.0, 9.6827076E7, 5.3476308E7, 7197944.0, 0.0, 18080.0, 5.3341521E7, 1.11585122E8, 0.0, 56624.0, 9.1094716E7, 4665600.0, 4665600.0, 0.0, 0.0, 0.0, 0.0, 0.0

Now I know that 1501 fields in the same index are no problem.
But in my case the problem is scientific notation in the incoming data.
Elasticsearch completely ignores a document with even one scientific-notation value, for example:

,0.0,0.0,0.0,1.1933174e+7,518400.0,0.0,1.1403393e+7,518400.0,0.0,19971.900000000005
All of these values are mapped as "type": "float"; is that correct?

I don't know how to avoid scientific notation.

Maybe someone knows how to check it?

    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "first (count)": {
        "type": "long"
      }
    }

PUT /pm-1/ats/3
{
"@timestamp" : "2016-09-07",
"first (count)": "2e+3"
}

{
   "error": {
      "root_cause": [
         {
            "type": "mapper_parsing_exception",
            "reason": "failed to parse [first (count)]"
         }
      ],
      "type": "mapper_parsing_exception",
      "reason": "failed to parse [first (count)]",
      "caused_by": {
         "type": "number_format_exception",
         "reason": "For input string: \"**2e+3**\""
      }
   },
   "status": 400
}

Does somebody know the solution?

My understanding is that I have to prepare the incoming values using bash...
I mean, recalculate each value, e.g. 2e+3 = 2000.
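As an illustration, a rough sketch of that kind of pre-processing in bash (the filename is just the example from above; the regex and the printf precision are assumptions you may need to tune):

    # spot lines that contain scientific-notation values
    grep -nE '[0-9](\.[0-9]+)?[eE][+-]?[0-9]+' 2016-09-01.csv

    # rewrite every e-notation field as a plain decimal before handing the CSV to fluentd
    awk -F',' 'BEGIN { OFS = "," }
      { for (i = 1; i <= NF; i++)
          if ($i ~ /[eE][+-]?[0-9]+/) $i = sprintf("%.2f", $i)
        print }' 2016-09-01.csv > 2016-09-01.plain.csv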

No quotes?

POST /test/doc
{ 
	"@timestamp" : "2016-09-07",
	"first (count)": 2e+3
}

Thank you! )) Dropping the quotes made my week. Without quotes, e-notation works!
FYI: normal notation also works with quotes:

{   
    "@timestamp" : "2016-09-07",
    "first (count)": "9999"
}