Data type is showing as text in the data view rather than decimal when uploaded via a Logstash JDBC config file

Hi, I created a SQL connection using Logstash and uploaded the data, but my data is showing as text, as shown in the figure.

Hi @akshay_bhardwaj Welcome to the community.

That is because you have not created a mapping (think schema), so Elasticsearch created the default one for you and chose the default text and keyword field data types.
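For illustration, when dynamic mapping sees a string, it creates a text field with a keyword sub-field. You can inspect this yourself in Kibana -> Dev Tools; the snippet below is a typical excerpt of the response, not your exact one (index name matches the example further down):

GET performance/_mapping

  "timedifference": {
    "type": "text",
    "fields": {
      "keyword": {
        "type": "keyword",
        "ignore_above": 256
      }
    }
  }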

You should probably take a look at the mapping docs here.

Numeric field types are covered here.

And templates here...
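If you expect to recreate the index (or roll over to new ones), an index template applies the mapping automatically whenever a matching index is created. A minimal sketch, assuming a recent Elasticsearch version (7.8+); the template name and index pattern here are just examples:

PUT _index_template/performance_template
{
  "index_patterns": ["performance*"],
  "template": {
    "mappings": {
      "properties": {
        "timedifference": {
          "type": "float"
        }
      }
    }
  }
}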

But you could clean up your index and try this:

PUT performance 
{
  "mappings": {
    "properties": {
      "timedifference": {
        "type": "float"
      }
    }
  }
}
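Note that PUT performance fails with resource_already_exists_exception if the index is already there, so if Logstash has already created it, delete it first (this removes the data in it) and then verify the new mapping:

DELETE performance

GET performance/_mapping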

Can you please let me know how I change it here in my config file?

input {
  jdbc {
    jdbc_connection_string => "*****;"
    # The user we wish to execute our statement as
    jdbc_user => "*****"
    jdbc_password => "********"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "********"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    schedule => "* * * * *"
    # our query
    statement => "SELECT TOP (1000) [client_id]
      ,[client_name]
      ,[tolerance_min]
      ,[tolerance_max]
      ,[TimeFrom]
      ,[TimeTo]
      ,[no_of_persons]
      ,[shift_wise_contracted_hrs]
      ,[employee_id]
      ,[employee_code]
      ,[employee_name]
      ,[designation]
      ,[INN]
      ,[OUT]
      ,[TimeDifference]
      ,[image_name]
      ,concat('perp-',[id]) uId
      ,'performed' itype
      ,[qrCode_id]
      ,[geo_coordinates]
    FROM [GMS_GLOBAL_DEV].[dev].[ContPerfss]"
  }
}
filter {
  # Plugin names are lowercase. Note: the elasticsearch filter is for
  # enriching events from an existing index and is likely unnecessary here.
  elasticsearch {
    user => "logstash_user"
    password => "test1234"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "check"
    # The jdbc input lowercases column names by default, so the uId
    # alias from the query is referenced here as uid.
    document_id => "%{uid}"
  }
  stdout { codec => rubydebug }
}

It is not part of that Logstash configuration; you need to create the mapping before you start loading data from your database.

You can run the command I showed you in Kibana -> Dev Tools.
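If you prefer a terminal over Dev Tools, the same request can be sent with curl (host taken from your output config; add credentials with -u if security is enabled):

curl -X PUT "localhost:9200/performance" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "timedifference": {
        "type": "float"
      }
    }
  }
}'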

The proper overall steps are (you went directly to step 2):

  1. Create a Mapping / Index Template (this is the schema for your data in Elasticsearch; I linked the docs and gave you a sample)
  2. Configure and Run Logstash
  3. Investigate the Data in Discover

You could also clean up the data and use the convert option of Logstash's mutate filter to set timedifference to float, but you would still need to delete the index and try again.
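A minimal sketch of that alternative (the field name assumes the JDBC input's default lowercasing of column names):

filter {
  mutate {
    # Cast the string field to a float before indexing
    convert => { "timedifference" => "float" }
  }
}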

But in my opinion the best approach is to create a mapping.
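If you go that route, you can confirm the field came out as float before re-running Logstash:

GET performance/_mapping/field/timedifference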

Thanks Stephen for the solution; I did it following your suggestion of creating a mapping.

