Data type conversion issue

Hi,
I am getting a long to float data type conversion error. I am using ELK version 6.3.0. In Logstash I have a SQL JDBC configuration and I am executing a query, but a few of my columns contain decimal values. While pushing data from Logstash to Elasticsearch I get a "data type cannot be changed" exception, i.e. a float to long issue. I have used mutate and passed "number" as the datatype, but it is not working.
I am getting both long to float and float to long exceptions.
Please provide a solution for this.

Please reply as soon as possible.

Read this and specifically the "Also be patient" part.

It's fine to answer on your own thread after 2 or 3 days (not including weekends) if you don't have an answer.

Hi,
I have used mutate but I am still facing the same issue. The first time I push data to Elasticsearch, a few records do not get pushed, but after the index is created the missing records also get pushed to Elasticsearch.
I had 188 records; the first time, 180 records were pushed and 8 records failed with a "cannot convert float to long" exception. But after the index was created, all 188 records got pushed to ES.
Please provide a solution for this.

I don't know. Maybe you are not defining a mapping, so the field types are guessed from the first document indexed, and that first document is not always the same.
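If you want the numeric fields to stay float no matter which document happens to arrive first, one option is to create the index with an explicit mapping before Logstash runs. A minimal sketch for 6.x, with placeholder index and field names, using the default document type "doc" that Logstash sends:

PUT my_index
{
  "mappings": {
    "doc": {
      "properties": {
        "my_numeric_field": { "type": "float" }
      }
    }
  }
}

The elasticsearch output in Logstash can also install an index template for you (see its manage_template / template options), which does the same thing for every index it creates.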

Both documents are the same; I am pushing data using the same query. But the first time, 8 of my records get missed. After restarting Logstash, no records get missed. I have checked in Oracle and my query gives me 188 records. I have used mutate as well, but the first time 8 records still get missed.
Please guide us on this issue.

Hi ajit,

which version of Elasticsearch do you use?

Do you use Kibana? If yes, check Management > Index Patterns and choose your index pattern. On the right you can choose which field types to display. You should have "conflict" as one of the choices, so pick that one to see the conflicting fields. This should be true even on the index where you see all 188 documents. If you don't have any conflicts, it should all be good; maybe it was just a temporary issue.

Based on my own experience, I would guess that Elasticsearch is less sensitive to indexing a long as a float than the other way around. It would relate to this issue.

I have set up a dead letter queue in Logstash for situations like this. Documents with issues go into the dead letter queue, where Logstash can read them again and try to fix the issues.
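In case it helps, this is roughly what that looks like (a sketch; the path, index name, and the convert line are placeholders you would adapt). The DLQ has to be enabled in logstash.yml, and a second pipeline reads the rejected events back:

# logstash.yml
dead_letter_queue.enable: true

# pipeline that re-processes events rejected by Elasticsearch (e.g. mapping errors)
input {
    dead_letter_queue {
        path => "/path/to/logstash/data/dead_letter_queue"
        pipeline_id => "main"
        commit_offsets => true
    }
}
filter {
    mutate {
        # fix whatever made the original indexing request fail
        convert => { "some_numeric_field" => "float" }
    }
}
output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "reprocessed"
    }
}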

-AB

ajit
... I have used mutate and passed "number" as the datatype, but it is not working.

Also, if you read Mutate filter plugin | Logstash Reference [6.3] | Elastic

Valid conversion targets are: integer, float, integer_eu, float_eu, string, and boolean.

No number ^^
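So for decimal columns the target should be float, something like this (a sketch with a placeholder field name):

filter {
    mutate {
        convert => { "your_decimal_field" => "float" }
    }
}

Note that convert only changes the value inside the Logstash event; it does not change a mapping that already exists in the index.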

Hi,
I have used float, double, and long as well. But the first time I get an issue related to data type conversion, i.e. cannot convert float to long. I am using Elasticsearch, Logstash, and Kibana, all version 6.3.0.
Please provide a solution or guide us on this issue. I have 188 records in the database, but the first time only 180 records get pushed to ES. After restarting Logstash with the same SQL query I am able to push all 188 records. I have used all the datatypes. Below are my logs.

[2018-07-06T12:37:24,629][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"bank_crisil_rated", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x10d76091>], :response=>{"index"=>{"_index"=>"bank_crisil_rated", "_type"=>"doc", "_id"=>"urdqbmQBoIEiWn9atfMr", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [total_income] cannot be changed from type [float] to [long]"}}}}

Below is my configuration.

input {
    jdbc {
        jdbc_validate_connection => true
        jdbc_connection_string => "jdbc:oracle:thin:@DEV:8080/CORPS"
        jdbc_user => "ABC"
        jdbc_password => "pass#1234"
        jdbc_driver_library => "/opt/READONLYREST/OSS/logstash-6.3.0/ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"

        statement => "Select Distinct Bm.Crmcompanyid,Cd.Company_Name,
  Bm.Datatype,Bm.Template,
  Bm.Isconsolidated,
  Bm.Periodendson,
  Bm.Crr_Date,
  bm.period_code,
Column1  TOTAL_INCOME  ,
Column10  TOTAL_OPERATING_EXPENDITURE ,
Column9 TOTAL_OTHER_INCOME  
From Banknbfc_Periodmaster_Synm Bm,
  Banknbfc_Perioddata_Synm Bd,
  company_details_mv_synm cd
Where Bm.Period_Code = Bd.Period_Code
And Cd.Company_Code = Bm.Crmcompanyid
and bm.template = 'Bank'
and cd.company_status = 'Active'
"       
 }
}


filter {
    mutate {
        convert => {
            "TOTAL_INCOME" => "float"
            "Capital_Adequacy_Ratio" => "float"
        }
    }
}

output {
    elasticsearch {
        hosts => "172.11.111.111:9200"
        index => "bank_rated"
        user => "c-ajitb"
        password => "pass#1234"
        ssl => true
        ssl_certificate_verification => false
        truststore => "/opt/READONLYREST/elasticsearch-6.3.0/config/keystore.jks"
        truststore_password => "readonlyrest"
    }
}

I am using Kibana as well, but the Management section only helps me after the index has been created successfully. The issue is that a few records get missed the first time.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.