I am reading data from Oracle and loading it into Elasticsearch through Logstash.
However, when the data is read from Oracle, the value of any field of Oracle's DATE type is automatically converted to UTC, so it no longer matches the original data.
I know that Elasticsearch stores date values in UTC.
The original value is 2020-12-29 00:48:00, but it automatically changes to 2020-12-28T15:28:00.000Z.
Because of this, Logstash never sees the original value, so I cannot manipulate it with a filter plugin.
Is there a way to keep the original value? Alternatively, can the UTC value be converted back to the original time zone, e.g. formatted as yyyy-MM-dd'T'HH:mm:ss.SSS+0900?
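To illustrate what I mean, the following ruby filter is a rough sketch of the kind of thing I am after (the reg_date field name and the +09:00 offset are my own assumptions): it would copy the UTC timestamp into an extra string field that keeps the local representation.

filter {
  ruby {
    # Sketch: take the UTC timestamp the jdbc input produced for the
    # (assumed) DATE column "reg_date" and keep a local-time string
    # next to it, formatted as yyyy-MM-dd'T'HH:mm:ss.SSS+0900.
    code => '
      ts = event.get("reg_date")
      if ts.respond_to?(:time)               # LogStash::Timestamp
        local = ts.time.localtime("+09:00")
        event.set("reg_date_local", local.strftime("%Y-%m-%dT%H:%M:%S.%L+0900"))
      end
    '
  }
}

But I would rather understand the proper way to do this than rely on such a workaround.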
My Logstash config is:
input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_driver_library => "<driver>"
    jdbc_driver_class => "Java::oracle.jdbc.OracleDriver"
    jdbc_connection_string => "<connection>"
    jdbc_user => "<user>"
    jdbc_password => "<password>"
    jdbc_paging_enabled => true
    tracking_column => "unix_ts_in_secs"
    tracking_column_type => "numeric"
    use_column_value => true
    statement => "SELECT * FROM table"
    schedule => "*/1 * * * *"
    charset => "UTF-8"
    enable_metric => false
    last_run_metadata_path => "path"
  }
}

filter {
  mutate {
    copy => { "id" => "[@metadata][_id]" }
    remove_field => ["id", "@version", "unix_ts_in_secs", "ip_num"]
    convert => {
      "deleted" => "boolean"
    }
  }
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["eshost"]
    index => "index"
    document_id => "%{[@metadata][_id]}"
  }
}
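I have also come across the jdbc_default_timezone option of the jdbc input. Below is a sketch of what I was planning to try (it assumes the Oracle DATE values are stored as Asia/Seoul local time), but I am not sure whether it keeps the original value or only changes how the value is converted to UTC:

input {
  jdbc {
    # ... same settings as above ...
    # Assumption: the DATE columns hold local Korean time; this option
    # tells the jdbc input which time zone to interpret them in before
    # the values are converted to UTC.
    jdbc_default_timezone => "Asia/Seoul"
  }
}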