JDBC duplicate messages - Logstash still creates its own document_id even after it is replaced with another value

Dear support team,

I'm trying to use the jdbc input plugin to upload SQL Server data to Elasticsearch.
I'm running a Logstash cluster with 2 nodes, and this is the config on both nodes:
input {
  jdbc {
    ......
    statement => "SELECT * FROM logs WHERE timestart > :sql_last_value order by timestart asc"
    tracking_column => "timestart"
    use_column_value => true
    schedule => "*/1 * * * *"
    type => "sql-server"
  }
}

filter {}

output {
  if [type] == "sql-server" {
    elasticsearch {
      hosts => "localhost:9200"
      document_id => "%{executionid}"
    }
  }
}

To pick up only new rows I used :sql_last_value, but there is still a problem with duplicate messages, so I tried to use document_id to track the unique column in my table, 'executionid'. The issue got even weirder: even though the _id is now taken from 'executionid', Elasticsearch still receives two extra copies of the same message, with different _id values and the same timestamp.
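
For reference, here is a minimal debug sketch I'm considering (not from the docs): a temporary stdout output to confirm that the executionid field actually exists on the events before the elasticsearch output uses it as the _id. The field name assumes the jdbc input's default lowercase_column_names behaviour.

output {
  if [type] == "sql-server" {
    # Temporary debug output: print each event so I can check that the
    # executionid field is present and unique per row before it is used
    # as the document _id in the elasticsearch output.
    stdout { codec => rubydebug }
  }
}

If %{executionid} did not resolve, Logstash would use the literal string as the _id for every document, so seeing distinct _id values would suggest the field resolves but has different values for the duplicated rows.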

Is there something wrong with my configs? Please help me.
