We implemented a solution that pushes Microsoft SQL data, in the form of simple rows, from SQL Server to Logstash, which then indexes the data into Elasticsearch. The data (SQL rows and columns) is fetched successfully using the jdbc input plugin, and Logstash indexes it into Elasticsearch. But every time we pump the same data again, it adds duplicate records with identical content but a different "_id". So each run from our client keeps duplicating the records, and it is becoming messier. We tried setting up the uuid filter (https://www.elastic.co/guide/en/logstash/current/plugins-filters-uuid.html) to fix this duplication issue, but that didn't solve it either: the data keeps getting duplicated, with a new "_id" and a new "@uuid" value every time.
We tried many combinations of uuid filter setups; none of them worked.
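For reference, this is roughly the shape of the uuid filter we tried (the target field name here matches the docs; the rest of our pipeline is omitted):

```
filter {
  uuid {
    # Writes a generated UUID into the @uuid field of each event.
    # Because the value is random per event, re-running the pipeline
    # produces a new @uuid every time for the same SQL row.
    target => "@uuid"
  }
}
```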
One blogger said that if we can get a column in SQL named "_id", then Logstash won't create this unique identifier with random values (we had to try this).
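If we understand that suggestion correctly, it would amount to something like the following output config, where "id" is a hypothetical primary-key column returned by our SQL query (host and index names are placeholders, not our actual setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder host
    index => "sqldata"            # placeholder index name
    # Use the SQL primary-key column as the Elasticsearch _id,
    # so re-running the pipeline overwrites existing documents
    # instead of inserting duplicates with random _id values.
    document_id => "%{id}"
  }
}
```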
Any input is appreciated here.