Dirty reads from Microsoft SQL Server

Hi guys,

This was my initial concern:
"I am new to Elastic and Logstash, and I tried importing some data from SQL Server in order to learn some stuff.
I've got a table with 13k rows. After importing it into Elasticsearch using Logstash (new index, etc.) I got 287k rows with all kinds of weird data, like the data had been mixed or tangled somehow."

After a while I noticed that the problem was in fact dirty reads. What else do I have to do to avoid dirty reads from the database?

Context:
local machine: Windows 10
Logstash version: 6.3.1
Elasticsearch: 6.3.0
Kibana: 6.3.0
JDBC driver for SQL Server: 4.2
Java version: 1.8.0_171

Logstash config file for the load:

input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://dev-ce:1433;databaseName=db_ads_core;integratedSecurity=false;user=etc;password=etc;"
    jdbc_user => "etc"

    statement => "SELECT
                   [id]
                  ,[campaign_offer_date_groups_id]
                  ,[campaign_offer_date_id]
                  ,[campaign_id]
                  ,[date_key]
                  ,[group_id]
                  ,[kpi_id]
                  ,[group_kpi_value]
                  ,[kpi_source_id]
                  FROM [dbo].[tbl_ads_groups_kpis_values_history]"

    use_column_value => true
    tracking_column_type => "numeric"
    tracking_column => "id"
  }
}

output {
  elasticsearch {
    hosts => ["etc:9200"]
    manage_template => false
    action => "index"
    index => "tbl_ads_groups_kpis_values_history"
  }
}
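
One thing I noticed while tidying the config: the use_column_value / tracking_column settings do nothing as written, because the statement never references :sql_last_value, so every run re-reads the whole table. And without a document_id in the output, each re-run appends the same rows again instead of updating them. A minimal sketch of the incremental form I mean (the WHERE/ORDER BY clause and the document_id line are my additions, not something from my original config):

    statement => "SELECT [id], /* ... other columns as above ... */ [kpi_source_id]
                  FROM [dbo].[tbl_ads_groups_kpis_values_history]
                  WHERE [id] > :sql_last_value
                  ORDER BY [id]"

  elasticsearch {
    # same settings as above, plus:
    document_id => "%{id}"   # re-imports update the existing document instead of duplicating it
  }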

Update: I solved my initial concern.

Logstash does dirty reads from MS SQL Server... no idea why.
Technically it isn't supposed to, since I did not put a NOLOCK hint in the SQL query in the config file, and without that hint the session should run at SQL Server's default READ COMMITTED level.
So I was seeing ghosts, or unborn children :grinning:
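
Before blaming the driver, you can check which isolation level the Logstash connection actually runs at, since SQL Server exposes it per session. A quick diagnostic (this only queries the standard sys.dm_exec_sessions DMV; you could even drop it into the jdbc statement temporarily):

SELECT transaction_isolation_level  -- 1 = READ UNCOMMITTED (dirty reads possible)
FROM sys.dm_exec_sessions           -- 2 = READ COMMITTED (the SQL Server default)
WHERE session_id = @@SPID;          -- 5 = SNAPSHOT

If it comes back 1 for that session, something in the driver or connection setup lowered the isolation level; if it comes back 2, dirty reads should not be possible on that connection.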

So the next natural question: how do I prevent this?
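
In case it helps anyone who lands here: dirty reads can only happen at READ UNCOMMITTED (which is exactly what NOLOCK gives you), so the most direct fix I know of is to pin the isolation level inside the query batch itself. A sketch, untested with the jdbc input (and SNAPSHOT additionally requires the database option shown after it):

input {
  jdbc {
    # ... driver and connection settings as above ...
    statement => "SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
                  SELECT [id]
                        ,[campaign_offer_date_groups_id]
                        -- ... remaining columns as above ...
                  FROM [dbo].[tbl_ads_groups_kpis_values_history]"
  }
}

SNAPSHOT gives the whole SELECT one consistent, point-in-time view of the table, but it has to be enabled once per database first (run as a database owner):

ALTER DATABASE [db_ads_core] SET ALLOW_SNAPSHOT_ISOLATION ON;

If you would rather not change database options, SET TRANSACTION ISOLATION LEVEL READ COMMITTED; in the same spot, or a WITH (READCOMMITTEDLOCK) table hint on the FROM clause, is enough to rule out dirty reads specifically, because READ COMMITTED never returns uncommitted rows.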
