Logstash jdbc input taking a long time to load data from an Oracle database

I am using the ELK stack and have to load Oracle data into Elasticsearch. There are around 5 million rows to load initially.
Problem: Logstash is very slow to load the data; it loads around 50k rows in 30 minutes, which is not acceptable. Prior to ELK, I had a Solr setup where the same full data set used to load in about 1 hour. I am using the same query in both places, and the machines are the same, so CPU and memory are identical.

I tried to tune Logstash with parameters like pipeline.workers: 4, the default pipeline settings, and jdbc_paging_enabled => true. When I enable jdbc_fetch_size => 50000 it is slower than the default. I also tried jdbc_page_size => 200000 (the default is 100000, which is good enough).
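To show roughly where each of those knobs lives (just a sketch; the connection details, query, and values here are placeholders, not my real config):

input {
  jdbc {
    jdbc_connection_string => "<URL>"        # placeholder
    jdbc_user => "<user>"                    # placeholder
    jdbc_password => "<password>"            # placeholder
    statement => "SELECT * FROM <table>"     # placeholder query
    jdbc_paging_enabled => true              # splits the query into pages
    jdbc_page_size => 200000                 # default is 100000
    # jdbc_fetch_size => 50000               # rows fetched per JDBC round trip; 50k was slower for me
  }
}

# logstash.yml
pipeline.workers: 4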

Please suggest what else I can try. Is there any other plugin?

ELK version: 7.3.1

Hi @amitgupta, welcome to the community.

Did you set / increase the JVM heap size in the jvm.options config file?

Curious what the CPU and RAM of the machine are?

Perhaps you could post your config files so others can see. Please format them with the </> button above.

Thanks for your time, appreciate it!
Here are the details. I changed them many times while tuning, but the values below will give some idea. As a side note, I can load the same set of data through Solr in approximately 55-60 minutes.

jvm.options:
-Xms1g
-Xmx2g --> default is 1gb

-- Even when I changed it to 4 GB there was no improvement. I checked, and the CPU and memory graphs are stable at moderate usage, with no spikes.

input {
    jdbc {
        jdbc_driver_library => "<somePath>\lib\ojdbc8.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "<URL>"
        jdbc_user => ""
        jdbc_password => ""
        clean_run => false
        statement_filepath => "<somePath>\conf\security.sql"
        tracking_column_type => "timestamp"
        tracking_column => "update_date"
        use_column_value => true
        record_last_run => true
        last_run_metadata_path => "<somePath>\conf\logstash_security_lastUpdatedData"
        jdbc_default_timezone => "UTC"
        #jdbc_paging_enabled => true
        #jdbc_page_size => 200000
        jdbc_fetch_size => 5000
    }
}
output {
    elasticsearch {
        index => "<Name>"
        document_id => "%{<PrimaryKeyFromQuery>}"
        hosts => "http://localhost:9200"
        doc_as_upsert => true
    }
    #stdout { codec => rubydebug }
}

logstash.yml:
pipeline.workers: 4
pipeline.batch.size: 5000

Yeah, Logstash should be far faster than what you are seeing.

Couple other questions.

Are you running Elasticsearch on the same box?

How is that configured? What is the RAM given to Elasticsearch?

What is the CPU / RAM of the box?

I would suggest replacing the elasticsearch output with

stdout { codec => dots }

to try to find out whether the bottleneck is Logstash or Elasticsearch.
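For example, something like this (a sketch only; the elasticsearch settings shown commented out are the ones from the config you posted above):

output {
  # elasticsearch {
  #   index => "<Name>"
  #   document_id => "%{<PrimaryKeyFromQuery>}"
  #   hosts => "http://localhost:9200"
  #   doc_as_upsert => true
  # }
  stdout { codec => dots }   # prints one dot per event, taking Elasticsearch out of the picture
}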

I have the ELK stack in one folder on my local office machine.
CPU: 4 cores
RAM: approx 24 GB

When I run it, CPU goes to an average of 34% and memory to 40%.

I downloaded Elasticsearch and Logstash as zips and unzipped them, with no special changes.
I only modified logstash.yml (pipeline.workers and pipeline.batch.size) to tune it, and the jvm.options file to increase memory.

Do you think there is any other plugin to load Oracle data?
Maybe Logstash is not the right fit, so I could try some other way; it has already cost me 3-4 days :slight_smile:

We want to achieve the same performance as we got from Solr (my company has made a strategic change to the Elastic Stack).

So before you give up: the default JVM heap for Elasticsearch is small; bump it up to 4 GB and try again.

I would also take @Badger's advice: if you replace the elasticsearch output with dots and the dots are slow, then Logstash is the problem; if the dots are fast, then the elasticsearch output is slow.

We have other customers who successfully use the JDBC input plugin with Logstash, but I realize your time is valuable.

I tried setting the Elasticsearch memory to 4 GB, but that gave me an OOM on startup, so I kept it at 2 GB, which worked. But with the same setup there was no improvement; I tried 50k rows and it took 25-28 minutes.

I could not understand what you said. Can you please explain?

You are sending data from logstash to elasticsearch and it is slow. That could be because logstash is slow, or because elasticsearch is slow. If you take elasticsearch out of the picture and replace it with a stdout output then it will show you whether the problem is in logstash or elasticsearch.

Thanks, I tried what you said, but sent the data to a file in the same directory as Logstash.

For 50k records it took 14 minutes, so at that rate 100k will take about 30 minutes. The log below shows it took around 5 minutes to execute the query; then it started fetching and writing to the log.
Current settings, with 2 GB memory for Logstash:

jdbc_fetch_size => 10000
pipeline.batch.size: 1000
pipeline.workers: 4
pipeline.batch.delay: 50
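For completeness, this is roughly how the test pipeline is wired up (a sketch; the connection and tracking settings are unchanged from the config I posted earlier, and the output path matches the log below):

input {
  jdbc {
    jdbc_driver_library => "<somePath>\lib\ojdbc8.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "<URL>"
    jdbc_user => ""
    jdbc_password => ""
    statement_filepath => "<somePath>\conf\security.sql"
    jdbc_fetch_size => 10000            # fetch size used for this run
  }
}
output {
  file { path => "/elk/out.log" }       # file output instead of elasticsearch for this test
}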

Started : 16:19 
...
...
[2020-08-27T 16:24:39,384][INFO ][logstash.outputs.file    ] Opening file {:path=>"/elk/out.log"}
[2020-08-27T 16:24:59,536][INFO ][logstash.outputs.file    ] Closing file /elk/out.log

[2020-08-27T 16:27:10,549][INFO ][logstash.outputs.file    ] Opening file {:path=>"/elk/out.log"}
[2020-08-27T 16:27:29,769][INFO ][logstash.outputs.file    ] Closing file /elk/out.log

[2020-08-27T 16:29:40,892][INFO ][logstash.outputs.file    ] Opening file {:path=>"/elk/out.log"}
[2020-08-27T 16:30:00,158][INFO ][logstash.outputs.file    ] Closing file /elk/out.log

[2020-08-27T 16:32:11,486][INFO ][logstash.outputs.file    ] Opening file {:path=>"/elk/out.log"}
[2020-08-27T 16:32:30,580][INFO ][logstash.outputs.file    ] Closing file /elk/out.log

[2020-08-27T 16:33:33,937][INFO ][logstash.outputs.file    ] Opening file {:path=>"/elk/out.log"}
[2020-08-27T 16:33:35,496][INFO ][logstash.runner          ] Logstash shut down.

--Total time : 14 mins

I have 10+ Oracle streams going from one Logstash server to an ELK cluster and have no problem with throughput.
It must be because you have everything on your local PC, which might have a slow 5600 RPM disk.

Otherwise your input looks very normal. Maybe comment out jdbc_fetch_size and try; let the JDBC driver decide what to do.

Yes, I tried commenting it out. Now the same load took 18 minutes, so that is worse. It looks like it made one fetch and then Logstash started working on the data, so setting the fetch size does reduce the time at least somewhat. I am not sure what a standard speed for Logstash is. I have 50 fields per row and 50,000 rows from the query, and this is just a sample; I actually have to load 2.2 million such rows, so it will take a very long time.

Started at 17:34
    [2020-08-27T 17:35:22,214][INFO ][logstash.outputs.file    ] Opening file {:path=>"../elk/out.log"}
    [2020-08-27T 17:52:37,173][INFO ][logstash.runner          ] Logstash shut down.

I downloaded 110 million rows with 70 fields in three to four days, about 4.5 million an hour.
My batch size was 3000, but it was also using more worker threads, as the system has a lot of CPU and a huge memory pool.

Hi @amitgupta

I think perhaps there are resource constraints even if it does not appear that way; the fact that you could not give Elasticsearch even 4 GB of RAM indicates that to me.

If I were doing this test I would put Elasticsearch on a 16 GB host and give Elasticsearch 8 GB of heap and 4 CPUs or so.

Elasticsearch wants to use as much RAM as it can; the normal formula is that the Elasticsearch JVM heap should be set to about half the host RAM.
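For example, on a 16 GB host the heap lines in Elasticsearch's jvm.options would look something like this (illustrative values following the half-the-RAM rule, not a measured recommendation):

-Xms8g
-Xmx8g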

I would put Logstash on a different 8 GB host and give it 4-6 GB of RAM and a couple of CPUs.

50 fields is not huge.

Both of these tools need some non-competing CPU and RAM to function.

As @elasticforme says, and I agree, Logstash should be able to perform at or above your needs/expectations given the right resources. Perhaps it will not meet your expectations if it is constrained, in which case another tool might need to be considered.

Is that from a single query execution? If so, it is odd. At 16:24:39 logstash starts outputting events. It must have stopped doing so at 16:24:49, causing the stale file cleanup code to trigger at 16:24:59. Between 16:24:49 and 16:27:10 there were no events output!

The reason for this time gap: I know that it was fetching the next set of data.

With jdbc_fetch_size => 10000 you will see 4 time gaps of around 2 minutes each.

Then I changed to jdbc_fetch_size => 20000 and I see the output twice, with a gap of approx 5 minutes.
I changed again to jdbc_fetch_size => 30000 and I see the output once, with a gap of approx 8 minutes.

So the total time, and therefore performance, is the same even if I change the fetch size.
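Rough arithmetic from the gaps above (approximate, reading the times off the logs):

jdbc_fetch_size => 10000 : 50,000 / 10,000 = 5 fetches, gaps of ~2 min -> ~8-10 min waiting on Oracle
jdbc_fetch_size => 20000 : ~3 fetches, gaps of ~5 min                  -> ~10 min waiting on Oracle
jdbc_fetch_size => 30000 : ~2 fetches, a gap of ~8 min                 -> ~8 min waiting on Oracle

Whichever way the result set is split, the time waiting on Oracle comes to roughly the same 8-10 minutes, which is most of the ~14-minute total.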

Thanks for your help. I will still try increasing the Logstash memory size. Since we are not writing to Elasticsearch for now, its memory is not in the picture.

I tried running only Logstash, with no Kibana or ES, and set the memory to 6 GB.
I ran without setting the fetch size, to keep the default settings.
I took 100k rows and it took more than 1 hour. I see no speed improvement.

Not sure what else I should try :frowning:

Hi @amitgupta

Couple more thoughts....

Have you looked at the Oracle service / box during the fetches, just to make sure everything is OK there? I am sure you have, but just in case the driver or something is creating undue load, or an index is missing on the Oracle side, etc.

Any chance you can try Logstash 7.9.0? As you said, you just downloaded and unzipped, so that should not take long. If you do, there are a few different parameters to pass. Perhaps try that, writing the output to the file, and see if you get any better performance.

I am using ojdbc8.jar, which should be fine for Oracle 11g, which we are using. The indexes on Oracle work well for the other applications that connect to the database, so I can't really doubt or change them, as they are shared with many others.
I can't go to 7.9.0, as my company has only approved version 7.3.x, which I am using :frowning:

I also tried fetching only a small set of 5-6 columns in the query, to make it simpler. But