Trying to get the data from oracle database through logstash but data is not coming to elasticsearch

I am trying to get data from an Oracle database into Elasticsearch through Logstash, but no data is arriving in Elasticsearch. I'm not sure what I missed, and I don't see any errors in the Logstash log file. Below is my Logstash conf file.

input {
  jdbc {
    jdbc_validate_connection => "true"
    jdbc_connection_string => "jdbc:oracle:thin:@//server:1521/db"
    jdbc_user => "user"
    jdbc_password => "pass"
    jdbc_driver_library => "/etc/logstash/files/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_paging_enabled => "true"
    schedule => "* * * * *"
    statement_filepath => "/etc/logstash/files/keycount.sql"
    use_column_value => "true"
    tracking_column => "timestamp"
    last_run_metadata_path => "/etc/logstash/files/.logstash_jdbc_last_run"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "keyinventory-%{+YYYY}"
  }
}

Could someone please help me figure out what I am missing?

Thanks and regards,
Uday.

Hi

Try commenting out the elasticsearch {} output for now, and use stdout {} to see what you get out of your input and filters.
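As a sketch, a stripped-down debugging output section could look like this:

```
output {
  # Print every event to the console so you can see
  # exactly what the input and filters produce
  stdout { codec => rubydebug }
}
```

If nothing appears on stdout, the problem is in the input or filters, not in the Elasticsearch output.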

I'd also recommend setting a schedule that will execute your query at known times, for better control.

Post the result here if you need further help.

Hope this helps

Hi Jordi,

Thank you for responding..

I have changed the output and commented out elasticsearch {}.

output {
  stdout { codec => rubydebug }
  file { path => "/etc/logstash/files/keycount.log" }
}

I still didn't get any output.

Thanks & Regards,
Uday.

Hi

No output, and no errors in your Logstash logs, might mean that your input {} is either not being executed or returning no data.

Check your setup with a simple query that you can plug directly into your jdbc {} plugin (statement => "select ...."). Make sure you use a query that returns a known result.
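As a sketch (connection settings are placeholders copied from your earlier post), a self-contained test input could be:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@//server:1521/db"
    jdbc_user => "user"
    jdbc_password => "pass"
    jdbc_driver_library => "/etc/logstash/files/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    # Trivial query with a known result. Note there is no trailing
    # semicolon: Oracle's JDBC driver rejects statements ending in one.
    statement => "SELECT 1 AS test FROM dual"
  }
}
```

If this produces an event on stdout, the connection and driver are fine and you can work back up to your real query.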

Besides, you might want to add sql_log_level => "debug" to your jdbc {}.

Post your log and output, if any.

Hi Jordi,

I have changed the conf file as below.
input {
  jdbc {
    jdbc_validate_connection => "true"
    jdbc_driver_library => "/etc/logstash/files/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@server:1521/db"
    jdbc_user => "user"
    jdbc_password => "pass"
    jdbc_paging_enabled => "true"
    schedule => "*/5 * * * *"
    statement => "SELECT SYSTIMESTAMP AS timestamp,utl_inaddr.get_host_name AS Server FROM dual WHERE timestamp > :sql_last_start;"
    use_column_value => "true"
    tracking_column => "timestamp"
    last_run_metadata_path => "/etc/logstash/files/last_run.txt"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "keyinventory-%{+YYYY}"
  }
  stdout { codec => rubydebug }
}

Here is the Logstash log message I got:

[2020-01-24T16:38:46,096][INFO ][logstash.outputs.elasticsearch][keyinventory] Using default mapping template
[2020-01-24T16:38:46,122][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][keyinventory] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-24T16:38:46,127][INFO ][logstash.javapipeline ][keyinventory] Starting pipeline {:pipeline_id=>"keyinventory", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, :thread=>"#<Thread:0x2a7ef768 run>"}
[2020-01-24T16:38:46,153][INFO ][logstash.outputs.elasticsearch][keyinventory] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-01-24T16:38:46,253][INFO ][logstash.javapipeline ][keyinventory] Pipeline started {"pipeline.id"=>"keyinventory"}
[2020-01-24T16:38:46,317][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:keyinventory], :non_running_pipelines=>[]}
[2020-01-24T16:38:46,639][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Could you please help me find what I missed?

Thanks & Regards,
Uday.

Hi

You have a pipeline called keyinventory which is up and running, but it doesn't seem to see your config file. Check your pipeline configuration and make sure your config file is in the right directory.

Another thing you could try is to remove the WHERE clause from your select. Your test select should be as simple as possible, and should be self-contained, without any connection to logstash and its parameters. In your test select, as it is now, if there are no entries newer than :sql_last_start (what is its value?) you will get no output.

At this stage we are not trying to get the right data, we are trying to get your logstash config to work. Use a simple query that you have tested outside of logstash and that will give you a known result.

Hope this helps

Hi Jordi,
Thank you for the reply.

keyinventory is the pipeline name, and the correct path to keyinventory.conf is given in pipelines.yml.

I removed the :sql_last_start option from the query and restarted Logstash; below is the Logstash log message.

[2020-01-27T07:16:18,884][INFO ][logstash.outputs.elasticsearch][keyinventory] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2020-01-27T07:16:18,941][INFO ][logstash.outputs.elasticsearch][keyinventory] Using default mapping template
[2020-01-27T07:16:18,980][INFO ][logstash.outputs.elasticsearch][keyinventory] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-01-27T07:16:19,056][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][keyinventory] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-01-27T07:16:19,059][INFO ][logstash.javapipeline ][keyinventory] Starting pipeline {:pipeline_id=>"keyinventory", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, :thread=>"#<Thread:0x47b4f5b9 run>"}
[2020-01-27T07:16:19,183][INFO ][logstash.javapipeline ][keyinventory] Pipeline started {"pipeline.id"=>"keyinventory"}
[2020-01-27T07:16:19,233][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:keyinventory], :non_running_pipelines=>[]}
[2020-01-27T07:16:19,474][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Can you please check and give me a suggestion?

Regards,
Uday.

Hi

Does your query return any results, if run outside of logstash?

Can you share your pipelines.yml and keyinventory.conf contents? Please format them properly for easier reading.

Check your schedule syntax as well. Off the top of my head it looks correct if you want to run the query at 5-minute intervals, but don't trust it. Triple-check everything, even if you are sure it is correct.
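For reference, the jdbc input's schedule option uses cron-style syntax (via the rufus-scheduler library). A few hedged examples for comparison:

```
schedule => "*/5 * * * *"   # every 5 minutes
schedule => "* * * * *"     # every minute
schedule => "0 2 * * *"     # once a day, at 02:00
```

If the schedule never fires, the input never runs and you get no output and no errors, which matches what you are seeing.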

Hope this helps.

Hi Jordi,

Thank you for the reply..

Please find my pipelines.yml below:

- pipeline.id: keyinventory
  path.config: "/etc/logstash/conf.d/keyinventory.conf"
  pipeline.workers: 3

and my keyinventory.conf is

input {
  jdbc {
    jdbc_validate_connection => "true"
    jdbc_driver_library => "\etc\logstash\files\ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@server:1521/db"
    jdbc_user => "user"
    jdbc_password => "pass"
    jdbc_paging_enabled => "true"
    schedule => "*/5 * * * *"
    statement => "SELECT SYSTIMESTAMP AS timestamp,utl_inaddr.get_host_name AS Server FROM dual;"
    use_column_value => "true"
    tracking_column => "timestamp"
    last_run_metadata_path => "\etc\logstash\files\last_run.txt"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "keyinventory-%{+YYYY}"
  }
  stdout { codec => rubydebug }
}

I ran the query on the database and got a result.

Please check and let me know what I am missing.

regards,
Uday.

Hi

I fail to see what is wrong with your setup; it all seems correct to me. Nevertheless, I see a couple of suspects.

The default path to the pipeline configs, path.config, is /usr/share/logstash/pipeline/, a directory containing all the pipeline .conf files. You have it pointing to a single file in a different directory. It might be worth moving your keyinventory.conf to the default location and setting path.config to the directory rather than the full path to the file. That works for me in my installation.
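As a sketch, a pipelines.yml entry pointing at the default directory (the exact paths depend on your install, so treat these as assumptions) could look like:

```
- pipeline.id: keyinventory
  # A directory or glob: Logstash loads every matching .conf file
  path.config: "/usr/share/logstash/pipeline/*.conf"
  pipeline.workers: 3
```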

Another suspect is the jdbc configuration parameters. I've never used Oracle with Logstash, so I can't say for sure, but you could check the path to the driver library and the driver class. Does it have to start with Java::? The docs say it is needed in Logstash 6, but you are on 7. Mind you, if the error were in these parameters you'd probably be getting connection errors or driver-not-found errors.
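If you want to test without the prefix, the relevant jdbc settings would be something like the following (I haven't verified this against Oracle myself, so treat it as a sketch):

```
# Forward slashes, since your earlier posts suggest a Linux install
jdbc_driver_library => "/etc/logstash/files/ojdbc7.jar"
# Plain class name, without the Java:: prefix
jdbc_driver_class => "oracle.jdbc.driver.OracleDriver"
```

If the class name or library path were wrong you would normally see an explicit error at pipeline startup, which is why I consider this a secondary suspect.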

Check your logs carefully. Sometimes there are errors at the start of the log while the end of the log still looks like all went well, resulting in a pipeline that is up but empty.

Hope someone else can chime in and shed some new light.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.