Logstash not producing logs

The command I am executing: ./logstash -f Logstash.conf
Logstash.conf has the content below:

input {
  file {
    path => "/tibco/tra/domain/TST36/application/logs/B2BHFAMAdapter-TST36V3B2BHFAMAdapterNewTST36-050000-1.log"
  }
}
output {
  file {
    path => "/tibco/Logstash/outlog.log"
  }
}

The console output is as follows:
[tibadmin@sz3072 bin]$ ./logstash -f Logstash.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /tibco/Logstash/logstash-7.3.1/logs which is now configured via log4j2.properties
[2019-09-06T15:36:43,550][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-09-06T15:36:43,573][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.1"}
[2019-09-06T15:36:45,712][INFO ][org.reflections.Reflections] Reflections took 77 ms to scan 1 urls, producing 19 keys and 39 values
[2019-09-06T15:36:46,820][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-09-06T15:36:46,828][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>40, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>5000, :thread=>"#<Thread:0x640616ce run>"}
[2019-09-06T15:36:47,510][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/tibco/Logstash/logstash-7.3.1/data/plugins/inputs/file/.sincedb_b51ceb84388504f6fa7075c67a63ad57", :path=>["/tibco/tra/domain/TST36/application/logs/B2BHFAMAdapter-TST36V3B2BHFAMAdapterNewTST36-050000-1.log"]}
[2019-09-06T15:36:47,571][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-06T15:36:47,684][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>}
[2019-09-06T15:36:47,696][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-06T15:36:48,328][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>53680}

Please suggest what I am doing wrong here.

Thanks
Rachit Gargava

By default a file input will skip to the end of the file and wait for new data to be appended to it. If you want to read an existing file from the beginning, you need the start_position option, and possibly the sincedb_path option.
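As a sketch, the input section above could look like this. Note that start_position only applies to files Logstash has not seen before; pointing sincedb_path at /dev/null discards the recorded read position, so the whole file is re-read on every run (useful for testing, usually not for production):

```conf
input {
  file {
    path => "/tibco/tra/domain/TST36/application/logs/B2BHFAMAdapter-TST36V3B2BHFAMAdapterNewTST36-050000-1.log"
    # Read the file from the beginning instead of tailing it
    start_position => "beginning"
    # Do not persist the read position, so the file is re-read each run
    sincedb_path => "/dev/null"
  }
}
```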
