Logstash not picking up the file after placing it in the input path

I have placed the config.properties file with the log file path as the input, but whenever a new file is put in that location it does not fetch the latest file. My conf is as below:

input {
  file {
    path => "/mnt/storage/logs/*"
    type => "apache-access"    # a type to identify those logs (will need this later)
    start_position => "beginning"
    discover_interval => 420
    stat_interval => 10
    ignore_older => 3000
  }
}

Have you tried restarting logstash?

Have you looked at /var/log/logstash/logstash-plain.log to see if there are any errors that you are not picking up?

Hi Jasonespo,

I have tried restarting it, but no luck.

Also, there are no errors in the log file.

Why don't you use Filebeat to ship your logs from a file as this is its purpose?

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-getting-started.html
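
For reference, a minimal Filebeat configuration for this kind of setup might look something like the below (the paths and the Logstash beats port are assumptions, not taken from your environment, and the exact keys depend on your Filebeat version):

filebeat.inputs:
  - type: log
    # read the same directory the Logstash file input is watching
    paths:
      - /mnt/storage/logs/*
output.logstash:
  # assumes a beats input listening on this host/port in your Logstash pipeline
  hosts: ["localhost:5044"]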

I will use Filebeat once the format of the logs is finalised, as this is for production; I'm planning to do that sometime this month, but I want this to work for the log file as well.

Run with "--log.level trace" to see what filewatch thinks is happening.

Where do we set that?

Add

--log.level trace

to the command line, or else find the line in /etc/logstash/logstash.yml that says

# log.level: info

and change it to

log.level: trace
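
For example, from the command line it might look like the below (the binary and pipeline config paths are assumptions based on a typical package install; adjust them to your layout):

# run Logstash in the foreground with trace-level logging
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/your-pipeline.conf --log.level trace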

I am getting the below when I run with debug:

[2019-05-13T22:46:33,094][TRACE][filewatch.tailmode.handlers.grow] reading... {"iterations"=>1, "amount"=>15660, "filename"=>"fty.txt"}
[2019-05-13T22:46:33,095][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk

It also showed the below, but it didn't write into Elasticsearch, nor is it present in Kibana:

[2019-05-13T22:46:33,098][DEBUG][logstash.inputs.file ] Received line {:path=>"/mnt/storage/logs/fty.txt", :text=>"<L:RECORD><L:EPOCH>....etc

Further details are below; not sure why it is not picking up the file:

[2019-05-13T22:51:24,983][DEBUG][logstash.outputs.file ] Starting flush cycle
[2019-05-13T22:51:25,219][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-05-13T22:51:26,066][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-05-13T22:51:26,066][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-05-13T22:51:26,983][DEBUG][logstash.outputs.file ] Starting flush cycle
[2019-05-13T22:51:28,983][DEBUG][logstash.outputs.file ] Starting flush cycle
[2019-05-13T22:51:29,326][DEBUG][logstash.outputs.file ] Starting stale files cleanup cycle {:files=>{}}
[2019-05-13T22:51:29,326][DEBUG][logstash.outputs.file ] 0 stale files found {:inactive_files=>{}}
[2019-05-13T22:51:30,219][DEBUG][org.logstash.execution.PeriodicFlush] Pushing flush onto pipeline.
[2019-05-13T22:51:30,983][DEBUG][logstash.outputs.file ] Starting flush cycle
[2019-05-13T22:51:31,069][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2019-05-13T22:51:31,069][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2019-05-13T22:51:32,983][DEBUG][logstash.outputs.file ] Starting flush cycle
[2019-05-13T22:51:33,175][TRACE][filewatch.tailmode.processor] Delayed Delete processing
[2019-05-13T22:51:33,175][TRACE][filewatch.tailmode.processor] Watched + Active restat processing
[2019-05-13T22:51:33,175][TRACE][filewatch.tailmode.processor] Rotation In Progress processing
[2019-05-13T22:51:33,175][TRACE][filewatch.tailmode.processor] Watched processing
[2019-05-13T22:51:33,175][TRACE][filewatch.tailmode.processor] Active - no change {"watched_file"=>"<FileWatch::WatchedFile: @filename='fty.txt', @state='active', @recent_states='[:watched, :watched]', @bytes_read='32226604', @bytes_unread='0', current_size='32226604', last_stat_size='32226604', file_open?='true', @initial=false, @sincedb_key='406242 0 64770'>"}
[2019-05-13T22:51:34,984][DEBUG][logstash.outputs.file ] Starting flush cycle

If you got that line then Logstash got an event, in which case your problem is not with the file input; it is elsewhere in the configuration. What does the rest of the configuration look like?

This is my config

input {
  file {
    path => "/mnt/storage/logs/*"
    type => "apache-access"    # a type to identify those logs (will need this later)
    start_position => "beginning"
    discover_interval => 420
    stat_interval => 10
  }
}

filter {
  xml {
    source => "message"
    store_xml => false
    xpath => [
      "//RECORD/MESSAGEID/text()", "logeventid",
      "//RECORD/CATEGORY/text()", "capability",
      "//RECORD/TEXT/text()", "actual_message",
      "//RECORD/DATE/text()", "message_date",
      "//RECORD/TIME/text()", "message_time",
      "//RECORD/E2EDATA/text()", "e2eData",
      "//RECORD/SERVER/text()", "server",
      "//RECORD/PORT/text()", "port"
    ]
  }

  kv {
    source => "e2eData"
    field_split => ",="
    remove_field => ["E2E.threadID", "E2E.busTxnHdr"]
  }

  if ([message] !~ "Access_298") {
    drop { }
  }

  mutate {
    lowercase => [ "capability" ]
  }

  mutate {
    add_field => {
      "message_dateTime" => "%{message_date}T%{message_time}"
      # 2017-11-15T05:42:29.485
    }
    remove_field => ["message_date", "message_time", "host"]
  }
}

output {
  stdout { codec => rubydebug }

  file {
    path => "/mnt/storage/logs/process.txt"
  }

  elasticsearch {
    hosts => "http://:port"
    index => "%{capability}"
  }
}

You will ignore any records that do not contain the string "Access_298". Are you sure there are records that contain that? Do you see them in the file+rubydebug output?

kv { source => "e2eData" field_split => ",=" remove_field => ["E2E.threadID","E2E.busTxnHdr"] }

I do not see anything that would create those 2 fields, so why do you try to remove them?

elasticsearch { hosts => "http://:port" index => "%{capability}" }

That hosts option does not look right to me. And if xpath failed to parse capability this might not work.
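
For comparison, a hosts setting normally carries a real host and port, typically as an array, e.g. (the host and port here are placeholders, not your values):

elasticsearch {
  # placeholder URL; substitute your actual Elasticsearch host and port
  hosts => ["http://localhost:9200"]
  index => "%{capability}"
}

Also note that if capability was never set, the sprintf reference is not substituted and the index name would literally be %{capability}.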

Are you able to share an example of the rubydebug with as little as possible redacted?

Hi Badger,

Thanks for the reply

Regarding the host and port, I have just given an example as I didn't want to expose the actual host; my hosts look like http://localhost:61000

And the string doesn't have that Access_298; it has some other data.

May I know which example you are looking for?

Any further help on this, please?

If it does not contain Access_298 then you call a drop {} filter, which discards it, so it will not get indexed.

Hi Badger,

We don't have those entries; we have the below, which should ideally get processed as it doesn't satisfy the above condition:

<L:MESSAGEID>RS-RoBTESB_MPA-CAPAD_2983</L:MESSAGEID>

Then your test is backwards. You are dropping any events that do not match Access_298. Did you mean to write =~ rather than !~ perhaps?
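
A sketch of what that would look like, assuming the intent is to keep everything that does not contain Access_298 and drop the rest:

# drop only the events whose message matches Access_298
if [message] =~ "Access_298" {
  drop { }
}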