Unable to view catalina.out file in Kibana

I have moved some catalina.out logs from a Production server to the test environment, which is where Elasticsearch is installed.

I am not able to view the logs in Kibana. I have set up an input/filter/output configuration (copied from an online example), but I am not seeing any logs.

input {
  file {
    type => "tomcat"
    path => "/home/catalina/logs/audo01.catalina/catalina.out"
  }
}

filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG} %{IPORHOST:serverip} %{NUMBER:serverport} %{NUMBER:elapsed_millis} %{NOTSPACE:sessionid} %{QS:proxiedip} %{QS:loginame}"
    }
    overwrite => [ "message" ]
    remove_field => [ "ident", "auth" ]
  }
  useragent {
    source => "agent"
    target => "ua"
    remove_field => [ "agent" ]
  }
  mutate {
    # Strip the query string from the request, and the surrounding
    # double quotes from the quoted fields. Single-quoted patterns
    # avoid having to escape the double quotes, and \? escapes the
    # literal question mark.
    gsub => [
      "request", "\?.+", "",
      "proxiedip", '(^"|"$)', "",
      "loginame", '(^"|"$)', "",
      "referrer", '(^"|"$)', ""
    ]
  }
  if [proxiedip] != "-" {
    mutate {
      replace => {
        "clientip" => "%{proxiedip}"
      }
    }
  }
  if ![bytes] {
    mutate {
      add_field => {
        "bytes" => "0"
      }
    }
  }
  mutate {
    remove_field => ["proxiedip"]
  }
  mutate {
    convert => {
      "bytes" => "integer"
      "elapsed_millis" => "integer"
      "serverport" => "integer"
    }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  if "_grokparsefailure" not in [tags] {
    stdout {
      codec => rubydebug
    }
    elasticsearch {
      hosts => "112.22.17.66:9200"
    }
  }
}

Divide and conquer. Is Logstash reading any events from the input file at all? Do you expect it to read the file from the beginning or just new entries? If the latter, are new entries being added? Does the user that Logstash runs as have access to the input logfile?
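One way to answer the first of those questions is with a stripped-down configuration (a debugging sketch, not something from the original thread) that has no filters at all, so any event Logstash reads from the file is printed verbatim:

input {
  file {
    path => "/home/catalina/logs/audo01.catalina/catalina.out"
  }
}

output {
  # Print every event to the console so you can see whether
  # the input stage is producing anything at all.
  stdout {
    codec => rubydebug
  }
}

If nothing is printed, the problem is in the input stage (file position, permissions, or file age), not in the grok filter.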

Magnus,

I expect to read the file from the beginning.
The files are manually moved to a folder.
Yes the user has access to the input logfile.
Do you have a sample input/filter/output configuration that I can try?

I expect to read the file from the beginning.

Then you must set start_position => beginning for the file input. Note that you most likely already have a sincedb file that points to the end of the file, so delete that file or set the file input's sincedb_path option to /dev/null to effectively disable sincedb. If the file is older than 24 hours you also need to adjust the ignore_older option.
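Putting those three points together, the file input would look something like this (the path is taken from the configuration above; the ignore_older value of 30 days is just an illustrative choice):

input {
  file {
    type => "tomcat"
    path => "/home/catalina/logs/audo01.catalina/catalina.out"
    # Read files from the beginning instead of only tailing new lines.
    start_position => "beginning"
    # Disable sincedb so a previously recorded position doesn't make
    # Logstash skip straight to the end of the file.
    sincedb_path => "/dev/null"
    # By default files not modified in the last 24 hours are ignored;
    # allow files up to 30 days old (in seconds) to be picked up.
    ignore_older => 2592000
  }
}

Note that start_position only applies to files Logstash has not seen before, which is why clearing or disabling sincedb matters for files that were already tailed once.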