Logstash 8.17.1 stuck on starting successfully

I have used a config file with a syslog input receiving logs from OpenShift several times, without changes, and it worked. It has now stopped working for an unknown reason. Attached is the output with traces from a Logstash restart, along with my logstash.yml. Below is the config file that used to work.

My config file from the conf.d directory is below. Does anything stand out? Thanks.

```
input {
  syslog {
    host => "DNSMASTERNODE1"
    port => "5051"
  }
}

filter {
  grok {
    match => ["message", "%{DATA:junk} %{DATA:kubetimestamp} %{DATA:kubehost} %{DATA:logger} %{DATA:junk1} %{DATA:junk2} %{DATA:junk3} %{GREEDYDATA:kubemessage}" ]
    tag_on_failure => [ "invalidLogFormat" ]
  }

  mutate {
    remove_field => ["message"]
  }

  json {
    source => "kubemessage"
  }

  mutate {
    add_field => { "mySource" => "Openshift" }
    remove_field => ["tags","junk","junk1","junk2","junk3"]
  }
}

output {
  elasticsearch {
    hosts => ["https://DNSDATANODE1:9200","https://DNSDATANODE2:9200","https://DNSDATANODE3:9200"]
    index => "myindex-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "password"
    ssl_certificate_authorities => "/etc/logstash/ca.crt"
  }
  stdout { codec => rubydebug }
}
```

What does the logstash log show?
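
With a package (rpm/deb) install and the default paths, the log is usually under /var/log/logstash/, and you can also validate the pipeline config without starting it. A sketch only; adjust paths and the service name for your layout:

```
# Follow the Logstash service log (systemd installs)
journalctl -u logstash -f

# Or tail the plain-text log written to the default log directory
tail -f /var/log/logstash/logstash-plain.log

# Validate the pipeline config and exit without starting the pipeline
sudo -u logstash /usr/share/logstash/bin/logstash \
  --path.settings /etc/logstash \
  --config.test_and_exit
```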

You don't have to name fields and then delete them. Instead of

`%{DATA:junk1} %{DATA:junk2} %{DATA:junk3}`

you could use

`%{DATA} %{DATA} %{DATA}`

although I would suggest trying

`%{NOTSPACE} %{NOTSPACE} %{NOTSPACE}`
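
Put together, the grok filter from your config would look something like this (just a sketch of the suggestion above; the fields you actually keep are unchanged):

```
  grok {
    # Unnamed patterns still have to match, but they are not stored
    # on the event, so there is nothing to remove_field afterwards.
    match => ["message", "%{DATA} %{DATA:kubetimestamp} %{DATA:kubehost} %{DATA:logger} %{NOTSPACE} %{NOTSPACE} %{NOTSPACE} %{GREEDYDATA:kubemessage}" ]
    tag_on_failure => [ "invalidLogFormat" ]
  }
```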

I actually see indices in my data directory, so the connector works. What changed is that I no longer see Kibana -> Index patterns to map my index to a pattern so I can see the data in Kibana. It's strange; I thought this role comes by default in a new installation. Do I need to add kibana_admin?
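
For what it's worth, in 8.x the Kibana "index patterns" page was renamed to data views (Stack Management -> Data Views), so the old menu entry will not appear even when privileges are fine. As a sketch, a data view can also be created through the Kibana API; this assumes Kibana on localhost:5601, and the index title and credentials are placeholders to substitute:

```
# Create a data view (formerly "index pattern") matching the Logstash index
# curl will prompt for the password of the elastic user
curl -u elastic \
  -X POST "http://localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "myindex-*", "timeFieldName": "@timestamp"}}'
```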