Splunk data ingestion into Elasticsearch via Logstash behind NLB

@Badger Sorry for the delayed reply. We had a dependency on an external team to collect the Splunk forwarder configuration. It turns out the forwarder uses a TCP output, not syslog format. Below is the Splunk forwarder configuration, followed by the Logstash configuration. Kindly suggest the changes required to get this data ingested.

Splunk forwarder configuration:

Current outputs.conf:

[tcpout]
forwardedindex.filter.disable = false
forwardedindex.0.blacklist =
forwardedindex.0.whitelist =
forwardedindex.1.blacklist =
forwardedindex.1.whitelist =
forwardedindex.2.whitelist =
forwardedindex.2.blacklist =
#forwardedindex.1.whitelist = test
#indexAndForward = true
maxQueueSize = 32MB
defaultGroup = EXT_cluster_or1,customer-server1

[tcpout:EXT_cluster_or1]
maxQueueSize = 32MB
sslCertPath = $SPLUNK_HOME/etc/auth/uf.pem
sslPassword =
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.crt

# May want to change this
sslVerifyServerCert = false
autoLBFrequency = 60
forceTimebasedAutoLB = true
useACK = false
server = splunk-ext-d.xyz.net:9998

[tcpout:customer-server1]
# Load balancer IPs behind logstash.abc.com
server = logstash.abc.com:443,54.123.10.100:443,52.123.86.115:443
sendCookedData = false
disabled = false
clientCert = /opt/splunkforwarder/etc/auth/mycerts/99bd8abc715cb7f7.pem
sslVerifyServerCert = true
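One gap we noticed while collecting this: sslVerifyServerCert = true is set in the customer-server1 stanza, but no sslRootCAPath is set there, so unless a CA is configured globally the forwarder may have nothing to validate the Logstash/NLB certificate against. Below is a sketch of the stanza as we think it should look, assuming a hypothetical CA bundle path (the ca.pem below is a placeholder, not an actual file on our host):

[tcpout:customer-server1]
server = logstash.abc.com:443,54.123.10.100:443,52.123.86.115:443
sendCookedData = false
disabled = false
clientCert = /opt/splunkforwarder/etc/auth/mycerts/99bd8abc715cb7f7.pem
# CA bundle used to validate the certificate presented by Logstash/NLB (placeholder path)
sslRootCAPath = /opt/splunkforwarder/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true

As we understand it, sendCookedData = false means the forwarder sends raw events rather than Splunk's cooked S2S protocol, which is why we expect a plain tcp input on the Logstash side to be able to read the stream.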

Current Logstash conf:
input {
  tcp {
    port => 5044
  }
}

output {
  file {
    path => "/etc/logstash/conf.d/out.log"
  }
  elasticsearch {
    hosts => ["https://abc-elk-es.abc.com:443"]
    index => "splunkindex"
    user => "elkadmin"
    password => "***********"
    ilm_enabled => false
  }
  stdout { codec => rubydebug }
}
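Since the forwarder is configured with a clientCert and sslVerifyServerCert = true, we assume it will try to negotiate TLS on connect, while our current tcp input listens in plain TCP. Below is a minimal sketch of a TLS-enabled input we are considering; the certificate and key paths are placeholders, and the option names may differ by tcp input plugin version (newer releases rename ssl_enable/ssl_cert to ssl_enabled/ssl_certificate):

input {
  tcp {
    port => 5044
    ssl_enable => true
    # Server certificate and key presented to the forwarder (placeholder paths)
    ssl_cert => "/etc/logstash/certs/logstash-server.crt"
    ssl_key => "/etc/logstash/certs/logstash-server.key"
    # Set to true to require and validate the forwarder's client certificate
    ssl_verify => false
  }
}

Note also that the forwarder targets port 443 on the load balancer while Logstash listens on 5044, so the NLB listener would need to forward 443 to 5044 as plain TCP passthrough, so the TLS session terminates on Logstash rather than on the NLB.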