Unable to configure Elasticsearch to collect logs from an OpenShift cluster

Hello,
I have a problem connecting our OpenShift cluster to Elasticsearch.
I tried using elasticsearch as the input configuration option, but I guess I failed to configure it correctly.
The error on the Elastic side is:
[2024-09-19T15:38:00,091][WARN ][io.netty.channel.DefaultChannelPipeline][RDMT_OCP4][e841abae93d50ab931e28a2e8c9ef422155bd10bd989bddc5e908e4004b89e73] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 83
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 83
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.2.4.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.2.4.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
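For what it's worth, the "version" the beats decoder complains about is just the first byte it read from the connection, so decoding it as ASCII can hint at what actually arrived on the socket. This is only a debugging sketch, not part of the configs below:

```python
# The beats (Lumberjack) frame starts with a one-byte protocol
# version, which must be '1' (49) or '2' (50). When something that
# is not a beats client connects, the first byte of its payload is
# reported in the error instead.
reported = 83  # value from the Logstash error message

print(chr(reported))                 # 'S' -- not a beats version byte
print(chr(reported) in ("1", "2"))   # False
```

Since 83 is not a valid version byte, whatever connected to port 5047 was not speaking the beats protocol at all.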
The error on the OpenShift side:
2024-09-19T12:38:34.640337Z WARN sink{component_kind="sink" component_id=elasticsearch_secure component_type=elasticsearch component_name=elasticsearch_secure}: vector::internal_events::http_client: HTTP error. error=connection closed before message completed error_type="request_failed" stage="processing" internal_log_rate_limit=true

2024-09-19T12:38:34.640404Z WARN sink{component_kind="sink" component_id=elasticsearch_secure component_type=elasticsearch component_name=elasticsearch_secure}: vector::sinks::util::retries: Internal log [Retrying after error.] has been rate limited 3 times.
Also, this is my log forwarder configuration on the OpenShift side:
metadata:
  name: elastic-app-rdmt
  namespace: rdmt-dev-ns
spec:
  outputs:
    - elasticsearch:
        version: 7
      name: elasticsearch-secure
      secret:
        name: elasticsearch
      tls:
        insecureSkipVerify: false
      type: elasticsearch
      url: 'https://elastic_cluster:5047'
  pipelines:
    - inputRefs:
        - application
      name: pipe-elastic-application
      outputRefs:
        - elasticsearch-secure
  serviceAccountName: elasticsearch-application-sa
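As I understand it, an output of type elasticsearch makes the collector (Vector, per the log above) POST documents to the URL using the Elasticsearch bulk HTTP API, not the beats protocol. A rough sketch of what such a bulk payload looks like (the index name and document here are made up for illustration, not what Vector actually sends):

```python
import json

# Minimal sketch of an Elasticsearch _bulk request body: newline-
# delimited JSON, alternating an action line and a document line.
# A beats input cannot parse this, since it expects Lumberjack
# frames rather than an HTTP request.
def bulk_body(index: str, docs: list[dict]) -> str:
    lines = []
    for doc in docs:
        lines.append(json.dumps({"create": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = bulk_body("app-logs", [{"message": "hello from rdmt-dev-ns"}])
print(payload)
```

That would explain the protocol mismatch between the two error messages above, if my listener is in fact expecting beats traffic.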
And the Logstash configuration:
input {
    elasticsearch {
        port => "5047"
        ssl => true
#       ssl_certificate_authorities => ["/etc/logstash/certs/ocp4-ca.pem"]
        ssl_certificate => "/etc/logstash/certs/keystore*****.crt"
        ssl_key => "/etc/logstash/certs/keystore-*****.key"
        ssl_verify_mode => "none"
    }
}
output {
    elasticsearch {
        hosts => ["https://localhost:9200"]
        ssl => true
        ssl_certificate_verification => false
        user => "elastic"
        password => "changeme"
        keystore => "/etc/logstash/certs/keystore_*****.p12"
        keystore_password => "pass"
        truststore => "/etc/logstash/certs/keystore_*****.p12"
        truststore_password => "pass"
        index => "rdmt_ocp4_%{+YYYY.MM.dd}"
    }
}
