Elastic cloud on kubernetes - logstash question

Hi,

I'm trying to set up a Logstash pipeline to receive TCP syslog from remote sources, running Logstash under Elastic Cloud on Kubernetes (ECK) on OpenShift 4.12. The sending source must send to TCP port 6514 and use TLS for the connection.

This is my configuration, please tell me why I can't receive traffic from outside of the cluster.

Questions:

  1. Is it possible to forward traffic from outside the cluster into OpenShift with a Route?
  2. Do I need to use a LoadBalancer with MetalLB, and is it possible to use TCP with a LoadBalancer?
  3. I can curl the route and make a connection, see below. Does this mean that I can forward traffic to Logstash now?
  4. Can this be done with another technique?
  5. I can see in Kibana that the connection is made, but no syslog data is forwarded, see picture below.

Thanks!

This is the output when I use curl. It seems that I can open a session to the Logstash TCP port, but no traffic is received by Logstash or forwarded to Elasticsearch.

https://logstash.xxx.xxx.xxx
*   Trying 10.10.xxx.xxx:443...
* Connected to logstash.dev.xxx.xxx (10.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: ca.crt
*  CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.dev.xxx.xxx.xxx
*  start date: Feb 22 08:50:35 2023 GMT
*  expire date: Feb 21 08:50:35 2026 GMT
*  subjectAltName: host "logstash.xxx.xxx.xxx" matched cert's "*.dev.xxx.xxx.xxx"
*  issuer: DC=com; DC=xxx; CN=Issue CA xxx
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: logstash.xxx.xxx.xxx
> User-Agent: curl/7.79.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
I can write here
Does this work now?

This is my deployment configuration!

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input {
      tcp {
        port => 6514
        type => syslog
        ssl_cert => '/etc/logstash/certificates/tls.crt'
        ssl_certificate_authorities => ['/etc/logstash/certificates/ca.crt']
        ssl_key => '/etc/logstash/certificates/key/tls.key'
        ssl_enable => true
        ssl_verify => true
      }
    }
    filter {
      grok {
        match => { "message" => "%{GREEDYDATA:message}"}
      }
      geoip {
        source => "clientip"
        target => "clientgeo"
      }
    }
    output {
      elasticsearch {
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        cacert => '/etc/logstash/certificates/ca.crt'
        index => "logstash-beta-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: eck-logstash
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eck-logstash
        app.kubernetes.io/component: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.6.1
          ports:
            - name: "tcp-beats"
              containerPort: 5044
            - name: "https"
              containerPort: 6514
              protocol: TCP
          env:
            - name: ES_HOSTS
              value: "https://esdev-data-ingest.elastic-dev.svc:9200"
            - name: ES_USER
              value: "elastic"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: esdev-es-elastic-user
                  key: elastic
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - name: ca-certs
              mountPath: /etc/logstash/certificates
              readOnly: true
            - name: tls-key
              mountPath: /etc/logstash/certificates/key
              readOnly: true  
      volumes:
        - name: config-volume
          configMap:
            name: logstash-config
        - name: pipeline-volume
          configMap:
            name: logstash-pipeline
        - name: ca-certs
          secret:
            secretName: esdev-es-http-certs-public
        - name: tls-key
          secret:
            secretName: esdev-es-http-private-key
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elastic-dev
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  ports:
    - name: "https"
      port: 6514
      protocol: TCP
      targetPort: 6514
  selector:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: logstash-route
  namespace: elastic-dev
spec:
  host: logstash.dev.xxx.xxx.xxx
  port:
    targetPort: https
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: logstash

I can see that the connection is made in kibana, but no data/syslog is forwarded to logstash / elasticsearch.

Any input from the community?

Hello,

Please avoid bumping your post; there is no SLA on the forum, and not even 24 hours have passed since your question.

This is unrelated to Logstash. As I said on your previous question, this is a network/infrastructure issue; you need to check with an OpenShift community.

If you can connect from outside your OpenShift cluster to the Logstash port, then you probably can send data to Logstash.

Not sure what you mean by this.

The screenshot you shared shows that Logstash was able to receive data and send it to Elasticsearch. If your syslog still can't send data, you need to troubleshoot it; maybe you have a connectivity issue between the syslog source and Logstash.

Do you have any error/warn logs in the Logstash log? If you do, please share them.

Hi @leandrojmp

Thank you for the reply!

I've been on it for a couple of days without success and been frustrated; sorry for the bump.

I will forward the request to the OpenShift community as well.

This is the latest Logstash log from when I try to connect with curl or from the web browser.

Maybe this is related to the Logstash TCP input? Maybe I should try with SSL disabled.

Caused by: javax.crypto.BadPaddingException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
        at sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1898) ~[?:?]
        at sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:239) ~[?:?]
        at sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:196) ~[?:?]
        at sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:159) ~[?:?]
        at sun.security.ssl.SSLTransport.decode(SSLTransport.java:111) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
        at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
        at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:298) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1338) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1280) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 17 more
[ERROR] 2023-03-01 13:11:18.512 [nioEventLoopGroup-2-13] tcp - /10.128.0.2:36320: closing due:
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Empty client certificate chain
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Empty client certificate chain
        at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
        at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:314) ~[?:?]
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:305) ~[?:?]
        at sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1194) ~[?:?]
        at sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1181) ~[?:?]
        at sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:396) ~[?:?]
        at sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:480) ~[?:?]
        at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1277) ~[?:?]
        at sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1264) ~[?:?]
        at java.security.AccessController.doPrivileged(AccessController.java:712) ~[?:?]
        at sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1209) ~[?:?]
        at io.netty.handler.ssl.SslHandler.runAllDelegatedTasks(SslHandler.java:1512) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1526) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1390) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1234) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1280) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 17 more

curl won't work for your test; it will only test the connection. This is a TCP input and curl speaks HTTP, which is a little different. You need to configure your syslog source to use TCP with SSL and see if it can send data.
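To exercise a raw TLS TCP input more realistically than curl, something like `openssl s_client -connect <host>:6514 -CAfile ca.crt -cert client.crt -key client.key` gets closer, or a small script can play the role of the syslog sender. Below is a rough Python sketch (not from this thread; the hostnames and certificate paths are placeholders). Note that because the input in the original config sets `ssl_verify => true`, the client must present a certificate that Logstash trusts, otherwise you get exactly the "Empty client certificate chain" error shown above.

```python
import socket
import ssl

def build_syslog_line(pri: int, host: str, app: str, msg: str) -> str:
    """Build a minimal RFC 3164-style syslog line; newline framing is an
    assumption that matches the tcp input's default line-oriented codec."""
    return f"<{pri}>{host} {app}: {msg}\n"

def send_over_tls(server: str, port: int, line: str,
                  ca: str = "ca.crt",
                  cert: str = "client.crt",
                  key: str = "client.key") -> None:
    """Open a mutual-TLS connection and send one syslog line.

    The thread's input has ssl_verify => true, so the client has to
    present a certificate signed by a CA listed in
    ssl_certificate_authorities on the Logstash side.
    """
    ctx = ssl.create_default_context(cafile=ca)
    ctx.load_cert_chain(certfile=cert, keyfile=key)
    with socket.create_connection((server, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=server) as tls:
            tls.sendall(line.encode("utf-8"))

# Example (placeholder host):
# send_over_tls("logstash.dev.example.com", 6514,
#               build_syslog_line(134, "myhost", "testapp", "hello over TLS"))
```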

Hi @leandrojmp

We ended up opening UDP traffic on port 514.

We couldn't create a secure channel between the network device and OpenShift, since it's not possible to open a custom port on the OpenShift platform. It's either a custom port with an unsecured TCP connection, or an ingress route, and those are exposed only on ports 80 and 443.

Thanks.

Paging @Sunile_Manjee :slight_smile:

Perhaps a place for 1ClickECK

  1. Do I need to use a LoadBalancer with MetalLB, and is it possible to use TCP with a LoadBalancer?

In K8s land, if you want to expose an application and its port, you can use a Service of type LoadBalancer. In OpenShift I believe the default is HAProxy. Run kubectl get service and get the service endpoint.
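As a sketch of that (reusing the labels from the config earlier in the thread; the Service name here is hypothetical), the existing ClusterIP Service switched to type LoadBalancer would look something like:

```
apiVersion: v1
kind: Service
metadata:
  name: logstash-lb          # hypothetical name
  namespace: elastic-dev
spec:
  type: LoadBalancer         # MetalLB assigns the external IP
  ports:
    - name: syslog-tls
      port: 6514
      protocol: TCP
      targetPort: 6514
  selector:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
```

kubectl get service logstash-lb would then show the EXTERNAL-IP to point senders at.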

One possible way

  1. Open the OpenShift web console and navigate to the project where your service is deployed.
  2. Click on the "Services" tab in the left-hand menu.
  3. Find the service you deployed and click on its name to open its details page.
  4. On the service details page, you should see a section called "Endpoints". This section will list the IP addresses and ports of the pods that are backing the service.
  5. Copy one of the IP addresses listed in the "Endpoints" section and append the port number (6514 in your case) to the end of the IP address.
  6. This URL should now be the service endpoint URL that you can use to access your load balancer service.

Then test the connection:

telnet <hostname or IP address> 6514

Have you tried this?


Hi @Sunile_Manjee
Thank you for the detailed guide to expose the logstash endpoint with loadbalancer.

I have done this before and it works with the LoadBalancer service. I'm using MetalLB and I can access Logstash over a TCP connection. However, I need the traffic to be encrypted, because the sending Cisco FMC device is hardcoded to send audit logs with:

  1. syslog UDP on port 514
  2. TCP/TLS on port 6514.

When I use a terminal to telnet to the LoadBalancer IP on port 6514, I'm able to transfer traffic to Logstash. But since the LoadBalancer service only offers an unencrypted TCP connection, the transfer between the Cisco FMC and Logstash won't get through.

This is my conclusion from the research I have done. If this can be done, I will happily try it out.

Thanks!

My apologies if I didn't understand your challenge previously. Still making sure I get it.

You can forward your secure traffic to a load balancer, terminate TLS there, and forward the traffic on to Logstash.

  1. Configure your load balancer to terminate SSL/TLS connections on port 6514 and forward the decrypted traffic to the Logstash instances. You will need to provide the SSL/TLS certificate and private key of the load balancer to your clients so that they can establish a secure connection.
  2. Configure your Cisco FMC to send data to the load balancer using the TLS protocol on port 6514. You will need to provide the SSL/TLS certificate of the load balancer to the Cisco FMC so that it can verify its identity and establish a secure connection.
  3. Configure your Logstash instances to listen for TCP traffic on port 6514 without SSL/TLS encryption. This is because the load balancer will be handling the SSL/TLS encryption and forwarding the decrypted traffic to the Logstash instances.
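For step 3, the tcp input from earlier in the thread would simply drop its ssl_* options, e.g. (a sketch):

```
input {
  tcp {
    port => 6514
    type => syslog
    # no ssl_* settings here: the load balancer has already
    # terminated TLS and forwards plain TCP to this port
  }
}
```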

You may be able to do SSL passthrough as well. You'll need to provide the SSL/TLS certificate of Logstash to the Cisco FMC so that it can verify its identity and establish a secure connection.

```
input {
  tcp {
    port => 6514
    ssl_enable => true
    ssl_cert => "/path/to/logstash.crt"
    ssl_key => "/path/to/logstash.key"
  }
}
```

Hi @Sunile_Manjee

Thank you for the deep and well-formulated answer.

However, I have done all of the steps above; the part I can't figure out is how to terminate TCP/TLS on the MetalLB load balancer. The load balancer will forward the traffic to the Logstash pod, where the SSL will be decrypted, I guess? That's why we are providing the certificate on the input part of the Logstash pipeline?

Thank you.

Do you want to terminate the ssl on MetalLB or have it passthrough to logstash?

From my research, I see that MetalLB, by itself, is a Layer 2 and Layer 3 load balancer for Kubernetes, which means it operates at the network and IP layers, not the application layer (Layer 7). As such, it doesn't provide SSL termination functionality.

To achieve SSL termination with MetalLB, you would need to use an additional Layer 7 load balancer or reverse proxy that supports SSL termination. Nginx is one of the popular choices, but you can also use other options like HAProxy.

You would deploy the Layer 7 load balancer (e.g., Nginx) as a Kubernetes service, with MetalLB assigned to manage its external IP address. Then, you would configure the load balancer to terminate SSL and forward unencrypted traffic to Logstash. The MetalLB load balancer would route external traffic to the Layer 7 load balancer, which then takes care of the SSL termination and proxying.
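As a rough sketch of that layout (untested; the certificate paths and the in-cluster service DNS name are assumptions), an nginx stream block that terminates TLS on 6514 and proxies plain TCP to the Logstash Service from this thread could look like:

```
stream {
    server {
        listen 6514 ssl;
        ssl_certificate     /etc/nginx/tls/tls.crt;   # cert the Cisco FMC will verify
        ssl_certificate_key /etc/nginx/tls/tls.key;
        # forward decrypted TCP to the ClusterIP Service
        proxy_pass logstash.elastic-dev.svc.cluster.local:6514;
    }
}
```

The nginx Deployment itself would then be exposed externally with a MetalLB LoadBalancer Service on port 6514.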

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.