Logstash pipeline port is not visible outside OpenShift/Kubernetes

Hello all -

I have set up an ELK cluster on OpenShift/Kubernetes with TLS, and all components work
fine. But when I point Filebeat at my Logstash pipeline port (exposed through a Service
with a NodePort), Filebeat cannot reach the port and gets connection refused. A Filebeat
instance running inside the cluster works fine; the problem is only when publishing from
outside the cluster. Filebeat's Logstash output expects a TCP address of the form
`logstash:<port>`. If I use `logstash:443`, I get a Lumberjack protocol error; if I use
`logstash:<nodeport>`, I get connection refused. I suspect the OpenShift route itself
listens on port 443 and forwards to my port, but Filebeat falls back to the default
port 5044 if I omit the port number. How do I overcome this?


Thanks

What do your Logstash and Filebeat configurations look like? What is running where?



**Here is the Logstash deployment config:**

```yaml
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: logstash2
  namespace: elk-test-tls
  selfLink: >-
    /apis/apps.openshift.io/v1/namespaces/elk-test-tls/deploymentconfigs/logstash2
  uid: 371d8e2d-794f-11ea-a822-0a58ac140205
  resourceVersion: '59513768'
  generation: 13
  creationTimestamp: '2020-04-08T04:12:52Z'
  labels:
    app.kubernetes.io/part-of: elasticsearch2
    run: logstash2
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
    activeDeadlineSeconds: 21600
  triggers:
    - type: ConfigChange
  replicas: 1
  revisionHistoryLimit: 10
  test: false
  selector:
    run: logstash2
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: logstash2
    spec:
      volumes:
        - name: volume-rz4kz
          configMap:
            name: logstash2
            defaultMode: 420
        - name: volume-l5tnc
          emptyDir: {}
        - name: logstash2-certs-claim
          persistentVolumeClaim:
            claimName: logstash2-certs-claim
        - name: logstash2-pipeline-claim
          persistentVolumeClaim:
            claimName: logstash2-pipeline-claim
      containers:
        - resources: {}
          terminationMessagePath: /dev/termination-log
          name: logstash2
          command:
            - bash
            - /etc/logstash2/logstash2-wrapper.sh
          env:
            - name: LOGSTASH_HOME
              value: /usr/share/logstash
            - name: xpack.monitoring.elasticsearch.hosts
              value: 'https://est03.elk-test-tls.svc.cluster.local'
            - name: path.config
              value: /usr/share/logstash/pipeline-config
            - name: http.host
              value: 0.0.0.0
            - name: node.name
              value: logstash2.elk-test-tls.svc.cluster.local
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: volume-rz4kz
              mountPath: /etc/logstash2
            - name: volume-l5tnc
              mountPath: /var/log2
            - name: logstash2-certs-claim
              mountPath: /usr/share/logstash/config/certs
            - name: logstash2-pipeline-claim
              mountPath: /usr/share/logstash/pipeline-config
          terminationMessagePolicy: File
          image: 'logstash:7.5.0'
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
status:
  observedGeneration: 13
  details:
    message: config change
    causes:
      - type: ConfigChange
  availableReplicas: 1
  unavailableReplicas: 0
  latestVersion: 9
  updatedReplicas: 1
  conditions:
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-04-08T04:37:32Z'
      lastTransitionTime: '2020-04-08T04:37:29Z'
      reason: NewReplicationControllerAvailable
      message: replication controller "logstash2-9" successfully rolled out
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-04-08T04:40:17Z'
      lastTransitionTime: '2020-04-08T04:40:17Z'
      message: Deployment config has minimum availability.
  replicas: 1
  readyReplicas: 1
```


**Here is my logstash2 service:**

```yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash2
  namespace: elk-test-tls
  selfLink: /api/v1/namespaces/elk-test-tls/services/logstash2
  uid: 74a2c00f-7a87-11ea-a0f4-0050568fc9a0
  resourceVersion: '60394210'
  creationTimestamp: '2020-04-09T17:27:58Z'
  labels:
    app.kubernetes.io/part-of: elasticsearch2
    run: logstash2
spec:
  ports:
    - protocol: TCP
      port: 5033
      targetPort: 8000
      nodePort: 31910
  selector:
    pod: logstash2
  clusterIP: 172.30.225.143
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}
```
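
Two details in this Service look inconsistent with the rest of the thread and are worth checking. First, the selector is `pod: logstash2`, but the Deployment's pod template only carries the label `run: logstash2`; a selector that matches no pods leaves the Service with no endpoints, which produces exactly a connection refused. Second, `targetPort: 8000` does not match the port the beats input listens on (5033, per the pipeline posted later in the thread). A sketch of a Service aligned with those values — an assumption based on the configs in this thread, not a tested manifest:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: logstash2
  namespace: elk-test-tls
spec:
  type: LoadBalancer
  selector:
    run: logstash2        # must match the pod template label, not "pod: logstash2"
  ports:
    - protocol: TCP
      port: 5033          # Service port inside the cluster
      targetPort: 5033    # must match the beats input port in the pipeline
      nodePort: 31910     # the port Filebeat dials from outside the cluster
```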


**Here is my Filebeat config:**

```yaml
  hosts: ["logstash2-elk-test-tls.apps.devops3.flex.com:31910"]
  ssl.certificate_authorities: ["/cert/ca.crt"]
  ssl.certificate: "/cert/logstash2.crt"
  ssl.key: "/cert/logstash2.pkcs8.key"
```

Can anyone please help?

What does your Logstash pipeline look like?

**Here is my pipeline:**

```
input {
  beats {
    port => 5033
    ssl => true
    ssl_key => '/usr/share/logstash/config/certs/logstash1.pkcs8.key'
    ssl_certificate => '/usr/share/logstash/config/certs/logstash1.crt'
  }
}
filter {
}
output {
  if [fields][log_type] == "rp_ams1" {
    elasticsearch {
      hosts => ["https://est01.elk-test-tls2.svc.cluster.local"]
      index => "rp_ams1_log-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
      cacert => '/usr/share/logstash/config/certs/ca.crt'
      user => 'logstash_writer'
      password => 'xxxx'
    }
  }
}
```
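
A separate note on TLS: the Filebeat config above presents a client certificate (`ssl.certificate` / `ssl.key`), but the beats input configures neither a CA nor peer verification, so that client certificate is never checked. If mutual TLS is intended, the input would also need CA and verify-mode options, sketched below — an assumption reusing the CA path mounted elsewhere in this thread:

```
input {
  beats {
    port => 5033
    ssl => true
    ssl_key => '/usr/share/logstash/config/certs/logstash1.pkcs8.key'
    ssl_certificate => '/usr/share/logstash/config/certs/logstash1.crt'
    # Hedged addition: trust and require the client certificate Filebeat presents
    ssl_certificate_authorities => ['/usr/share/logstash/config/certs/ca.crt']
    ssl_verify_mode => "force_peer"
  }
}
```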

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.