Hi,
I'm trying to set up a Logstash pipeline to receive TCP syslog from remote sources, with Logstash running under Elastic Cloud on Kubernetes (ECK) on OpenShift 4.12. The sending sources must connect to TCP port 6514 and use TLS.
This is my configuration; please tell me why I can't receive traffic from outside of the cluster.
Questions:
- Is it possible to forward traffic from outside the cluster into OpenShift with a Route?
- Do I need to use a LoadBalancer Service with MetalLB instead, and is it possible to use plain TCP with a LoadBalancer? (See the sketch after this list.)
- I can curl the route and make a connection, see below. Does this mean that I can forward traffic to Logstash now?
- Can this be done with another technique?
- I can see in Kibana that the connection is made, but no syslog data is forwarded, see picture below.
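Regarding the MetalLB question: this is roughly the kind of LoadBalancer Service I had in mind as an alternative to the Route. It is only a rough sketch I have not tested; the Service name and the address-pool annotation value are placeholders for whatever actually exists in the cluster.

apiVersion: v1
kind: Service
metadata:
  name: logstash-syslog-lb
  namespace: elastic-dev
  annotations:
    # placeholder: name of an existing MetalLB address pool (the annotation is optional)
    metallb.universe.tf/address-pool: default
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  type: LoadBalancer
  ports:
    # plain TCP on 6514, so the senders would not need to speak HTTP or send SNI
    - name: syslog-tls
      port: 6514
      protocol: TCP
      targetPort: 6514
  selector:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash

The idea would be that MetalLB assigns an external IP and the remote senders point directly at that IP on port 6514, but I don't know if this is the right approach.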
Thanks!
This is what I get when I use curl. It seems that I can open a session to the Logstash TCP port through the Route, but no traffic is received by Logstash or forwarded to Elasticsearch.
https://logstash.xxx.xxx.xxx
* Trying 10.10.xxx.xxx:443...
* Connected to logstash.dev.xxx.xxx (10.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: ca.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, CERT verify (15):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.dev.xxx.xxx.xxx
* start date: Feb 22 08:50:35 2023 GMT
* expire date: Feb 21 08:50:35 2026 GMT
* subjectAltName: host "logstash.xxx.xxx.xxx" matched cert's "*.dev.xxx.xxx.xxx"
* issuer: DC=com; DC=xxx; CN=Issue CA xxx
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: logstash.xxx.xxx.xxx
> User-Agent: curl/7.79.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
I can write here
Does this work now?
(The last two lines are text I typed manually into the open curl session.)
This is my deployment configuration!
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input {
      tcp {
        port => 6514
        type => syslog
        ssl_cert => '/etc/logstash/certificates/tls.crt'
        ssl_certificate_authorities => ['/etc/logstash/certificates/ca.crt']
        ssl_key => '/etc/logstash/certificates/key/tls.key'
        ssl_enable => true
        ssl_verify => true
      }
    }
    filter {
      grok {
        match => { "message" => "%{GREEDYDATA:message}" }
      }
      geoip {
        source => "clientip"
        target => "clientgeo"
      }
    }
    output {
      elasticsearch {
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        cacert => '/etc/logstash/certificates/ca.crt'
        index => "logstash-beta-%{+YYYY.MM.dd}"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: eck-logstash
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eck-logstash
        app.kubernetes.io/component: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.6.1
          ports:
            - name: "tcp-beats"
              containerPort: 5044
            - name: "https"
              containerPort: 6514
              protocol: TCP
          env:
            - name: ES_HOSTS
              value: "https://esdev-data-ingest.elastic-dev.svc:9200"
            - name: ES_USER
              value: "elastic"
            - name: ES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: esdev-es-elastic-user
                  key: elastic
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/logstash/config
            - name: pipeline-volume
              mountPath: /usr/share/logstash/pipeline
            - name: ca-certs
              mountPath: /etc/logstash/certificates
              readOnly: true
            - name: tls-key
              mountPath: /etc/logstash/certificates/key
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: logstash-config
        - name: pipeline-volume
          configMap:
            name: logstash-pipeline
        - name: ca-certs
          secret:
            secretName: esdev-es-http-certs-public
        - name: tls-key
          secret:
            secretName: esdev-es-http-private-key
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elastic-dev
  labels:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
spec:
  ports:
    - name: "https"
      port: 6514
      protocol: TCP
      targetPort: 6514
  selector:
    app.kubernetes.io/name: eck-logstash
    app.kubernetes.io/component: logstash
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: logstash-route
  namespace: elastic-dev
spec:
  host: logstash.dev.xxx.xxx.xxx
  port:
    targetPort: https
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
  to:
    kind: Service
    name: logstash
Again, I can see in Kibana that the connection is made, but no syslog data is forwarded to Logstash / Elasticsearch.