I realize a 2015 MacBook Pro has its limits, so I plan to move this onto a proper cluster soon. Fundamentally, though, my problem is standing up ELK on Kubernetes at all. I had hoped there was a simple, well-commented YAML example somewhere that shows how to bring up a minimal system as a proof of concept before expanding it. For now I am running on the Kubernetes that ships with Docker Desktop.
My primary issue is wiring the pieces together. I have an Elasticsearch and a Kibana instance, both stood up from samples online, and those two play together fine. The trouble starts with Logstash, which is a bit more involved: it cannot find the Elasticsearch cluster it is supposed to send data to. I suspect this is a Kubernetes internal networking / service discovery problem, but I am not sure.
This sample does not need to retain data and can be stateless if that makes it easier to work with.
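To sanity-check the networking theory, this is the kind of check I had in mind (assuming everything is in the default namespace, so the ECK-created HTTP service should resolve as transit-es-http; the dns-test pod name is just a throwaway):

# list the services ECK created; transit-es-http and transit-kb-http should be here
kubectl get svc

# resolve the Elasticsearch service name from a temporary pod, the same way Logstash would
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup transit-es-http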
Elasticsearch:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: transit
spec:
  version: 7.9.2
  http:
    service:
      spec:
        type: LoadBalancer
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 20Gi
        storageClassName: standard
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 4Gi
              cpu: 0.5
            limits:
              memory: 4Gi
              cpu: 2
Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: transit
spec:
  version: 7.9.2
  count: 1
  elasticsearchRef:
    name: transit
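For reference, the credentials Logstash uses come from the secret ECK generates for the built-in elastic user (the same secret the Logstash pod references below), and I can confirm the cluster answers over a port-forward, essentially the ECK quickstart check:

# pull the generated elastic password
PASSWORD=$(kubectl get secret transit-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

# tunnel to the ECK HTTP service and query it locally (-k because the cert is self-signed)
kubectl port-forward service/transit-es-http 9200 &
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"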
But I cannot get Elasticsearch working correctly with the Logstash setup that follows.
logstash-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      http_poller {
        ...
        }
        schedule => {
          every => "2m"
        }
        codec => "json"
      }
    }
    filter {
      split {
        field => "[bustime-response][vehicle]"
        add_field => {
          "geo_coord" => "%{lat},%{lon}"
        }
      }
    }
    output {
      elasticsearch {
        index => "transit-pittsburgh"
        hosts => [ "${ES_HOSTS}" ]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        cacert => '/etc/logstash/certificates/ca.crt'
      }
    }
Logstash pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  containers:
  - image: docker.elastic.co/logstash/logstash:7.9.2
    name: logstash
    ports:
    - containerPort: 25826
    - containerPort: 5044
    env:
    - name: ES_HOSTS
      value: "https://transit-es-http:9200"
    - name: ES_USER
      value: "elastic"
    - name: ES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: transit-es-elastic-user
          key: elastic
    resources: {}
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/logstash/config
    - name: logstash-pipeline-volume
      mountPath: /usr/share/logstash/pipeline
    - name: cert-ca
      mountPath: "/etc/logstash/certificates"
      readOnly: true
  restartPolicy: OnFailure
  volumes:
  - name: config-volume
    configMap:
      name: logstash-configmap
      items:
      - key: logstash.yml
        path: logstash.yml
  - name: logstash-pipeline-volume
    configMap:
      name: logstash-configmap
      items:
      - key: logstash.conf
        path: logstash.conf
  - name: cert-ca
    secret:
      secretName: transit-es-http-certs-public
status: {}
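Once this pod is running, one check I can do to separate a networking problem from a pipeline problem is to exec in and hit Elasticsearch directly, reusing the same env vars and CA mount defined above (assuming curl is available in the Logstash image):

# if this succeeds, the service name, credentials and CA are fine and the issue is in the pipeline config
kubectl exec -it logstash -- sh -c 'curl --cacert /etc/logstash/certificates/ca.crt -u "$ES_USER:$ES_PASSWORD" "$ES_HOSTS"'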
Logstash Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: logstash
  name: logstash
spec:
  ports:
  - name: "25826"
    port: 25826
    targetPort: 25826
  - name: "5044"
    port: 5044
    targetPort: 5044
  selector:
    app: logstash
status:
  loadBalancer: {}
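When Logstash fails to reach Elasticsearch, the only output I have to go on is the pod itself:

kubectl logs logstash
kubectl describe pod logstash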
What am I missing in these files? My plan was to get a minimal working setup first, and then expand the Elasticsearch nodes and so on from there.