Goal: Using Filebeat and Logstash, filter and ship Kubernetes pod logs to our hosted Elasticsearch instance.
Problem: According to the documentation there are two ways to configure Logstash: with a Cloud ID and without one. Ideally I would like to use the Cloud ID configuration, but I have been unable to get logs through with either.
Question: What configuration fields are needed in both logstash.yml and logstash.conf to correctly ship logs to a hosted (remote) Elasticsearch instance and to set up centralized pipeline management?
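For context, these are the two shapes as I understand them from the docs (all values below are placeholders, not working credentials):

```yaml
# logstash.yml, without Cloud ID (explicit hosts):
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://<cluster>.us-west-2.aws.found.io:9243"]
xpack.monitoring.elasticsearch.username: logstash_internal
xpack.monitoring.elasticsearch.password: "<password>"

# logstash.yml, with Cloud ID (copied from the Elastic Cloud console):
xpack.monitoring.elasticsearch.cloud_id: "<deployment-name>:<base64-encoded-hosts>"
xpack.monitoring.elasticsearch.cloud_auth: "<username>:<password>"
```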
Error Logs
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-07-08T22:46:40,894][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-07-08T22:46:40,972][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-07-08T22:46:41,614][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
[2020-07-08T22:46:41,660][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"bdf7ae65-f159-46f4-8a49-eb1fdbfa472d", :path=>"/usr/share/logstash/data/uuid"}
[2020-07-08T22:46:42,461][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-07-08T22:46:42,465][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2020-07-08T22:46:43,749][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-07-08T22:46:44,052][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2020-07-08T22:46:44,118][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-07-08T22:46:44,128][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2020-07-08T22:46:44,188][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2020-07-08T22:46:46,149][INFO ][org.reflections.Reflections] Reflections took 119 ms to scan 1 urls, producing 21 keys and 41 values
[2020-07-08T22:46:46,751][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://snelson:xxxxxx@2b793090ac9a45c08becf32d19101173.us-west-2.aws.found.io:443/]}}
[2020-07-08T22:46:47,190][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@xxx.us-west-2.aws.found.io:443/"}
[2020-07-08T22:46:47,482][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-07-08T22:46:47,495][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-07-08T22:46:47,591][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://2b793090ac9a45c08becf32d19101173.us-west-2.aws.found.io:443"]}
[2020-07-08T22:46:47,733][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
[2020-07-08T22:46:47,847][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-07-08T22:46:47,945][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2020-07-08T22:46:47,956][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x2072c360 run>"}
[2020-07-08T22:46:49,492][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:9600"}
[2020-07-08T22:46:49,514][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-07-08T22:46:49,635][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-07-08T22:46:49,814][INFO ][org.logstash.beats.Server][main][dd04dba50d0fa8d807f63f69312096fd6db963bc2333e64a5bc4d9ac3fc7b23c] Starting server on port: 9600
[2020-07-08T22:46:50,224][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601}
[2020-07-08T22:47:14,122][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
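One thing I noticed about the repeated resolution failures above: `http://elasticsearch:9200/` does not appear anywhere in my files. As far as I can tell, the official Logstash Docker image ships a default `logstash.yml` that points monitoring at that host, so it presumably has to be overridden explicitly in my own `logstash.yml` (sketch; the hostname is a placeholder):

```yaml
# Assumption: the docker.elastic.co Logstash image's baked-in default
# sets xpack.monitoring.elasticsearch.hosts to http://elasticsearch:9200,
# which would explain the licensereader errors. Override it explicitly:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://<cluster>.us-west-2.aws.found.io:9243"]
```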
logstash.yml / logstash.conf
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
data:
  logstash.yml: |-
    # xpack.monitoring.enabled: true
    # xpack.management.enabled: true
    # without Cloud ID
    # xpack.monitoring.elasticsearch.username: logstash_internal
    # xpack.monitoring.elasticsearch.password: XXX
    # xpack.monitoring.elasticsearch.hosts: ["XXX"]
    # without Cloud ID
    # xpack.management.elasticsearch.username: logstash_internal
    # xpack.management.elasticsearch.password: XXX
    # xpack.management.elasticsearch.hosts: ["XXX"]
    # with Cloud ID
    # xpack.monitoring.elasticsearch.cloud_id: "XXX"
    # xpack.monitoring.elasticsearch.cloud_auth: "XXX"
    # with Cloud ID
    # xpack.management.elasticsearch.cloud_id: "XXX"
    # xpack.management.elasticsearch.cloud_auth: "XXX"
  logstash.conf: |-
    input {
      beats {
        port => 9600
      }
    }
    filter {}
    output {
      elasticsearch {
        # without Cloud ID:
        hosts => ["xxx"]
        user => "xxx"
        password => "xxx"
        # with Cloud ID:
        cloud_id => "xxx"
        cloud_auth => "xxx"
      }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: logstash
spec:
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      labels:
        k8s-app: logstash
    spec:
      hostname: logstash
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.7.0
        ports:
        - containerPort: 9600
          name: logstash
        volumeMounts:
        - name: logstash-config
          mountPath: /usr/share/logstash/pipeline/
        command:
        - logstash
      volumes:
      - name: logstash-config
        configMap:
          name: logstash-config
          items:
          - key: logstash.conf
            path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
spec:
  type: NodePort
  selector:
    k8s-app: logstash  # must match the Deployment's pod labels (was app: logstash)
  ports:
  - protocol: TCP
    port: 9600
    targetPort: 9600
    name: logstash
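One detail from the logs worth flagging: the Beats input is binding port 9600, which is also the default port of Logstash's own monitoring/HTTP API — presumably why the API endpoint fell back to 9601 above. The conventional Beats port is 5044; a sketch of the input (Filebeat's `output.logstash` hosts and the Service ports would have to change to match):

```conf
input {
  beats {
    # 5044 is the conventional Beats port; 9600 collides with
    # Logstash's own HTTP API default and pushes it to 9601.
    port => 5044
  }
}
```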
filebeat.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        hints.enabled: true
        templates:
        - condition:
            equals:
              kubernetes.namespace: default
          config:
          - type: container
            paths:
            - /var/log/containers/*-${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata:
    - add_host_metadata:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    output.logstash:
      enabled: true
      hosts: ["logstash:9600"]
    logging.level: debug
    logging.to_files: false
    logging.files:
      path: /var/log/filebeat
      name: filebeat
      keepfiles: 7
      permissions: 0644
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
...
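Since the filebeat.yml above interpolates ${NODE_NAME}, the (elided) DaemonSet container spec has to export it; the usual pattern is a downward-API environment variable (this fragment is a sketch, not my actual manifest):

```yaml
# Container-level env in the filebeat DaemonSet, supplying the
# ${NODE_NAME} referenced by add_kubernetes_metadata:
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```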