Enabling Logstash centralised pipeline management

Hi Team,
I am trying to enable Logstash centralised pipeline management.

I have configured the following settings in logstash.yml:

xpack.security.enabled: true
xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://17.99.223.232:80"
xpack.management.elasticsearch.username:
xpack.management.elasticsearch.password:
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["main"]


but I am getting an error like this:

Sending Logstash logs to /usr/share/logstash/logstash-kafka which is now configured via log4j2.properties
[2019-10-16T08:39:04,443][FATAL][l.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "xpack.security.enabled" hasn't been registered>, :backtrace=>[ "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:36:in get_setting'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:69:in set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:88:in block in merge'", "org/jruby/RubyHash.java:1419:in each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:88:in merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:137:in validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:278:in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:237:in run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `'"]}
[2019-10-16T08:39:04,450][ERROR][o.l.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

It looks like the Elasticsearch cluster does not have security enabled, which I believe is required in order to use this feature.

The docs state:

Centralized management is disabled until you configure and enable X-Pack security.

This includes Elasticsearch.
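Also note that xpack.security.enabled is an Elasticsearch setting, not a Logstash one, which is exactly what the "hasn't been registered" error is telling you. As a rough sketch (the host is the one from your config; the username and password are placeholders for whatever user you end up creating), the management block in logstash.yml would look like this once security is working on the Elasticsearch side:

xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://17.99.223.232:80"
xpack.management.elasticsearch.username: logstash_admin_user
xpack.management.elasticsearch.password: changeme
xpack.management.logstash.poll_interval: 5s
xpack.management.pipeline.id: ["main"]

In other words, drop xpack.security.enabled from logstash.yml entirely and fill in a user that has the logstash_admin role.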

If I enable the security feature in the Elasticsearch cluster, the Kibana dashboard goes down. Could you please suggest how to resolve this?

Have you followed the instructions for enabling security in Elasticsearch and Kibana? You do need to also update the Kibana config. If it is still not working, please show your config.
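For example, once security is on, kibana.yml needs credentials for the built-in kibana user (a sketch, assuming 6.x; in 7.x the URL setting is elasticsearch.hosts, and the password is whatever you set with elasticsearch-setup-passwords):

elasticsearch.url: "http://es-coordinating:80"
elasticsearch.username: "kibana"
elasticsearch.password: "<kibana user password>"

Without those credentials Kibana cannot authenticate against the secured cluster, which would explain the dashboard going down.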

Could you please check it out? Here is my Logstash ConfigMap:


apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    #path.config: /usr/share/logstash/pipeline
    path.logs: /usr/share/logstash/logstash-kafka
    ## X-Pack settings
    ## see https://www.elastic.co/guide/en/x-pack/current/xpack-settings.html
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.url: es-coordinating:80
    ## The number of workers that will, in parallel, execute the filter and output
    ## stages of the pipeline (default: number of the host's CPU cores). If events
    ## are backing up, or the CPU is not saturated, consider increasing this number
    ## to better utilise machine processing power.
    pipeline.workers: 6
    ## The maximum number of events an individual worker thread will collect from
    ## inputs before attempting to execute its filters and outputs. Larger batch
    ## sizes are generally more efficient, but come at the cost of increased memory
    ## overhead; you may need to increase JVM heap space in jvm.options.
    pipeline.batch.size: 100
    ## When set to true, periodically checks if the configuration has changed and
    ## reloads it whenever it changes. This can also be triggered manually through
    ## the SIGHUP signal.
    config.reload.automatic: true
    ## How often Logstash checks the config files for changes.
    config.reload.interval: 30s
    xpack.security.enabled: true
    xpack.management.enabled: true
    xpack.management.elasticsearch.hosts: "http://17.99.223.232:80"
    xpack.management.elasticsearch.username:
    xpack.management.elasticsearch.password:
    xpack.management.logstash.poll_interval: 5s
    xpack.management.pipeline.id: ["main"]
  logstash-kafka.conf: |
    input {
      kafka {
        bootstrap_servers => "rn2-gbikafkad-lapp01.corp.apple.com:9093,rn2-gbikafkad-lapp02.corp.apple.com:9093,rn2-gbikafkad-lapp03.corp.apple.com:9093"
        codec => "json"
        topics => "gbi_etl_monitoring"
        security_protocol => "SSL"
        ssl_key_password => "MfgcH25uKH"
        ssl_keystore_location => "/usr/share/logstash/certs/keystore.jks"
        ssl_keystore_password => "MfgcH25uKH"
        ssl_truststore_location => "/usr/share/logstash/certs/truststore.jks"
        ssl_truststore_password => "MfgcH25uKH"
        consumer_threads => 4   ## consumer threads subscribed to the Kafka topic
        group_id => "logstash-etl-fwk-monitoring-dev-USRNO3"
        tags => ["logs"]
      }
    }
    output {
      if "logs" in [tags] {
        elasticsearch {
          hosts => ["http://es-coordinating:80"]
          index => "etl-fw-monitoring-logs-%{+YYYY.MM.dd}"
        }
      }
    }

You need to enable security in Elasticsearch. What does that config look like?

apiVersion: v1
kind: ConfigMap
metadata:
  name: es-coordinating
  labels:
    zone: rno1
    role: coordinating
data:
  elasticsearch.yml: |
    cluster.name: gbi-rno1-cluster
    node.master: false
    node.data: false
    node.name: gbi-es-coordinating
    node.ingest: false
    network.host: 0.0.0.0
    http.port: 9200
    discovery.zen.ping.unicast.hosts: ["es-coordinating","es-master","es-data"]
    discovery.zen.minimum_master_nodes: 1
    thread_pool.search.queue_size: 10000
    node.ml: false
    xpack.security.enabled: true
    xpack.ml.enabled: true
    search.max_buckets: 10000
    reindex.remote.whitelist: es-coordinating.gbi-observer-dev.svc.lb.usrno3.acio.apple.com:80

There is more to enabling security than enabling that parameter, e.g. setting up users, roles and TLS. It does not seem like you have done any of that.
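As a rough outline of what the security guide walks you through (the certificate file name below is a placeholder), every Elasticsearch node typically needs something like this in elasticsearch.yml:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

plus generating the certificate and setting the built-in user passwords once the cluster is up:

bin/elasticsearch-certutil cert --out elastic-certificates.p12
bin/elasticsearch-setup-passwords interactive

and then those credentials have to be handed to Kibana and Logstash.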

Okay, thank you!

Hi, if I enable xpack.security.enabled: true in the Elasticsearch cluster, the Kibana dashboard stops displaying. Could you please suggest what to do about this?

Have you followed the documentation and set it up correctly? Just enabling that flag is not sufficient.

To provide basic security, xpack.security.enabled: true is enough; the same thing is mentioned in the link below:

https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html

I am enabling it through Kubernetes.

Have you followed this guide?

Hi Team,

While deploying Logstash I am getting the issue below. Could you please help with this?

Events:

Type Reason Age From Message

Warning FailedScheduling 3m28s (x4 over 3m42s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 36 times)

Normal SuccessfulAttachVolume 3m19s attachdetach-controller AttachVolume.Attach succeeded for volume "usrno1-16b3651afc8b11e9a61d"

Normal Scheduled 3m19s default-scheduler Successfully assigned 2304670352/logstash-0 to acs-node65.usrno1.applecloud.io

Warning FailedMount 3m3s (x6 over 3m19s) kubelet, acs-node65.usrno1.applecloud.io MountVolume.SetUp failed for volume "config-pattern" : configmap "logstash" not found

Warning FailedMount 3m3s (x6 over 3m19s) kubelet, acs-node65.usrno1.applecloud.io MountVolume.SetUp failed for volume "config-volume" : configmap "logstash" not found

Normal Started 2m46s kubelet, acs-node65.usrno1.applecloud.io Started container logstash

Normal Pulling 2m46s kubelet, acs-node65.usrno1.applecloud.io Pulling image "docker.apple.com/gbi/dev/logstash:6.7.0"

Normal Pulled 2m46s kubelet, acs-node65.usrno1.applecloud.io Successfully pulled image "docker.apple.com/gbi/dev/logstash:6.7.0"

Normal Created 2m46s kubelet, acs-node65.usrno1.applecloud.io Created container logstash

Normal Pulling 2m44s (x2 over 2m46s) kubelet, acs-node65.usrno1.applecloud.io Pulling image "docker.apple.com/splunk-ist/splunk-universalforwarder@sha256:5c014e573e68e1b0f83c86686ab86f896d6e339875051f65df41a1cd1a2deba4"

Normal Pulled 2m44s (x2 over 2m46s) kubelet, acs-node65.usrno1.applecloud.io Successfully pulled image "docker.apple.com/splunk-ist/splunk-universalforwarder@sha256:5c014e573e68e1b0f83c86686ab86f896d6e339875051f65df41a1cd1a2deba4"

Normal Created 2m43s (x2 over 2m46s) kubelet, acs-node65.usrno1.applecloud.io Created container splunkforwarder

Normal Started 2m43s (x2 over 2m46s) kubelet, acs-node65.usrno1.applecloud.io Started container splunkforwarder

Warning Unhealthy 2m42s kubelet, acs-node65.usrno1.applecloud.io Readiness probe failed: Get http://172.16.15.14:9600/: dial tcp 172.16.15.14:9600: connect: connection refused
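The concrete error above is the FailedMount warning: the pod is looking for a ConfigMap named logstash in its own namespace and not finding it. A quick way to check (the namespace and manifest file name here are placeholders):

kubectl -n <your-namespace> get configmap logstash
kubectl -n <your-namespace> apply -f logstash-configmap.yaml   ## re-apply the manifest shown earlier if it is missing

The readiness-probe failure on port 9600 right after startup can simply be Logstash not having finished booting; the missing ConfigMap is the first thing to fix.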

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.