"Standalone Cluster" for Logstash monitoring

Hi,

I'm trying to set up a single node with Elasticsearch, Kibana, and Logstash. After adding Filebeat agents to some servers to start centralizing logs, a new cluster appeared under Kibana's "Monitoring" section. Everything was "fine" while using Heartbeat and Metricbeat (both connected directly to Elasticsearch). Filebeat directs its output to Logstash.

After some digging around, it seems to have something to do with me configuring the Filebeat outputs to go to Logstash. These are some of the online resources I checked before posting here:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html




This is the relevant portion of my Logstash settings file (logstash.yml):

node.name: "elk01"
config.reload:
  automatic: true
  interval: 3s

#xpack.monitoring.cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"
#xpack.monitoring.cluster_uuid: xhb_MPjYRfeZMR4ORTAEaA
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id: xhb_MPjYRfeZMR4ORTAEaA
#monitoring.cluster_uuid: xhb_MPjYRfeZMR4ORTAEaA
#monitoring.cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"

xpack.monitoring:
  enabled: false
  #cluster_uuid: xhb_MPjYRfeZMR4ORTAEaA
  #cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"
  elasticsearch:
    username: logstash_system
    password: xxxxx
    #cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"
    hosts: ["https://fqdn:9200"]
    ssl:
      certificate_authority: /etc/elasticsearch/certs/ca.crt
      verification_mode: certificate

log.level: info

queue:
  type: persisted
  max_bytes: 10gb

This is the temporary (test-only) Logstash pipeline configuration I'm currently using:

input {
        beats {
                port => "5044"
                ssl => true
                ssl_key => '/etc/logstash/certs/elk01_pck8.key'
                ssl_certificate => '/etc/logstash/certs/elk01.crt'
                ssl_certificate_authorities => ['/etc/logstash/certs/ca.crt']
        }
}
output {
  file {
    path => '/tmp/output.logstash'
  }
}

I left in the commented lines that did not work. The error logged for those attempts was:

[2020-05-22T13:14:54,047][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<ArgumentError: Setting "xpack.monitoring.cluster_uuid" hasn't been registered>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:69:in `get_setting'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:102:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:121:in `block in merge'", "org/jruby/RubyHash.java:1428:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:121:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:179:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:284:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:242:in `run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:73:in `<main>'"]}

On the hosts I have Metricbeat and Filebeat enabled. On the all-in-one server I don't have Filebeat, but I do have Heartbeat. This is what filebeat.yml looks like on the servers:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/zimbra/log/mailbox.log
  multiline.pattern: ^\d{4}-\d{2}-\d{2}
  multiline.negate: true
  multiline.match: after
- type: log
  enabled: true
  paths:
    - /opt/zimbra/log/audit.log
    - /var/log/zimbra.log
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true 
  reload.period: 3s
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["fqdn:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/elk01.crt"
  ssl.key: "/etc/filebeat/certs/elk01.key"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
http.enabled: true
http.port: 5067
monitoring.enabled: false
monitoring.cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"

There are no modules enabled for filebeat.

This is how Metricbeat is configured (metricbeat.yml):

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true 
  reload.period: 3s
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
setup.kibana:
  host: "https://fqdn:5601"
  ssl.certificate_authorities: ["/etc/metricbeat/certs/ca.crt"]
output.elasticsearch:
  hosts: ["https://fqdn:9200"]
  protocol: "https"
  username: "elastic"
  password: "xxxxx"
  ssl.enabled: true 
  ssl.certificate_authorities: ["/etc/metricbeat/certs/ca.crt"]
  ssl.certificate: "/etc/metricbeat/certs/elk01.crt"
  ssl.key: "/etc/metricbeat/certs/elk01.key"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
http.enabled: true
monitoring.enabled: false
monitoring.cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"

For Metricbeat, I have a couple of modules enabled: system and beat-xpack. On the all-in-one I have some other modules enabled (logstash-xpack and kibana-xpack). Here are the configs for logstash-xpack and beat-xpack:

- module: logstash
  metricsets:
    - node
    - node_stats
  period: 10s
  xpack.enabled: true
  hosts: ["localhost:9600"]
  username: "elastic"
  password: "xxxxx"
  ssl.enabled: true 
  ssl.certificate_authorities: ["/etc/metricbeat/certs/ca.crt"]
  monitoring.override_cluster_uuid: xhb_MPjYRfeZMR4ORTAEaA
  monitoring.cluster_uuid: xhb_MPjYRfeZMR4ORTAEaA
- module: beat
  metricsets:
    - stats
    - state
  period: 10s
  xpack.enabled: true
  hosts: ["http://localhost:5066"]
  username: "elastic"
  password: "xxxxxx"
  ssl.enabled: true 
  ssl.certificate_authorities: ["/etc/metricbeat/certs/ca.crt"]

Even when Logstash starts and stays up, I can see this in the logs. I don't know whether it's related/relevant to this issue:

[2020-05-22T13:23:00,320][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.

Any idea what I'm doing wrong? My goal is to have a single cluster (cluster-primero) with all the Beats reporting correctly to it. I've tried to use the "Beats" way of doing it instead of "legacy monitoring", but I think I may be mixing up some configurations between the two.

Thank you very much for any help you may provide.
Edit: Looking into the .monitoring-logstash indices, I see that from a specific hour onward the documents sent by Metricbeat to this index are missing "cluster_uuid" and "logstash_stats.process.cpu.percent". So for some reason the "cluster_uuid" was being sent but is now missing, hence the "Standalone Cluster". I can't see how to fix this :confused:

Ensure Metricbeat and Logstash are running the same minor version, since fields may end up in different locations between minor versions.


Please don't post pictures of text, they are difficult to read, impossible to search and replicate (if it's code), and some people may not be even able to see them :slight_smile:


What version of Logstash are you using? Support for monitoring.cluster_uuid didn't exist before 7.7.0.
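
On 7.7.0 and later, the setting is a top-level entry in logstash.yml, not nested under xpack.monitoring (which configures the legacy internal collector). A minimal sketch, reusing the cluster UUID from your post:

```yaml
# logstash.yml (Logstash >= 7.7.0) -- sketch for Metricbeat-based monitoring
# Keep the legacy internal collector disabled; Metricbeat does the shipping.
xpack.monitoring.enabled: false

# Top-level setting, new in 7.7.0: associates this node's metrics with an
# existing cluster so it stops showing up as "Standalone Cluster".
monitoring.cluster_uuid: "xhb_MPjYRfeZMR4ORTAEaA"
```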


I feel really silly :confused: I assumed they were all on the same version, since I installed them all on the same day. For some reason, Logstash was on 7.6.2. I just updated it to 7.7.x (no change in repo files...). Now they are all running the same version, and the error makes complete sense, since this setting was not available until 7.7.

It's kind of hard for me to understand how this monitoring really works. The documentation and the wizards don't always describe the same configuration steps, and on top of that, some modules need the "xpack" variant and others don't. Still learning, though... hopefully a second lab will be better configured :slight_smile:

Thank you very much for your help!

Hi!

Done, I just replaced all the images with code. You are completely right, sorry about that.

The problem should be solved now; it came down to my mistake of not realizing which version of Logstash I was using (insert Homer "d'oh" gif here).

I will need to redeploy this environment anyway, hopefully with a clearer idea of what to configure and how. To be honest, I'm quite confused by the monitoring part. I'm guessing that once the "legacy" option disappears, things will be clearer to me.

Thank you very much for your suggestion.

I was aware that the Logstash version needed to be 7.7... I was "so sure" that the version I was using was 7.7 (all the other components are). Only when I went to get the Logstash version to update this post did I see that I was on 7.6. I'm going to punish myself with a 10-hour YouTube video of Nickelback songs for this.

Thank you very much!


This is great feedback and we very much appreciate your honesty and perspective.

You are not wrong at all. It is complex, and we offer multiple ways of doing "the same thing". We will make a note to improve the documentation, but we also feel confident that this will become simpler in the future.
