Monitoring using metricbeat

  1. I turned on self-monitoring, which I now want to roll back so that I can migrate the cluster to Metricbeat-based monitoring. I clicked on set up self-monitoring, which led to the following:

I searched elasticsearch.yml but could not locate this setting. Where can I find this setting so that I can revert it?
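For reference, this collection flag is a dynamic cluster setting rather than an elasticsearch.yml entry, which is why searching the file turns up nothing. One way to inspect its current value (a sketch, assuming access to Kibana Dev Tools or curl):

```
GET _cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.collection.enabled
```

The value will appear under `persistent`, `transient`, or `defaults` depending on how it was set.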

  2. I have carried out the entire guide to start monitoring using Metricbeat, and the following is my configuration:

A single system hosts both Metricbeat and Elasticsearch. I have run the following command successfully:

metricbeat modules enable elasticsearch-xpack

File: modules.d/elasticsearch.yml

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.6/metricbeat-module-elasticsearch.html

- module: elasticsearch
  metricsets:
    - node
    - node_stats
  period: 10s
  #hosts: ["http://localhost:9200"]
  hosts: ["https://IP of the node:9200"]
  protocol: "https"
  username: "ID"
  password: "password"
  ssl.certificate_authorities: /etc/metricbeat/elasticsearch-ca.pem
  ssl.verification_mode: none
  #username: "user"
  #password: "secret"
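One detail worth flagging in the config above: `ssl.verification_mode: none` tells Metricbeat to skip certificate verification entirely, so the CA file listed in `ssl.certificate_authorities` is effectively unused. If the CA file is valid, a stricter variant would be (a sketch, keeping the same placeholders):

```yaml
  hosts: ["https://IP of the node:9200"]
  username: "ID"
  password: "password"
  ssl.certificate_authorities: ["/etc/metricbeat/elasticsearch-ca.pem"]
  ssl.verification_mode: full   # verify both the server certificate and its hostname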

File: modules.d/elasticsearch-xpack.yml

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.7/metricbeat-module-elasticsearch.html

- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - enrich
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s
  hosts: ["https://IP of the node:9200"]
  protocol: "https"
  username: "id"
  password: "password"
  ssl.certificate_authorities: /etc/metricbeat/elasticsearch-ca.pem
  ssl.verification_mode: none
  xpack.enabled: true
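A side note on the file above: per my reading of the module docs, with `xpack.enabled: true` the elasticsearch module automatically enables the metricsets that Stack Monitoring needs, so the explicit `metricsets` list should not be required. A trimmed equivalent could look like:

```yaml
- module: elasticsearch
  xpack.enabled: true          # enables the monitoring metricsets automatically
  period: 10s
  hosts: ["https://IP of the node:9200"]
  username: "id"
  password: "password"
  ssl.certificate_authorities: ["/etc/metricbeat/elasticsearch-ca.pem"]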

I have done the above to set up Metricbeat-based monitoring; however, the migration wizard is unable to detect the Metricbeat monitoring data. I am getting only the following prompt:

image

Finally, I am a student running this setup in a VM on a NAS. My processor is constantly at ~95%. Is there a way to offload computation by sending these metrics first to a Raspberry Pi running a Logstash pipeline and then on to Elasticsearch in batches? Maybe that would help me get some compute back on the VM?
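Regarding the offloading question: Metricbeat can ship to a Logstash instance instead of writing to Elasticsearch directly, which moves the indexing round trips off the VM. A minimal sketch of the Metricbeat side (the Pi's hostname and the conventional Beats port 5044 are assumptions):

```yaml
# metricbeat.yml on the VM: replace output.elasticsearch with output.logstash
output.logstash:
  hosts: ["raspberrypi.local:5044"]   # hypothetical address of the Pi
  bulk_max_size: 2048                 # ship events in larger batches
```

On the Pi, a Logstash pipeline with a `beats` input listening on port 5044 and an `elasticsearch` output pointed at the cluster completes the path.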

RPi running logstash:

Thank you.

Hi @parthmaniar,

Happy to help get you sorted here.

Let's take a look at the monitoring data and see what's going on.

Can you return the results of GET _cat/indices/.monitoring* on the monitoring cluster?

Hi @chrisronline thank you very much for assisting.

Before I answer, it's important to know that I am facing stability issues, so I issued the following command a few minutes ago (in case it matters):

PUT https://IP of ES:9200/_cluster/settings
{
    "persistent": {
        "xpack": {
            "monitoring": {
                "collection": {
                    "enabled": null
                }
            }
        }
    },
    "transient": {}
}
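One caveat on the command above: setting the value to `null` resets it to its default rather than explicitly disabling collection. The flattened form of an explicit disable (a sketch) would be:

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
```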

Here is the output you asked for:

green open .monitoring-es-7-2020.06.11          C3mUbMHvQreLeJt_qf6OQg 1 0 606092 125791 225.3mb 225.3mb
green open .monitoring-kibana-7-2020.06.11      env7GHFmRHWvaixxSJFGHw 1 0   2632      0 692.4kb 692.4kb
green open .monitoring-logstash-7-mb-2020.06.11 TDPe2LU0SIiOl0wJOHZGgg 1 0 413259      0  13.6mb  13.6mb
green open .monitoring-es-7-mb-2020.06.11       2ja6LnUZSMytBSMy7_2PIQ 1 0 131247      0   121mb   121mb

Can you run this against your monitoring cluster and return with the results?

POST .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "clusters": {
      "terms": {
        "field": "cluster_uuid",
        "size": 20
      },
      "aggs": {
        "index": {
          "terms": {
            "field": "_index",
            "size": 1
          }
        },
        "types": {
          "terms": {
            "field": "type",
            "size": 10
          },
          "aggs": {
            "last_seen": {
              "max": {
                "field": "timestamp"
              }
            }
          }
        }
      }
    }
  }
}

Sure, thank you very much for replying. Here is the output.

{
  "took" : 6700,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "clusters" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "-26z29GGRyWZ-8MuAvjaWw",
          "doc_count" : 3337955,
          "types" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "index_stats",
                "doc_count" : 2507185,
                "last_seen" : {
                  "value" : 1.593456846013E12,
                  "value_as_string" : "2020-06-29T18:54:06.013Z"
                }
              },
              {
                "key" : "shards",
                "doc_count" : 657245,
                "last_seen" : {
                  "value" : 1.593456562847E12,
                  "value_as_string" : "2020-06-29T18:49:22.847Z"
                }
              },
              {
                "key" : "enrich_coordinator_stats",
                "doc_count" : 34802,
                "last_seen" : {
                  "value" : 1.593456842875E12,
                  "value_as_string" : "2020-06-29T18:54:02.875Z"
                }
              },
              {
                "key" : "index_recovery",
                "doc_count" : 34766,
                "last_seen" : {
                  "value" : 1.593456843891E12,
                  "value_as_string" : "2020-06-29T18:54:03.891Z"
                }
              },
              {
                "key" : "indices_stats",
                "doc_count" : 34759,
                "last_seen" : {
                  "value" : 1.593456844176E12,
                  "value_as_string" : "2020-06-29T18:54:04.176Z"
                }
              },
              {
                "key" : "node_stats",
                "doc_count" : 34738,
                "last_seen" : {
                  "value" : 1.593456843837E12,
                  "value_as_string" : "2020-06-29T18:54:03.837Z"
                }
              },
              {
                "key" : "cluster_stats",
                "doc_count" : 34460,
                "last_seen" : {
                  "value" : 1.593456843099E12,
                  "value_as_string" : "2020-06-29T18:54:03.099Z"
                }
              }
            ]
          },
          "index" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 2338479,
            "buckets" : [
              {
                "key" : ".monitoring-es-7-mb-2020.06.28",
                "doc_count" : 999476
              }
            ]
          }
        }
      ]
    }
  }
}
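A side note for reading this output: the raw `last_seen` values are epoch milliseconds, and `value_as_string` is the same instant rendered as an ISO-8601 string. A quick sketch of the conversion, using the first `last_seen` above:

```python
from datetime import datetime, timezone

# "last_seen" from the index_stats bucket above, in epoch milliseconds
epoch_ms = 1.593456846013e12

# Convert to an aware UTC datetime and render it the way Elasticsearch does
dt = datetime.fromtimestamp(epoch_ms / 1000.0, tz=timezone.utc)
print(dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z")  # 2020-06-29T18:54:06.013Z
```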

Thanks for that. I'm going to slightly change the query and if you can return the results again, that'd be great.

POST .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "clusters": {
      "terms": {
        "field": "cluster_uuid",
        "size": 20
      },
      "aggs": {
        "index": {
          "terms": {
            "field": "_index",
            "size": 100,
          }
        },
        "types": {
          "terms": {
            "field": "type",
            "size": 10
          },
          "aggs": {
            "last_seen": {
              "max": {
                "field": "timestamp"
              }
            }
          }
        }
      }
    }
  }
}

Absolutely no problem. I could not get the output for your new query, as it is showing a parse error:
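If it helps: the likely cause of that error is the trailing comma after `"size": 100` in the inner `index` terms aggregation, which the JSON parser rejects. The same query without it (a sketch):

```
POST .monitoring-es-*/_search
{
  "size": 0,
  "aggs": {
    "clusters": {
      "terms": { "field": "cluster_uuid", "size": 20 },
      "aggs": {
        "index": {
          "terms": { "field": "_index", "size": 100 }
        },
        "types": {
          "terms": { "field": "type", "size": 10 },
          "aggs": {
            "last_seen": { "max": { "field": "timestamp" } }
          }
        }
      }
    }
  }
}
```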