[Unresolved] 5 Metricbeat Kibana Dashboards show various "X of X Shards Failed" messages

Host: Debian 9
ELK Stack version: 6.6.0
Logs sent to Logstash or Elasticsearch: Elasticsearch (hopefully Logstash later)

Summary:
When I go to Dashboards in Kibana and open the [Metricbeat] dashboards, 5 of them say "X of X shards failed." (I'm unsure whether I loaded the templates/indices correctly.) I would like to know what this means and how to fix it. Please provide the exact file path to any logs/files you'd like me to show. Thank you in advance :slight_smile:

Photos:

  1. [Metricbeat System] Containers overview (15 of 17 shards failed) x3

  2. [Metricbeat Docker] Overview (15 of 17 shards failed)

  3. [Metricbeat Redis] Overview (5 of 17 shards failed) x4

  4. [Metricbeat System] Overview (15 of 17 shards failed) x2

  5. [Metricbeat Windows] Services (15 of 17 shards failed) x2

/etc/metricbeat/metricbeat.yml:

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.


# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
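
A side note on the "I'm unsure whether I loaded the templates/indices correctly" point above: on the 6.x Beats, the index template and the Kibana dashboards can be (re)loaded explicitly. A minimal sketch, assuming the Debian package install and the endpoints from the config above (setup.kibana.host falls back to localhost:5601 when unset):

  sudo metricbeat setup --template     # load the index template into Elasticsearch
  sudo metricbeat setup --dashboards   # load the [Metricbeat] dashboards via the Kibana API

Both commands read /etc/metricbeat/metricbeat.yml, so they target the same Elasticsearch and Kibana endpoints the running service uses.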

What do your Elasticsearch logs show? There should be something that indicates what the issue is there.

/var/log/elasticsearch/elasticsearch.log:

https://drive.google.com/file/...

...Shows the error "Failed to parse query [cat/indices?v]" repeated until EOF (it appears to stem from the command you had me run in my other thread).
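
(For what it's worth, that parse error usually means cat/indices?v was typed into a query/search box instead of being sent to the REST API. Issued from the shell against the default endpoint, the equivalent request would be:

  curl 'http://localhost:9200/_cat/indices?v'

which returns one line per index, including health, shard counts, and document counts.)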


/var/log/elasticsearch/elasticsearch-2019-02-02-1.log:

https://drive.google.com/file/...

...The errors appear to be java.lang.IllegalArgumentException:

[2019-02-02T03:41:35,196][DEBUG][o.e.a.s.TransportSearchAction] [8dMqAA8] [metricbeat-2019.01.31][0], node[8dMqAA81SnCjXVDoTOAYnA], [P], s[STARTED], a[id=SQ_pHQHMSVCQLK_R7wmKPg]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[metricbeat-*], indicesOptions=IndicesOptions[ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=5, batchedReduceSize=512, preFilterShardSize=42, allowPartialSearchResults=true, source={"size":0,"timeout":"90s","query":{"bool":{"must":[{"range":{"@timestamp":{"from":1548992494872,"to":1549078894872,"include_lower":true,"include_upper":true,"format":"epoch_millis","boost":1.0}}},{"bool":{"must":[{"query_string":{"query":"*","default_field":"*","fields":[],"type":"best_fields","default_operator":"or","max_determinized_states":10000,"enable_position_increments":true,"fuzziness":"AUTO","fuzzy_prefix_length":0,"fuzzy_max_expansions":50,"phrase_slop":0,"analyze_wildcard":true,"escape":false,"auto_generate_synonyms_phrase_query":true,"fuzzy_transpositions":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"query_string":{"query":"-system.network.name:l*","fields":[],"type":"best_fields","default_operator":"or","max_determinized_states":10000,"enable_position_increments":true,"fuzziness":"AUTO","fuzzy_prefix_length":0,"fuzzy_max_expansions":50,"phrase_slop":0,"analyze_wildcard":true,"escape":false,"auto_generate_synonyms_phrase_query":true,"fuzzy_transpositions":true,"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"aggregations":{"0c761591-1b92-11e7-bec4-a5e9ec5cab8b":{"meta":{"timeField":"@timestamp","intervalString":"600s","bucketSize":600},"terms":{"field":"system.network.name","size":10,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]},"aggregations":{"timeseries":{"date_histogram":{"field":"@timestamp","time_zone":"America/Denver","interval":"600s","offset":0,"order":{"_key":"asc"},"keyed":false,"min_doc_count":0,"extended_bounds":{"min":1548992494872,"max":1549078894872}},"aggregations":{"0c761592-1b92-11e7-bec4-a5e9ec5cab8b":{"max":{"field":"system.network.in.bytes"}},"1d659060-1b92-11e7-bec4-a5e9ec5cab8b":{"derivative":{"buckets_path":["0c761592-1b92-11e7-bec4-a5e9ec5cab8b"],"gap_policy":"skip","unit":"1s"}},"f2074f70-1b92-11e7-a416-41f5ccdba2e6":{"bucket_script":{"buckets_path":{"value":"1d659060-1b92-11e7-bec4-a5e9ec5cab8b[normalized_value]"},"script":{"source":"params.value > 0.0 ? params.value : 0.0","lang":"painless"},"gap_policy":"skip"}}}}}}}}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [8dMqAA8][10.128.0.2:9300][indices:data/read/search[phase/query]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [system.network.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
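
That last line points at the actual cause of the failed shards: the dashboard visualizations aggregate on system.network.name (and similar fields), but in these indices the field was mapped as text rather than keyword, which typically happens when events are indexed before the Metricbeat index template has been loaded. A minimal sketch of how to confirm and fix it, assuming the default localhost:9200 endpoint (note the DELETE discards that day's mis-mapped data, so only run it if you can afford to lose it, and repeat for any other affected daily index):

  curl 'http://localhost:9200/metricbeat-*/_mapping/field/system.network.name?pretty'   # confirm the field's current mapping
  sudo metricbeat setup --template                                                      # (re)load the correct index template
  curl -X DELETE 'http://localhost:9200/metricbeat-2019.01.31'                          # drop the mis-mapped index

After that, newly created daily indices should pick up the template's keyword mapping and the dashboards should stop reporting failed shards.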
