After enabling the IIS Module in Metricbeat 7.8 (Windows Server 2019 Standard), I am continuously getting errors in the Metricbeat log file:
2020-07-08T20:35:20.831-0700  ERROR  [website]  application_pool/reader.go:94  There is more data to return than will fit in the supplied buffer. Allocate a larger buffer and call the function again.failed to expand counter path (query="%v") \Process(w3wp*)\IO Read Operations/sec
2020-07-08T20:35:21.030-0700  ERROR  [website]  application_pool/reader.go:94  There is more data to return than will fit in the supplied buffer. Allocate a larger buffer and call the function again.failed to expand counter path (query="%v") \Process(w3wp*)\Handle Count
2020-07-08T20:35:21.099-0700  ERROR  [website]  application_pool/reader.go:94  There is more data to return than will fit in the supplied buffer. Allocate a larger buffer and call the function again.failed to expand counter path (query="%v") \Process(w3wp*)\ID Process
etc.
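For context, the message text comes from the Windows PDH API: a counter-expansion call returned PDH_MORE_DATA, which means the caller is expected to retry with a larger buffer. A minimal sketch of that grow-and-retry pattern, with a hypothetical `expandCounterPath` standing in for the real `PdhExpandWildCardPathW` call (names and behavior here are illustrative, not Metricbeat's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// errMoreData stands in for the PDH_MORE_DATA status the Windows API
// returns when the supplied buffer is too small.
var errMoreData = errors.New("more data available")

// expandCounterPath is a hypothetical stand-in for PdhExpandWildCardPathW:
// it writes the expanded counter paths into buf and reports the size needed.
func expandCounterPath(query string, buf []uint16) (needed int, err error) {
	expanded := `\Process(w3wp#1)\Handle Count` + "\x00" +
		`\Process(w3wp#2)\Handle Count` + "\x00"
	needed = len(expanded)
	if len(buf) < needed {
		// Buffer too small: report the required size and PDH_MORE_DATA.
		return needed, errMoreData
	}
	for i := 0; i < needed; i++ {
		buf[i] = uint16(expanded[i])
	}
	return needed, nil
}

// expand implements the documented pattern: call once to learn the
// required size, allocate a buffer of that size, then call again.
func expand(query string) ([]uint16, error) {
	size, err := expandCounterPath(query, nil) // first call: ask for the size
	if err != nil && !errors.Is(err, errMoreData) {
		return nil, err
	}
	buf := make([]uint16, size) // allocate exactly what the API asked for
	if _, err := expandCounterPath(query, buf); err != nil {
		return nil, err // PDH_MORE_DATA on the retry would indicate a bug
	}
	return buf, nil
}

func main() {
	buf, err := expand(`\Process(w3wp*)\Handle Count`)
	fmt.Println(len(buf), err)
}
```

If the reader only ever calls the expansion once with a fixed-size buffer, exactly this error surfaces whenever the expanded path list is larger than the buffer.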
Any idea why the module fails? The IIS module configuration is standard.
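For reference, "standard" here means `modules.d/iis.yml` is left at its defaults, which look roughly like this (a sketch; the exact defaults may differ slightly between versions):

```yaml
# modules.d/iis.yml — default module configuration (approximate)
- module: iis
  metricsets:
    - webserver
    - website
    - application_pool
  enabled: true
  period: 10s
```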
#========================== Modules configuration ============================
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
  # Period on which files under path should be checked for changes
  reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#queue.spool:
#  file:
#    path: "${path.data}/spool.dat"
#    size: 512MiB
#    page_size: 16KiB
#  write:
#    buffer_size: 10MiB
#    flush.timeout: 5s
#    flush.events: 1024

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "https://vmpdapxbur.abc.com:5601"
  ssl.enabled: true
  ssl.certificate_authorities: C:\ProgramData\Elastic\Beats\metricbeat\certs\ca.crt

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://vmpdapxbur.abc.com:9200"] ## Monitoring cluster
  pipeline: "metricbeatpipeline"
  ssl.certificate_authorities: C:\ProgramData\Elastic\Beats\metricbeat\certs\ca.crt

  # Optional protocol and basic auth credentials.
  protocol: "https"
  username: "elastic"
  #username: "BeatIt"
  # Read PW from metricbeat.keystore
  #password: "${BeatIt_PWD}"
  password: "supersecret"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-pack Monitoring ===============================
# metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
http.enabled: true
http.port: 5070

# Set to true to enable the monitoring reporter.
monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
monitoring.cluster_uuid: "I_Dkry_vQ_6peVGUY0Rahw"

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
It only happens with the 64-bit packages (metricbeat-7.8.0-windows-x86_64.zip and metricbeat-7.8.0-windows-x86_64.msi); the 32-bit packages (metricbeat-7.8.0-windows-x86.zip and metricbeat-7.8.0-windows-x86.msi) do not throw the errors.