OS metrics

Hi There,

I managed to get data into Elasticsearch via Logstash. The data contains OS metrics. Now that the data is in, I am trying to make sense of it. We are receiving raw values, but we want to see everything as percentages, ideally in real time.

For example, we would like to see CPU usage in real time. Can someone help us transform the data into usable visualisations? Right now we just see raw numbers and can't make anything of them.

Thanks in advance.

Hello Pierre,

We have built-in dashboards that give you exactly what you need for this use case. Can you please check out Metricbeat and install it?


I am also moving your question over to the Beats forum. The people there will be able to help if you run into any problems.



Sorry, I forgot to mention that we are already using Metricbeat to send the data through Logstash.

Then the Beats dashboards should help you with this: https://www.elastic.co/guide/en/beats/metricbeat/current/load-kibana-dashboards.html
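When the Metricbeat output points at Logstash, the dashboards are not loaded automatically; you can load them once with the `setup` command, temporarily overriding the configured output. A sketch, assuming Elasticsearch is reachable at localhost:9200 and Kibana at localhost:5601 (adjust to your hosts):

```shell
# One-off dashboard load: disable the Logstash output for this run and
# point setup directly at Elasticsearch and Kibana.
metricbeat setup --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]' \
  -E setup.kibana.host="localhost:5601"
```

After this one-off load, Metricbeat can keep shipping events through Logstash as configured in metricbeat.yml.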

But it seems that we have to send the data directly to Kibana, whereas we want to send it through Logstash and store it in Elasticsearch, so that we can look the data up later.

Can anyone help me with this? We want to create useful dashboards, but we want the data to travel through Logstash.


You can't send data directly to Kibana. You can either send data directly to Elasticsearch, or send it to Logstash first and have Logstash forward it to Elasticsearch.

Kibana always fetches data from Elasticsearch based on your query, so all the data is stored in Elasticsearch. If you want to monitor server resources such as CPU, RAM and disk, you can use Metricbeat. Metricbeat ships with its own Kibana dashboards where you can see CPU, RAM, etc.
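As a sketch of that Beats → Logstash → Elasticsearch flow, a minimal Logstash pipeline could look like the following (assuming Logstash listens for Beats on port 5044 and Elasticsearch runs on localhost:9200 — adjust both to your environment):

```conf
# Receive events from Metricbeat (and other Beats) and forward them
# to Elasticsearch.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # One index per Beat, version and day, e.g. metricbeat-7.4.0-2019.10.16
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

With an index name like this, the Metricbeat dashboards (which expect a `metricbeat-*` index pattern) can still find the data even though it travelled through Logstash.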



I managed to get data from one server. With the second server I get the following error, even though it uses the same .yml file:

2019-10-16T08:47:06.588+0200 ERROR instance/beat.go:878 Exiting: 1 error: 1 error: could not read \proc: CreateFile \proc: The system cannot find the file specified.
Exiting: 1 error: 1 error: could not read \proc: CreateFile \proc: The system cannot find the file specified.

Can you help me?


Please review your metricbeat.yml again, or post it here so we can check it.

#==========================  Modules configuration =============================

#-------------------------------- System Module --------------------------------
- module: system
  metricsets:
    - cpu             # CPU usage
    - load            # CPU load averages
    - memory          # Memory usage
    - network         # Network IO
    - process         # Per process metrics
    - process_summary # Process summary
    - uptime          # System Uptime
    - socket_summary  # Socket summary
    - core            # Per CPU core usage
    - diskio          # Disk IO
    - filesystem      # File system usage for each mountpoint
    - fsstat          # File system summary metrics
    - raid            # Raid
    #- socket         # Sockets and connection info (linux only)
  enabled: true
  period: 10s
  processes: ['.*']

  # Configure the metric types that are included by these metricsets.
  cpu.metrics:  ["normalized_percentages"]  # The other available options are percentages and ticks.
  core.metrics: ["normalized_percentages"]  # The other available option is ticks.

  # These options allow you to filter out all processes that are not
  # in the top N by CPU or memory, in order to reduce the number of documents created.
  # If both the `by_cpu` and `by_memory` options are used, the union of the two sets
  # is included.
  #process.include_top_n:
    # Set to false to disable this feature and include all processes
    #enabled: true

    # How many processes to include from the top by CPU. The processes are sorted
    # by the `system.process.cpu.total.pct` field.
    #by_cpu: 0

    # How many processes to include from the top by memory. The processes are sorted
    # by the `system.process.memory.rss.bytes` field.
    #by_memory: 0

  # If false, cmdline of a process is not cached.
  #process.cmdline.cache.enabled: true

  # Enable collection of cgroup metrics from processes on Linux.
  #process.cgroups.enabled: true

  # A list of regular expressions used to whitelist environment variables
  # reported with the process metricset's events. Defaults to empty.
  #process.env.whitelist: []

  # Include the cumulative CPU tick values with the process metrics. Defaults
  # to false.
  #process.include_cpu_ticks: false

  # Raid mount point to monitor
  #raid.mount_point: '/'

  # Configure reverse DNS lookup on remote IP addresses in the socket metricset.
  #socket.reverse_lookup.enabled: false
  #socket.reverse_lookup.success_ttl: 60s
  #socket.reverse_lookup.failure_ttl: 60s

  # Diskio configurations
  #diskio.include_devices: []
  # Glob pattern for configuration loading
  #path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
- module: windows
  metricsets:
    - service
  period: 1m

- module: windows
  metricsets:
    - perfmon
  period: 10s
  perfmon.counters:
    - instance_label: processor.name
      instance_name: total
      measurement_label: processor.time.total.pct
      query: '\Processor Information(_Total)\% Processor Time'

    - instance_label: physical_disk.name
      measurement_label: physical_disk.write.per_sec
      query: '\PhysicalDisk(*)\Disk Writes/sec'

    - instance_label: physical_disk.name
      measurement_label: physical_disk.write.time.pct
      query: '\PhysicalDisk(*)\% Disk Write Time'

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [""]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~


Please post it as a formatted code block. Nobody can read it as-is, because the forum has changed the formatting and the meaning of some characters.

edited the post

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.