Heartbeat - error: Please make sure that multiple beats are not sharing the same data path (path.data)

Hello to all,

I have installed Heartbeat on my Ubuntu server to monitor the health of my infrastructure servers, but I'm facing this issue:

Please make sure that multiple beats are not sharing the same data path (path.data)

I have already tried restarting the services in this order (commands sketched after the list):
Elasticsearch
Kibana
Filebeat
Metricbeat
Heartbeat
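
In shell terms, roughly (a sketch; unit names are assumed from a standard deb install, and heartbeat-elastic.service matches the unit shown later in this thread):

# Restart the stack in dependency order; unit names may differ per install.
sudo systemctl restart elasticsearch.service
sudo systemctl restart kibana.service
sudo systemctl restart filebeat.service
sudo systemctl restart metricbeat.service
sudo systemctl restart heartbeat-elastic.service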

I also tried killing the process and restarting the service.
I also tried deleting the heartbeat.lock file and then restarting heartbeat-elastic.service, but the lock file gets generated again.
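
To see which process is actually holding the lock before deleting it (a sketch; assumes lsof or fuser is installed):

# The running beat keeps the lock file open, so either tool will list it:
sudo lsof /var/lib/heartbeat/heartbeat.lock
sudo fuser -v /var/lib/heartbeat/heartbeat.lock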

Could you please help me to solve this issue?

Heartbeat version:

heartbeat version 8.9.0 (amd64), libbeat 8.9.0 [dd50d49baeb99e0d21a31adb621908a7f0091046 built 2023-07-19 01:29:38 +0000 UTC] 

Output of the command heartbeat -e:

{"log.level":"info","@timestamp":"2023-07-27T09:28:58.447+0200","log.origin":{"file.name":"instance/beat.go","file.line":779},"message":"Home path: [/usr/share/heartbeat] Config path: [/etc/heartbeat] Data path: [/var/lib/heartbeat] Logs path: [/var/log/heartbeat]","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:58.447+0200","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":870},"message":"Beat metadata path: /var/lib/heartbeat/meta.json","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-07-27T09:28:58.452+0200","log.origin":{"file.name":"instance/beat.go","file.line":787},"message":"Beat ID: f8816ca1-bde2-49e6-9a04-09d7ae79075e","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:58.454+0200","log.logger":"processors","log.origin":{"file.name":"processors/processor.go","file.line":114},"message":"Generated new processors: add_observer_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]]","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:58.454+0200","log.origin":{"file.name":"locks/lock.go","file.line":79},"message":"Could not obtain lock for file /var/lib/heartbeat/heartbeat.lock, retrying 4 times","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:58.855+0200","log.origin":{"file.name":"locks/lock.go","file.line":79},"message":"Could not obtain lock for file /var/lib/heartbeat/heartbeat.lock, retrying 3 times","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:59.256+0200","log.origin":{"file.name":"locks/lock.go","file.line":79},"message":"Could not obtain lock for file /var/lib/heartbeat/heartbeat.lock, retrying 2 times","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2023-07-27T09:28:59.657+0200","log.origin":{"file.name":"locks/lock.go","file.line":79},"message":"Could not obtain lock for file /var/lib/heartbeat/heartbeat.lock, retrying 1 times","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-07-27T09:29:00.057+0200","log.origin":{"file.name":"instance/beat.go","file.line":426},"message":"heartbeat stopped.","service.name":"heartbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-07-27T09:29:00.057+0200","log.origin":{"file.name":"instance/beat.go","file.line":1274},"message":"Exiting: /var/lib/heartbeat/heartbeat.lock: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data)","service.name":"heartbeat","ecs.version":"1.6.0"}
Exiting: /var/lib/heartbeat/heartbeat.lock: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data)

And my heartbeat.yml:

################### Heartbeat Configuration Example #########################

# This file is an example configuration file highlighting only some common options.
# The heartbeat.reference.yml file in the same directory contains all the supported options
# with detailed comments. You can use it for reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/heartbeat/index.html

############################# Heartbeat ######################################

# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
heartbeat.config.monitors:
  # Directory + glob pattern to search for configuration files
  path: ${path.config}/monitors.d/*.yml
  # If enabled, heartbeat will periodically check the config.monitors path for changes
  reload.enabled: false
  # How often to check for changes
  reload.period: 5s

# Configure monitors inline
heartbeat.monitors:
- type: http
  # Set enabled to true (or delete the following line) to enable this example monitor
  enabled: false
  # ID used to uniquely identify this monitor in elasticsearch even if the config changes
  id: my-monitor
  # Human readable display name for this service in Uptime UI and elsewhere
  name: My Monitor
  # List of urls to query
  urls: ["<myip>"]
  # Configure task schedule
  schedule: '@every 10s'
  # Total test connection and data exchange timeout
  #timeout: 16s
  # Name of corresponding APM service, if Elastic APM is in use for the monitored service.
  #service.name: my-apm-service-name

# Experimental: Set this to true to run heartbeat monitors exactly once at startup
#heartbeat.run_once: true

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
setup.ilm.overwrite: true

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
host: "localhost:8443"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Heartbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["<ip>:9200"]
  username: "elastic"
  password: "*********************"
  protocol: https
  ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
  ssl.key: /etc/elasticsearch/certs/elasticsearch.key
  ssl.certificate_authorities: /etc/elasticsearch/certs/ca/ca.crt
  ssl.verification_mode: none

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - add_observer_metadata:
      # Optional, but recommended geo settings for the location Heartbeat is running in
      #geo:
        # Token describing this location
        #name: us-east-1a
        # Lat, Lon
        #location: "37.926868, -78.024902"


# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
#logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/heartbeat
  name: heartbeat
  keepfiles: 7
  permissions: 0640

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Heartbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
monitoring.enabled: true
# Monitoring credentials (inherited from output.elasticsearch when unset):
monitoring.elasticsearch.username: "elastic"
monitoring.elasticsearch.password: "********************"
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Heartbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the heartbeat.
#instrumentation:
    # Set to true to enable instrumentation of heartbeat.
    #enabled: false

    # Environment in which heartbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Current version of heartbeat is: 7.17.9

If I try to check whether it is working from the Uptime Monitors page, I get the error:

No data has been received from Heartbeat yet

Hi @yari_arcopinto,

Is the error Please make sure that multiple beats... seen in the systemd logs or when manually trying to execute heartbeat?

It seems that there might be two instances of heartbeat trying to use the same data path, most likely one of them being a systemd-managed process. If you want to manually execute heartbeat on a box that is already running it as a service, you'll need to specify a different data path so that they don't interfere with each other.
Please note that running multiple instances of heartbeat will produce multiple monitor runs. On systemd installs, the recommendation is to edit the provided heartbeat.yml and restart the service to load the changes.
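
For example, a one-off manual run can be pointed at its own data directory (a sketch; the temporary path is arbitrary):

# Run heartbeat in the foreground with a separate data path so its lock
# file does not collide with the systemd-managed instance:
sudo heartbeat -e -c /etc/heartbeat/heartbeat.yml --path.data /tmp/heartbeat-manual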

As a side note, No data has been received from Heartbeat yet is displayed in Kibana until it receives at least one monitor execution result. You'll need to specify at least one monitor to run inside heartbeat.yml for it to go away.
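
For instance, a minimal enabled monitor might look like this (a sketch; the id, name and URL are illustrative):

heartbeat.monitors:
- type: http
  id: example-monitor
  name: Example Monitor
  urls: ["https://example.com"]
  schedule: '@every 10s'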

Hope this helps!

Hello @emilioalvap!

Thanks in advance for your support.

By the way, I already tried to change path.data in heartbeat.yml, for example:

From /var/lib/heartbeat to /var/lib/heartbit-elasticsearch, with the following steps (commands sketched after the list):

  • creating the new path /var/lib/heartbit-elasticsearch

  • adding path.data: /var/lib/heartbit-elasticsearch to heartbeat.yml

  • removing the folder /var/lib/heartbeat with rm -r
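
In shell terms (a sketch of the steps above):

sudo mkdir -p /var/lib/heartbit-elasticsearch
# then set in /etc/heartbeat/heartbeat.yml:
#   path.data: /var/lib/heartbit-elasticsearch
sudo rm -r /var/lib/heartbeat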

But when I restarted heartbeat using systemctl restart heartbeat-elastic.service and then checked systemctl status heartbeat-elastic.service, I saw that path.data hadn't changed:

heartbeat-elastic.service - Ping remote services for availability and log results to Elasticsearch or send to Logstash.
   Loaded: loaded (/lib/systemd/system/heartbeat-elastic.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2023-07-28 13:02:30 CEST; 3h 44min ago
     Docs: https://www.elastic.co/beats/heartbeat
 Main PID: 30390 (heartbeat)
    Tasks: 9 (limit: 4915)
   CGroup: /system.slice/heartbeat-elastic.service
           └─30390 /usr/share/heartbeat/bin/heartbeat --environment systemd -c /etc/heartbeat/heartbeat.yml --path.home /usr/share/heartbeat --path.config /etc/heartbeat --path.data /var/lib/heartbeat --path.logs /var/log/heartbeat

and that heartbeat had generated the folder /var/lib/heartbeat again.

So I wasn't able to figure it out. I have the same issue with filebeat, which shows the same error.

Command: filebeat -e
2023-07-28T16:50:12.392+0200    INFO    instance/beat.go:697    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] Hostfs Path: [/]
2023-07-28T16:50:12.393+0200    INFO    instance/beat.go:705    Beat ID: 49e49bf7-b7ea-4313-a060-45e63db0cdaa
2023-07-28T16:50:12.394+0200    WARN    [cfgwarn]       template/config.go:88   DEPRECATED: Please migrate your JSON templates from legacy template format to composable index template. Will be removed in version: 8.0.0
2023-07-28T16:50:12.394+0200    INFO    instance/beat.go:390    filebeat stopped.
2023-07-28T16:50:12.394+0200    ERROR   instance/beat.go:1026   Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).
Exiting: data path already locked by another beat. Please make sure that multiple beats are not sharing the same data path (path.data).

Regarding the processes, here you can see that there are no duplicate processes:

Command: ps aux | grep -i beat
root     10593  0.3  1.6 1348836 101856 ?      Ssl  16:33   0:03 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
root     16931  0.0  0.0  13140  1052 pts/0    S+   16:49   0:00 grep --color=auto -i beat
root     22948  0.9  0.7 1187968 44680 ?       Ssl  Jul27  15:35 /usr/share/metricbeat/bin/metricbeat -e -c /etc/metricbeat/metricbeat.yml -path.home /usr/share/metricbeat -path.config /etc/metricbeat -path.data /var/lib/metricbeat -path.logs /var/log/metricbeat
root     30390  0.1  0.5 1406140 32204 ?       Ssl  13:02   0:16 /usr/share/heartbeat/bin/heartbeat --environment systemd -c /etc/heartbeat/heartbeat.yml --path.home /usr/share/heartbeat --path.config /etc/heartbeat --path.data /var/lib/heartbeat --path.logs /var/log/heartbeat

Thanks in advance!

Hi @yari_arcopinto,

Thanks for providing the additional information.

If you look at the command that systemd uses to launch heartbeat/filebeat, it's overriding the data path specified in heartbeat.yml:

CGroup: /system.slice/heartbeat-elastic.service
           └─30390 /usr/share/heartbeat/bin/heartbeat ... --path.data /var/lib/heartbeat <-- This overrides heartbeat.yml option

CLI options usually take precedence over the .yml file. You'll likely need to edit this option in the <beat>-elastic.service template.
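
For example, a drop-in override along these lines should do it (a sketch; the ExecStart line mirrors the one from your systemctl status output, with --path.data changed to the directory you created earlier):

# /etc/systemd/system/heartbeat-elastic.service.d/override.conf
[Service]
# An empty ExecStart= clears the packaged command before redefining it:
ExecStart=
ExecStart=/usr/share/heartbeat/bin/heartbeat --environment systemd -c /etc/heartbeat/heartbeat.yml --path.home /usr/share/heartbeat --path.config /etc/heartbeat --path.data /var/lib/heartbit-elasticsearch --path.logs /var/log/heartbeat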

Hello @emilioalvap ,

Where can I find the <beat>-elastic.service template?

I'm able to find elastic.service under /usr/lib/systemd/system/, but I don't have <beat>-elastic.service.
Best regards,

Hi @yari_arcopinto,

It should be possible to edit the config file by issuing:

systemctl edit heartbeat-elastic.service

From the logs you provided, it should be here:

 Loaded: loaded (/lib/systemd/system/heartbeat-elastic.service; enabled; vendor preset: enabled)

Please check our guide on editing systemd config files.
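
After saving the override, reloading and inspecting the effective unit should confirm the change (a sketch):

sudo systemctl daemon-reload
sudo systemctl restart heartbeat-elastic.service
# Prints the unit file plus any drop-in overrides, so --path.data can be verified:
systemctl cat heartbeat-elastic.service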

Hi @emilioalvap,

Thanks!

Right now I have fixed it for Heartbeat and Filebeat too, but after changing the Filebeat path.data in filebeat-elastic.service, I'm getting this error:

2023-07-31T15:03:17.701+0200    ERROR   instance/beat.go:1026   Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response: .
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response:

But in filebeat.yml I haven't set any address related to Kibana.

filebeat.yml:

output.elasticsearch.hosts: ["***.**.**.21:9200"]
output.elasticsearch.password: *********************

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: true

setup.template.json.enabled: true
setup.template.json.path: /etc/filebeat/wazuh-template.json
setup.template.json.name: wazuh
setup.template.overwrite: true
setup.ilm.enabled: false

output.elasticsearch.protocol: https
output.elasticsearch.ssl.certificate: /etc/elasticsearch/certs/elasticsearch.crt
output.elasticsearch.ssl.key: /etc/elasticsearch/certs/elasticsearch.key
output.elasticsearch.ssl.certificate_authorities: /etc/elasticsearch/certs/ca/ca.crt
output.elasticsearch.ssl.verification_mode: none
output.elasticsearch.username: *****

logging.metrics.enabled: false


path.home: /usr/share/filebeat
path.config: /etc/filebeat
path.data: /var/lib/filebeat
path.logs: /var/log/filebeat

filebeat.registry.path: ${path.data}/registry

seccomp:
  default_action: allow
  syscalls:
  - action: allow
    names:
    - rseq

This is my kibana.yml:

server.host: 0.0.0.0
server.port: 8443
elasticsearch.hosts: https://***.**.**.21:9200
elasticsearch.password: *****************

# Elasticsearch from/to Kibana

elasticsearch.ssl.certificateAuthorities: /etc/kibana/certs/ca/ca.crt
elasticsearch.ssl.certificate: /etc/kibana/certs/kibana.crt
elasticsearch.ssl.key: /etc/kibana/certs/kibana.key

# Browser from/to Kibana
server.ssl.enabled: false
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key

# Elasticsearch authentication
xpack.security.enabled: true
elasticsearch.username: ********
uiSettings.overrides.defaultRoute: "/app/wazuh"
elasticsearch.ssl.verificationMode: none
telemetry.banner: false
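
For reference, with this kibana.yml Kibana listens on port 8443 over plain HTTP (server.ssl.enabled: false), while the error above shows Filebeat probing the default localhost:5601. A quick reachability check (a sketch, not from the thread):

curl -s http://localhost:8443/api/status   # should answer here
curl -s http://localhost:5601/api/status   # expected: connection refused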

Do I have to set it now?

Best 🙂

Little update.

The issue related to filebeat has been solved.

I just added the following to filebeat.yml:

setup.kibana:
  host: "localhost:8443"
