Filebeat service won't start even though 'test output' and 'setup -e' show positive results

Hello community!

I am quite new to Filebeat and Logstash, and I am looking to ship logs from a Node application into Kibana using Filebeat and Logstash.

I have followed the steps from the documentation provided by the Kibana console itself, and also the links below:

filebeat config doc

filebeat - node doc

After following these steps I am still unable to start the service. Please find my setup below:

filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
    #- 
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "filebeat_internal"
  password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

filebeat setup -e

2023-01-04T08:42:25.580+0530	INFO	instance/beat.go:607	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2023-01-04T08:42:25.581+0530	INFO	instance/beat.go:615	Beat ID: 1892cb3b-a5d9-4fba-8899-04b6127f8859
2023-01-04T08:42:25.583+0530	INFO	[beat]	instance/beat.go:903	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "1892cb3b-a5d9-4fba-8899-04b6127f8859"}}}
2023-01-04T08:42:25.583+0530	INFO	[beat]	instance/beat.go:912	Build info	{"system_info": {"build": {"commit": "f940c36884d3749901a9c99bea5463a6030cdd9c", "libbeat": "7.4.0", "time": "2019-09-27T07:45:44.000Z", "version": "7.4.0"}}}
2023-01-04T08:42:25.583+0530	INFO	[beat]	instance/beat.go:915	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.12.9"}}}
2023-01-04T08:42:25.585+0530	INFO	[beat]	instance/beat.go:919	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2023-01-04T08:23:29+05:30","containerized":false,"name":"sc-hari--HP-ZBook-15","ip":["127.0.0.1/8","::1/128","192.168.0.102/24","2406:7400:63:efbd:d338:1b51:a981:afc0/64","2406:7400:63:efbd:63c9:dc89:abb5:d272/64","fe80::8fee:4840:2c80:df90/64","172.17.0.1/16","172.18.0.1/16","172.19.0.1/16","172.20.0.1/16","172.22.0.1/16","fe80::42:eeff:fe14:f1fa/64","fe80::80e3:4eff:fe9e:cc1d/64","fe80::58e5:7aff:fe21:9189/64","fe80::e455:baff:fe3f:adf4/64","fe80::a82a:66ff:fe1e:a02c/64"],"kernel_version":"5.15.0-56-generic","mac":["38:63:bb:c7:9b:51","80:00:0b:44:69:3e","02:42:95:36:33:1c","02:42:e9:d5:53:00","02:42:77:f1:69:42","02:42:75:32:82:df","02:42:ee:14:f1:fa","82:e3:4e:9e:cc:1d","5a:e5:7a:21:91:89","e6:55:ba:3f:ad:f4","aa:2a:66:1e:a0:2c"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"22.04.1 LTS (Jammy Jellyfish)","major":22,"minor":4,"patch":1,"codename":"jammy"},"timezone":"IST","timezone_offset_sec":19800,"id":"b6280841c18c4d43b98665eb7a615b58"}}}
2023-01-04T08:42:25.586+0530	INFO	[beat]	instance/beat.go:948	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/etc/filebeat", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 7413, "ppid": 7412, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2023-01-04T08:42:24.390+0530"}}}
2023-01-04T08:42:25.586+0530	INFO	instance/beat.go:292	Setup Beat: filebeat; Version: 7.4.0
2023-01-04T08:42:25.586+0530	INFO	[index-management]	idxmgmt/std.go:178	Set output.elasticsearch.index to 'filebeat-7.4.0' as ILM is enabled.
2023-01-04T08:42:25.588+0530	INFO	elasticsearch/client.go:170	Elasticsearch url: http://localhost:9200
2023-01-04T08:42:25.589+0530	INFO	[publisher]	pipeline/module.go:97	Beat name: sc-hari--HP-ZBook-15
2023-01-04T08:42:25.596+0530	INFO	elasticsearch/client.go:170	Elasticsearch url: http://localhost:9200
2023-01-04T08:42:25.614+0530	INFO	elasticsearch/client.go:743	Attempting to connect to Elasticsearch version 7.4.0
2023-01-04T08:42:25.644+0530	INFO	[index-management]	idxmgmt/std.go:252	Auto ILM enable success.
2023-01-04T08:42:25.679+0530	INFO	[index-management]	idxmgmt/std.go:265	ILM policy successfully loaded.
2023-01-04T08:42:25.679+0530	INFO	[index-management]	idxmgmt/std.go:394	Set setup.template.name to '{filebeat-7.4.0 {now/d}-000001}' as ILM is enabled.
2023-01-04T08:42:25.679+0530	INFO	[index-management]	idxmgmt/std.go:399	Set setup.template.pattern to 'filebeat-7.4.0-*' as ILM is enabled.
2023-01-04T08:42:25.679+0530	INFO	[index-management]	idxmgmt/std.go:433	Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.4.0 {now/d}-000001} as ILM is enabled.
2023-01-04T08:42:25.679+0530	INFO	[index-management]	idxmgmt/std.go:437	Set settings.index.lifecycle.name in template to {filebeat-7.4.0 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2023-01-04T08:42:25.683+0530	INFO	template/load.go:169	Existing template will be overwritten, as overwrite is enabled.
2023-01-04T08:42:25.751+0530	INFO	template/load.go:108	Try loading template filebeat-7.4.0 to Elasticsearch
2023-01-04T08:42:25.839+0530	INFO	template/load.go:100	template with name 'filebeat-7.4.0' loaded.
2023-01-04T08:42:25.839+0530	INFO	[index-management]	idxmgmt/std.go:289	Loaded index template.
2023-01-04T08:42:25.842+0530	INFO	[index-management]	idxmgmt/std.go:300	Write alias successfully generated.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-01-04T08:42:25.842+0530	INFO	kibana/client.go:117	Kibana url: http://localhost:5601
2023-01-04T08:42:25.979+0530	INFO	kibana/client.go:117	Kibana url: http://localhost:5601
2023-01-04T08:42:28.584+0530	INFO	add_cloud_metadata/add_cloud_metadata.go:87	add_cloud_metadata: hosting provider type not detected.
2023-01-04T08:43:15.287+0530	INFO	instance/beat.go:777	Kibana dashboards successfully loaded.
Loaded dashboards
2023-01-04T08:43:15.288+0530	INFO	elasticsearch/client.go:170	Elasticsearch url: http://localhost:9200
2023-01-04T08:43:15.294+0530	INFO	elasticsearch/client.go:743	Attempting to connect to Elasticsearch version 7.4.0
2023-01-04T08:43:15.344+0530	INFO	kibana/client.go:117	Kibana url: http://localhost:5601
2023-01-04T08:43:15.384+0530	WARN	fileset/modules.go:419	X-Pack Machine Learning is not enabled
2023-01-04T08:43:15.414+0530	WARN	fileset/modules.go:419	X-Pack Machine Learning is not enabled
Loaded machine learning job configurations
2023-01-04T08:43:15.414+0530	INFO	elasticsearch/client.go:170	Elasticsearch url: http://localhost:9200
2023-01-04T08:43:15.417+0530	INFO	elasticsearch/client.go:743	Attempting to connect to Elasticsearch version 7.4.0
2023-01-04T08:43:15.439+0530	INFO	elasticsearch/client.go:170	Elasticsearch url: http://localhost:9200
2023-01-04T08:43:15.442+0530	INFO	elasticsearch/client.go:743	Attempting to connect to Elasticsearch version 7.4.0
2023-01-04T08:43:15.486+0530	INFO	fileset/pipelines.go:134	Elasticsearch pipeline with ID 'filebeat-7.4.0-logstash-log-pipeline-plain' loaded
2023-01-04T08:43:15.486+0530	INFO	cfgfile/reload.go:264	Loading of config files completed.
2023-01-04T08:43:15.486+0530	INFO	[load]	cfgfile/list.go:118	Stopping 1 runners ...
Loaded Ingest pipelines


filebeat test output

elasticsearch: http://localhost:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 127.0.0.1
    dial up... OK
  TLS... WARN secure connection disabled
  talk to server... OK
  version: 7.4.0

filebeat service status

× filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
     Loaded: loaded (/lib/systemd/system/filebeat.service; disabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2023-01-04 08:43:57 IST; 1s ago
       Docs: https://www.elastic.co/products/beats/filebeat
    Process: 7572 ExecStart=/usr/share/filebeat/bin/filebeat $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited, status=2)
   Main PID: 7572 (code=exited, status=2)
        CPU: 44ms

Jan 04 08:43:57 sc-hari--HP-ZBook-15 filebeat[7572]: rip    0x7fb400537a7c
Jan 04 08:43:57 sc-hari--HP-ZBook-15 filebeat[7572]: rflags 0x246
Jan 04 08:43:57 sc-hari--HP-ZBook-15 filebeat[7572]: cs     0x33
Jan 04 08:43:57 sc-hari--HP-ZBook-15 filebeat[7572]: fs     0x0
Jan 04 08:43:57 sc-hari--HP-ZBook-15 filebeat[7572]: gs     0x0
Jan 04 08:43:57 sc-hari--HP-ZBook-15 systemd[1]: filebeat.service: Scheduled restart job, restart counter is at 5.
Jan 04 08:43:57 sc-hari--HP-ZBook-15 systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Jan 04 08:43:57 sc-hari--HP-ZBook-15 systemd[1]: filebeat.service: Start request repeated too quickly.
Jan 04 08:43:57 sc-hari--HP-ZBook-15 systemd[1]: filebeat.service: Failed with result 'exit-code'.
Jan 04 08:43:57 sc-hari--HP-ZBook-15 systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..

Environment :

  • filebeat version : 7.4.0 (amd64), libbeat 7.4.0 [f940c36884d3749901a9c99bea5463a6030cdd9c built 2019-09-27 07:45:44 +0000 UTC]
  • Elasticsearch version : 7.4.0
  • logstash : not installed, as the documentation did not instruct me to install it.
  • Ubuntu : 22.04 LTS (Jammy)

Hope these details are sufficient. Please do let me know if any further logs are required.

Elasticsearch version 7.4 is EOL and no longer supported. Please upgrade ASAP.

(This is an automated response from your friendly Elastic bot. Please report this post if you have any suggestions or concerns :elasticheart: )

Hi @Hari_Krishnan1 welcome to the community.

If that is your full filebeat.yml...
No inputs are enabled, so there is nothing to run and Filebeat exits.
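For example, a minimal input section that actually gives Filebeat something to run could look like this (a sketch only — the path is a placeholder; point it at your Node application's actual log location):

```yaml
filebeat.inputs:
- type: log
  # Must be true, otherwise this input is ignored entirely
  enabled: true
  # Hypothetical example path - substitute your application's log directory
  paths:
    - /var/log/myapp/*.log
```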

You can check the logs with

journalctl -u filebeat.service

Also, you need to upgrade to a newer version as a matter of urgency.

This is about beats, but I have seen confusion about this with logstash a few times. Sometimes folks want to start with a minimal configuration and gradually build out, but their minimal configuration is too small for things to work, so they get stuck at step one.

Perhaps the documentation should mention that the configuration has to process events/docs, so that it doesn't just shut down without doing any work?

To me (having spent a lot of time trying to understand this behaviour[1]) it is sometimes obvious and sometimes incomprehensible.

1 -- Is a schedule a pre-requisite for a jdbc input to not shut down the pipeline?

Thanks for the welcome!

I have done the following, as per my understanding:

  • I have set the input to enabled: true and added my log path (*.log)
  • ran setup again
  • the service still fails in the same way

Also, FYI, here is my understanding of the log flow:

log files → Filebeat → Elasticsearch ← Kibana

Please find the logs below, as requested:

journalctl -u filebeat.service

Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: runtime/cgo: pthread_create failed: Operation not permitted
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: SIGABRT: abort
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: PC=0x7f6387b6ca7c m=10 sigcode=18446744073709551610
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: goroutine 0 [idle]:
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: runtime: unknown pc 0x7f6387b6ca7c
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: stack: frame={sp:0x7f637d619820, fp:0x0} stack=[0x7f637ce1a1e8,0x7f637d619de8)
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619720:  0000000000000000  00007f637d619750
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619730:  00000000012c01ad <runtime.scanstack.func1+61>  00007f637d6199d0
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619740:  00007f637d619ae0  000000c000068670
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619750:  00007f637d619a28  00000000012b937c <runtime.gentraceback+5020>
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619760:  00007f637d6199d0  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619770:  00000000012c4a01 <runtime.aeshash32+1>  00007f637d619840
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619780:  0000000000000000  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619790:  0000000000000000  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197a0:  0000000000000001  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197b0:  0000000000000000  0000000000000130
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197c0:  0000000000000000  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197d0:  0000001300000000  0000000000000120
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197e0:  000000c0003d27d0  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6197f0:  0000000000000000  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619800:  0000000000000004  0000003400000013
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619810:  0000000000000000  00007f6387b6ca6e
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619820: <0000000000000000  000000770000007c
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619830:  0000005b0000006e  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619840:  00000000012c4ad1 <runtime.goexit+1>  00007f6387bfca51
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619850:  00007f635b7fe640  00007f637d619b30
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619860:  00007f637d6199ae  00007f637d6199af
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619870:  0000000000000000  00007f6387b6a759
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619880:  00000000007fff00  0000000000000000
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619890:  00000000003d0f00  00007f635b7fe910
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198a0:  00007f635b7fe910  c3e7421e53c32600
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198b0:  00007f637d61a640  0000000000000006
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198c0:  00000000067700c0  0000000000000011
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198d0:  00000000033348e8  00007f6387b18476
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198e0:  00007f6387cf0e90  00007f6387afe7f3
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d6198f0:  0000000000000020  00007f635b7fe640
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619900:  0000000000000000  0000000000000001
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: 00007f637d619910:  00007f635b7fe640  00007f6387b6b5c4
Jan 02 22:31:51 sc-hari--HP-ZBook-15 filebeat[47356]: runtime: unknown pc 0x7f6387b6ca7c

Hope these details are sufficient.

One thing: you are not referencing the correct docs. You are looking at the 7.17 docs (and 7.17 is the version you should be using), but you say you are running 7.4, so you should use the docs that match your version.

Also, you can follow the journal while the service restarts:

journalctl -u filebeat.service -f

I think the interesting lines were before the section you showed.

You could also set

logging.level: debug
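In filebeat.yml that could look like the following (a sketch using the standard libbeat logging options; adjust the path to suit your setup):

```yaml
logging.level: debug
# Optionally write logs to files as well, so a crash leaves something readable
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
```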

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.