Logs do not show up in Kibana

Hi - I am having a little trouble pulling logs into Elasticsearch. Currently, on the main page under the Discover menu, nothing shows up. I have Winlogbeat and Sysmon installed as services on my Windows machine, while my Linux machine has Auditbeat and Filebeat installed. All three options show up for me to choose from as index patterns, but no data is being pulled. I ran auditbeat setup -e and winlogbeat setup -e, and both completed successfully.

From the Linux machine, auditbeat setup -e ended with:

successfully loaded.
Loaded dashboards

The Elastic Stack server is running on a Kali Linux machine, with bridged-mode networking.

If needed, I can provide the configuration files for Elasticsearch, Filebeat, and Auditbeat from the Linux machine.

At the same time, I just want to emphasize that all of these Beats are pointed at the same address, 192.168.2.18:5601.

Could someone also guide me on how to use SSL certificates for HTTP and TLS? I am following this Medium post, but elasticsearch-certutil is not letting me create a .pem for Kibana, Logstash, and Elasticsearch. URL: Configuring SSL, TLS, and HTTPS to secure Elastic Stack (Single-Node) | by Steven Audy | sera-engineering | Medium
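For context, the flow I am attempting from that post looks roughly like this (file names are illustrative, and the --dns/--ip values must match how each service is actually reached):

```shell
# Generate a CA in PEM form (the zip contains ca.crt and ca.key)
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem --out ca.zip
unzip ca.zip

# Issue a PEM certificate per service, signed by that CA
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
  --ca-cert ca/ca.crt --ca-key ca/ca.key \
  --name kibana --dns kali --ip 192.168.2.18 --out kibana.zip

# Alternatively, an existing PKCS#12 bundle can be split into PEM with openssl
openssl pkcs12 -in elastic-certificates.p12 -out node.crt -clcerts -nokeys
openssl pkcs12 -in elastic-certificates.p12 -out node.key -nocerts -nodes
```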

Welcome to our community! :smiley:

What is the output from GET _cat/indices?v in Dev Tools?

Thanks again for the prompt response. Here is the GET _cat/indices?v output from the command prompt:


C:\Program Files\Winlogbeat>curl -X GET "http://192.168.2.18:9200/_cat/indices?v"
health status index                                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .geoip_databases                       HJ8nfNE6RQCRk3WtbTjoTg   1   0         43            3     43.8mb         43.8mb
yellow open   auditbeat-7.17.10-2023.05.22-000001    3xOnvyeUQMmNJwOdTZbrVQ   1   1          0            0       227b           227b
green  open   .apm-custom-link                       KipPBuAjQ-mqDhFsX0qx_w   1   0          0            0       227b           227b
yellow open   .ds-winlogbeat-8.8.0-2023.05.26-000001 KdgkJW3SQ5KBABE4tSpqMQ   1   1          0            0       227b           227b
green  open   .apm-agent-configuration               TAI3jvTATc2hokZ89gutvw   1   0          0            0       227b           227b
green  open   .kibana_7.17.10_001                    AdQVjk1TS9uaBK0KirCpug   1   0       3660          713      3.9mb          3.9mb
green  open   .async-search                          0wQb02e8QxiX1AV-CuL2uA   1   0          0            0     10.2kb         10.2kb
green  open   .kibana_task_manager_7.17.10_001       y_5eoUqzS8WfzXgCwqte8A   1   0         17        35481      4.4mb          4.4mb
yellow open   filebeat-7.17.10-2023.05.28-000001     DayMqCcQTYaTIHOZWVDAEQ   1   1          0            0       227b           227b
green  open   .tasks                                 gH9KsmsjS5CaEo-cv--AAQ   1   0          4            0     27.4kb         27.4kb

And this is from Dev Tools:


#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-minimal-setup.html to enable security.
health status index                                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   auditbeat-7.17.10-2023.05.22-000001    3xOnvyeUQMmNJwOdTZbrVQ   1   1          0            0       227b           227b
green  open   .geoip_databases                       HJ8nfNE6RQCRk3WtbTjoTg   1   0         43            3     43.8mb         43.8mb
green  open   .apm-custom-link                       KipPBuAjQ-mqDhFsX0qx_w   1   0          0            0       227b           227b
yellow open   .ds-winlogbeat-8.8.0-2023.05.26-000001 KdgkJW3SQ5KBABE4tSpqMQ   1   1          0            0       227b           227b
green  open   .kibana_7.17.10_001                    AdQVjk1TS9uaBK0KirCpug   1   0       3667          721      3.9mb          3.9mb
green  open   .apm-agent-configuration               TAI3jvTATc2hokZ89gutvw   1   0          0            0       227b           227b
green  open   .async-search                          0wQb02e8QxiX1AV-CuL2uA   1   0          0            4      3.5kb          3.5kb
green  open   .kibana_task_manager_7.17.10_001       y_5eoUqzS8WfzXgCwqte8A   1   0         17        36012      4.4mb          4.4mb
yellow open   filebeat-7.17.10-2023.05.28-000001     DayMqCcQTYaTIHOZWVDAEQ   1   1          0            0       227b           227b
green  open   .tasks                                 gH9KsmsjS5CaEo-cv--AAQ   1   0          4            0     27.4kb         27.4kb

Ok, then you have indices created by Filebeat, Auditbeat, and Winlogbeat, but there's nothing in them.
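As an aside, you can spot the empty indices programmatically from the same _cat output; a minimal sketch, assuming the default column layout shown above:

```python
def empty_indices(cat_output: str) -> list[str]:
    """Return the names of indices whose docs.count is 0."""
    lines = cat_output.strip().splitlines()
    header = lines[0].split()
    idx_col = header.index("index")        # column holding the index name
    docs_col = header.index("docs.count")  # column holding the doc count
    empty = []
    for line in lines[1:]:
        fields = line.split()
        if int(fields[docs_col]) == 0:
            empty.append(fields[idx_col])
    return empty

sample = """\
health status index                               uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open   auditbeat-7.17.10-2023.05.22-000001 x    1   1   0          0            227b       227b
green  open   .tasks                              y    1   0   4          0            27.4kb     27.4kb
"""
print(empty_indices(sample))  # → ['auditbeat-7.17.10-2023.05.22-000001']
```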

What do the logs from each of these Beats show?

Nothing shows up, as indicated below for each of these beats.

Thank you for the assistance.

Here it is for Winlogbeat:

The same for Auditbeat:

I mean the logs for the Beat process itself: not the events it processes, but the logs it generates while it is running.

How can I pull these up?

You will need to be on the host that has the Beat installed and then it's usually under /var/log/$beatname.
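For example (exact file names vary by version and install method, so treat these as a sketch):

```shell
# On the Linux host (deb/rpm installs log under /var/log/<beatname>)
tail -n 50 /var/log/auditbeat/auditbeat
journalctl -u auditbeat --no-pager | tail -n 50

# On the Windows host, the default install writes to the logs subdirectory,
# e.g. C:\Program Files\Winlogbeat\logs
```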

At the same time, I changed my elasticsearch.yml file a bit to accommodate http.ssl; since then I have not been able to connect the Beats to Elasticsearch. I am also going to post my elasticsearch.yml file below:


└─# service elasticsearch start 

Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xeu elasticsearch.service" for details.


Job for elasticsearch.service failed because the control process exited with error code.

┌──(root㉿kali)-[/home/kali]

└─# service elasticsearch status   
× elasticsearch.service - LSB: Starts elasticsearch
     Loaded: loaded (/etc/init.d/elasticsearch; generated)
     Active: failed (Result: exit-code) since Tue 2023-05-30 10:37:40 EDT; 11s ago
       Docs: man:systemd-sysv-generator(8)
    Process: 334444 ExecStart=/etc/init.d/elasticsearch start (code=exited, status=1/FAILURE)
        CPU: 49ms

May 30 10:37:40 kali systemd[1]: Starting elasticsearch.service - LSB: Starts elasticsearch...
May 30 10:37:40 kali elasticsearch[334444]: The elasticsearch startup script does not exists or it is not executable, tried: /usr/share/elasticsearch/bin/elasticsearch
May 30 10:37:40 kali systemd[1]: elasticsearch.service: Control process exited, code=exited, status=1/FAILURE
May 30 10:37:40 kali systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
May 30 10:37:40 kali systemd[1]: Failed to start elasticsearch.service - LSB: Starts elasticsearch.
   

┌──(root㉿kali)-[/home/kali]

└─# service elasticsearch start 
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xeu elasticsearch.service" for details.
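The journal line above says the startup script is missing; a few standard checks for a .deb install (paths assume the default package layout):

```shell
# Is the binary the init script expects actually present?
ls -l /usr/share/elasticsearch/bin/elasticsearch

# Is the package still intact according to dpkg?
dpkg -l elasticsearch
dpkg -V elasticsearch   # verify installed files against the package manifest

# Full startup log, in case the failure is elsewhere
journalctl -xeu elasticsearch.service --no-pager | tail -n 60
```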


From our Linux machine:

└─# auditbeat setup -e               


2023-05-30T10:36:15.546-0400    INFO    instance/beat.go:698    Home path: [/usr/share/auditbeat] Config path: [/etc/auditbeat] Data path: [/var/lib/auditbeat] Logs path: [/var/log/auditbeat] Hostfs Path: [/]
2023-05-30T10:36:15.625-0400    INFO    instance/beat.go:706    Beat ID: e68758c6-798e-4329-8ef5-33017a1c862b
2023-05-30T10:36:18.649-0400    WARN    [add_cloud_metadata]    add_cloud_metadata/provider_aws_ec2.go:79    read token request for getting IMDSv2 token returns empty: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.
2023-05-30T10:36:18.892-0400    INFO    [beat]  instance/beat.go:1052   Beat info       {"system_info": {"beat": {"path": {"config": "/etc/auditbeat", "data": "/var/lib/auditbeat", "home": "/usr/share/auditbeat", "logs": "/var/log/auditbeat"}, "type": "auditbeat", "uuid": "e68758c6-798e-4329-8ef5-33017a1c862b"}}}
2023-05-30T10:36:18.893-0400    INFO    [beat]  instance/beat.go:1061   Build info      {"system_info": {"build": {"commit": "78a342312954e587301b653093954ff7ee4d4f2b", "libbeat": "7.17.10", "time": "2023-04-23T08:09:56.000Z", "version": "7.17.10"}}}
2023-05-30T10:36:18.893-0400    INFO    [beat]  instance/beat.go:1064   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.19.7"}}}
2023-05-30T10:36:18.893-0400    INFO    [beat]  instance/beat.go:1070   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2023-05-29T17:55:35-04:00","containerized":false,"name":"kali","ip":["127.0.0.1","::1","192.168.2.18","fe80::a00:27ff:feb1:9d67","fe80::a00:27ff:feda:76e8","172.17.0.1","172.20.0.1","172.18.0.1"],"kernel_version":"6.1.0-kali7-amd64","mac":["08:00:27:b1:9d:67","08:00:27:da:76:e8","02:42:d3:b9:da:1e","02:42:30:56:51:a1","02:42:a1:0b:66:4d"],"os":{"type":"linux","family":"","platform":"kali","name":"Kali GNU/Linux","version":"2023.1","major":2023,"minor":1,"patch":0,"codename":"kali-rolling"},"timezone":"EDT","timezone_offset_sec":-14400,"id":"3095ed18a81a4f50ba21f01bf6332087"}}}
2023-05-30T10:36:18.894-0400    INFO    [beat]  instance/beat.go:1099   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/home/kali", "exe": "/usr/share/auditbeat/bin/auditbeat", "name": "auditbeat", "pid": 333679, "ppid": 290806, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2023-05-30T10:36:08.940-0400"}}}
2023-05-30T10:36:18.894-0400    INFO    instance/beat.go:292    Setup Beat: auditbeat; Version: 7.17.10
2023-05-30T10:36:18.894-0400    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'auditbeat-7.17.10' as ILM is enabled.
2023-05-30T10:36:18.894-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-05-30T10:36:18.901-0400    INFO    [publisher]     pipeline/module.go:113  Beat name: kali
2023-05-30T10:36:18.904-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-05-30T10:36:18.941-0400    ERROR   [esclientleg]   transport/logging.go:37 Error dialing dial tcp 192.168.2.18:9200: connect: connection refused        {"network": "tcp", "address": "192.168.2.18:9200"}
2023-05-30T10:36:18.942-0400    ERROR   [esclientleg]   eslegclient/connection.go:232   error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connect: connection refused
2023-05-30T10:36:18.943-0400    ERROR   instance/beat.go:1027   Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connect: connection refused]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connect: connection refused]
                                                                                                     


From our Windows host machine:


C:\Program Files\Winlogbeat>winlogbeat.exe setup -e
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.136-0400","log.origin":{"file.name":"instance/beat.go","file.line":779},"message":"Home path: [C:\\Program Files\\Winlogbeat] Config path: [C:\\Program Files\\Winlogbeat] Data path: [C:\\Program Files\\Winlogbeat\\data] Logs path: [C:\\Program Files\\Winlogbeat\\logs]","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.141-0400","log.origin":{"file.name":"instance/beat.go","file.line":787},"message":"Beat ID: 8b031e24-391e-40d2-8773-be0064eae638","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2023-05-30T10:43:21.896-0400","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/provider_aws_ec2.go","file.line":81},"message":"read token request for getting IMDSv2 token returns empty: Put \"http://169.254.169.254/latest/api/token\": dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.. No token in the metadata request will be used.","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.897-0400","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1299},"message":"Beat info","service.name":"winlogbeat","system_info":{"beat":{"path":{"config":"C:\\Program Files\\Winlogbeat","data":"C:\\Program Files\\Winlogbeat\\data","home":"C:\\Program Files\\Winlogbeat","logs":"C:\\Program Files\\Winlogbeat\\logs"},"type":"winlogbeat","uuid":"8b031e24-391e-40d2-8773-be0064eae638"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.898-0400","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1308},"message":"Build info","service.name":"winlogbeat","system_info":{"build":{"commit":"ae3e3f9194a937d20197a7be5d3cbbacaceeb9cc","libbeat":"8.8.0","time":"2023-05-23T01:36:11.000Z","version":"8.8.0"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.899-0400","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1311},"message":"Go runtime info","service.name":"winlogbeat","system_info":{"go":{"os":"windows","arch":"amd64","max_procs":4,"version":"go1.19.9"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-05-30T10:43:21.899-0400","log.logger":"add_cloud_metadata","log.origin":{"file.name":"add_cloud_metadata/add_cloud_metadata.go","file.line":100},"message":"add_cloud_metadata: hosting provider type not detected.","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:22.179-0400","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1317},"message":"Host info","service.name":"winlogbeat","system_info":{"host":{"architecture":"x86_64","boot_time":"2023-05-27T03:24:47-04:00","name":"DESKTOP-66BME4Q","ip":["fe80::565e:d1c3:4ee6:9dd0","fe80::3928:2a82:86cc:a3a","169.254.89.133","fe80::8fd4:5421:717a:acc9","100.120.250.127","fe80::fc6f:4143:1b38:4230","192.168.204.1","fe80::1a2f:143:1ebb:c89b","192.168.56.1","fe80::a393:2cd0:6b57:431c","169.254.238.169","fe80::1ba2:75db:9b0c:e934","169.254.132.80","fe80::5e8f:71f7:3d0:9917","192.168.2.11","fe80::2af8:1d8f:1e9e:f593","169.254.98.231","::1","127.0.0.1"],"kernel_version":"10.0.19041.2965 (WinBuild.160101.0800)","mac":["00:05:9a:3c:7a:00","48:ba:4e:af:d3:3e","0a:00:27:00:00:0a","0a:00:27:00:00:07","f4:96:34:ee:4f:b3","f6:96:34:ee:4f:b2","f4:96:34:ee:4f:b2","f4:96:34:ee:4f:b6"],"os":{"type":"windows","family":"windows","platform":"windows","name":"Windows 10 Home","version":"10.0","major":10,"minor":0,"patch":0,"build":"19045.2965"},"timezone":"EDT","timezone_offset_sec":-14400,"id":"65a3e0e1-d18d-4baf-a4f8-4beb1d0b9b21"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-05-30T10:43:22.179-0400","log.logger":"beat","log.origin":{"file.name":"instance/beat.go","file.line":1346},"message":"Process info","service.name":"winlogbeat","system_info":{"process":{"cwd":"C:\\Program Files\\Winlogbeat","exe":"C:\\Program Files\\Winlogbeat\\winlogbeat.exe","name":"winlogbeat.exe","pid":20380,"ppid":4932,"start_time":"2023-05-30T10:42:59.673-0400"},"ecs.version":"1.6.0"}}
{"log.level":"info","@timestamp":"2023-05-30T10:43:22.182-0400","log.origin":{"file.name":"instance/beat.go","file.line":330},"message":"Setup Beat: winlogbeat; Version: 8.8.0","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:24.870-0400","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":108},"message":"elasticsearch url: http://192.168.2.18:9200","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:24.871-0400","log.logger":"publisher","log.origin":{"file.name":"pipeline/module.go","file.line":105},"message":"Beat name: DESKTOP-66BME4Q","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:24.872-0400","log.logger":"winlogbeat","log.origin":{"file.name":"beater/winlogbeat.go","file.line":70},"message":"State will be read from and persisted to C:\\Program Files\\Winlogbeat\\data\\.winlogbeat.yml","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2023-05-30T10:43:24.873-0400","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":108},"message":"elasticsearch url: http://192.168.2.18:9200","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-30T10:43:26.880-0400","log.logger":"esclientleg","log.origin":{"file.name":"transport/logging.go","file.line":38},"message":"Error dialing dial tcp 192.168.2.18:9200: connectex: No connection could be made because the target machine actively refused it.","service.name":"winlogbeat","network":"tcp","address":"192.168.2.18:9200","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-30T10:43:26.880-0400","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":235},"message":"error connecting to Elasticsearch at http://192.168.2.18:9200: Get \"http://192.168.2.18:9200\": dial tcp 192.168.2.18:9200: connectex: No connection could be made because the target machine actively refused it.","service.name":"winlogbeat","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2023-05-30T10:43:26.882-0400","log.origin":{"file.name":"instance/beat.go","file.line":1274},"message":"Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.2.18:9200: Get \"http://192.168.2.18:9200\": dial tcp 192.168.2.18:9200: connectex: No connection could be made because the target machine actively refused it.]","service.name":"winlogbeat","ecs.version":"1.6.0"}
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connectex: No connection could be made because the target machine actively refused it.]



Here is my elasticsearch.yml; please check whether there are any corrections to be made. Everything worked fine prior to integrating HTTP SSL, but since implementing it, nothing works:


# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 192.168.2.18
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.2.18"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don't have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features.
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  key: certs/elastic/elastic.key
  certificate: certs/elastic/elastic.crt
  certificate_authorities: certs/ca/ca.crt

# Enable encryption and mutual authentication between cluster nodes
#xpack.security.transport.ssl:
#  enabled: true
#  verification_mode: certificate
#  keystore.path: certs/transport.p12
#  truststore.path: certs/transport.p12

# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["elastic"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
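One thing worth noting: once xpack.security.http.ssl is enabled in elasticsearch.yml, every Beat's output must switch to https and trust the same CA, or the connection will be refused or fail the TLS handshake. A sketch for auditbeat.yml (paths and credentials are illustrative; the CA must be the one that signed the HTTP certificate above):

```yaml
output.elasticsearch:
  hosts: ["https://192.168.2.18:9200"]
  username: "elastic"
  password: "${ES_PWD}"   # illustrative; prefer the Beats keystore in practice
  ssl.certificate_authorities: ["/etc/auditbeat/certs/ca/ca.crt"]
```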

Finally, here are my Elasticsearch/Beats logs:

┌──(root㉿kali)-[/var/log]                                                                                                    
└─# journalctl  -xeu  elasticsearch


░░ Support: https://www.debian.org/support   
     May 30 10:37:40 kali systemd[1]: Starting elasticsearch.service - LSB: Starts elasticsearch...
░░ Subject: A start job for unit elasticsearch.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit elasticsearch.service has begun execution.
░░ 
░░ The job identifier is 4266.
May 30 10:37:40 kali elasticsearch[334444]: The elasticsearch startup script does not exists or it is not executable, tried: /usr/share/elasticsearch/bin/elasticsearch
May 30 10:37:40 kali systemd[1]: elasticsearch.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ An ExecStart= process belonging to unit elasticsearch.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
May 30 10:37:40 kali systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit elasticsearch.service has entered the 'failed' state with result 'exit-code'.
May 30 10:37:40 kali systemd[1]: Failed to start elasticsearch.service - LSB: Starts elasticsearch.
░░ Subject: A start job for unit elasticsearch.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit elasticsearch.service has finished with a failure.
░░ 
░░ The job identifier is 4266 and the job result is failed.
May 30 10:38:01 kali systemd[1]: Starting elasticsearch.service - LSB: Starts elasticsearch...
░░ Subject: A start job for unit elasticsearch.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit elasticsearch.service has begun execution.
░░ 
░░ The job identifier is 4338.
May 30 10:38:01 kali elasticsearch[334671]: The elasticsearch startup script does not exists or it is not executable, tried: /usr/share/elasticsearch/bin/elasticsearch
May 30 10:38:01 kali systemd[1]: elasticsearch.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ An ExecStart= process belonging to unit elasticsearch.service has exited.
░░ 
░░ The process' exit code is 'exited' and its exit status is 1.
May 30 10:38:01 kali systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit elasticsearch.service has entered the 'failed' state with result 'exit-code'.
May 30 10:38:01 kali systemd[1]: Failed to start elasticsearch.service - LSB: Starts elasticsearch.
░░ Subject: A start job for unit elasticsearch.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit elasticsearch.service has finished with a failure.
░░ 
░░ The job identifier is 4338 and the job result is failed.


┌──(root㉿kali)-[/home/kali]
└─# filebeat  setup -e 
2023-05-30T11:04:54.431-0400    INFO    instance/beat.go:698    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] Hostfs Path: [/]
2023-05-30T11:04:54.473-0400    INFO    instance/beat.go:706    Beat ID: b444d1c7-9c59-400c-987f-5aa7c0891ded
2023-05-30T11:04:57.548-0400    WARN    [add_cloud_metadata]    add_cloud_metadata/provider_aws_ec2.go:79       read token request for getting IMDSv2 token returns empty: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.
2023-05-30T11:04:57.699-0400    INFO    [beat]  instance/beat.go:1052   Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "b444d1c7-9c59-400c-987f-5aa7c0891ded"}}}
2023-05-30T11:04:57.699-0400    INFO    [beat]  instance/beat.go:1061   Build info      {"system_info": {"build": {"commit": "78a342312954e587301b653093954ff7ee4d4f2b", "libbeat": "7.17.10", "time": "2023-04-23T09:00:42.000Z", "version": "7.17.10"}}}
2023-05-30T11:04:57.699-0400    INFO    [beat]  instance/beat.go:1064   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.19.7"}}}
2023-05-30T11:04:57.700-0400    INFO    [beat]  instance/beat.go:1070   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2023-05-29T17:55:35-04:00","containerized":false,"name":"kali","ip":["127.0.0.1","::1","192.168.2.18","fe80::a00:27ff:feb1:9d67","fe80::a00:27ff:feda:76e8","172.17.0.1","172.20.0.1","172.18.0.1"],"kernel_version":"6.1.0-kali7-amd64","mac":["08:00:27:b1:9d:67","08:00:27:da:76:e8","02:42:d3:b9:da:1e","02:42:30:56:51:a1","02:42:a1:0b:66:4d"],"os":{"type":"linux","family":"","platform":"kali","name":"Kali GNU/Linux","version":"2023.1","major":2023,"minor":1,"patch":0,"codename":"kali-rolling"},"timezone":"EDT","timezone_offset_sec":-14400,"id":"3095ed18a81a4f50ba21f01bf6332087"}}}
2023-05-30T11:04:57.701-0400    INFO    [beat]  instance/beat.go:1099   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read","38","39","40"],"ambient":null}, "cwd": "/home/kali", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 348625, "ppid": 290841, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2023-05-30T11:04:46.580-0400"}}}
2023-05-30T11:04:57.701-0400    INFO    instance/beat.go:292    Setup Beat: filebeat; Version: 7.17.10
2023-05-30T11:04:57.701-0400    INFO    [index-management]      idxmgmt/std.go:184      Set output.elasticsearch.index to 'filebeat-7.17.10' as ILM is enabled.
2023-05-30T11:04:57.702-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-05-30T11:04:57.781-0400    INFO    [publisher]     pipeline/module.go:113  Beat name: kali
2023-05-30T11:04:57.899-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-05-30T11:04:57.901-0400    ERROR   [esclientleg]   transport/logging.go:37 Error dialing dial tcp 192.168.2.18:9200: connect: connection refused {"network": "tcp", "address": "192.168.2.18:9200"}
2023-05-30T11:04:57.901-0400    ERROR   [esclientleg]   eslegclient/connection.go:232   error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connect: connection refused
2023-05-30T11:04:57.902-0400    ERROR   instance/beat.go:1027   Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.2.18:9200: Get "http://192.168.2.18:9200": dial tcp 192.168.2.18:9200: connect: connection refused]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http
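Since the Beat reports "connection refused", the first thing worth checking is whether anything is actually listening on that address and port before touching the Beat config. A small probe sketch (hypothetical helper, mimicking the TCP connect the Beat performs):

```shell
# probe_tcp HOST PORT: attempt a plain TCP connect, the same first step a
# Beat takes, and print "open" or "refused-or-filtered".
probe_tcp() {
  local host="$1" port="$2"
  # /dev/tcp is a bash feature, so run the connect inside bash explicitly.
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "refused-or-filtered"
  fi
}

# The address the Beats here are configured against; prints "open" only
# when Elasticsearch (or the Docker port mapping) is actually listening.
probe_tcp 192.168.2.18 9200
```

If this prints "refused-or-filtered" while Elasticsearch is supposedly running, the problem is the ES bind address or the Docker port mapping, not the Beat.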

Here is my kibana.yml:


# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.2.18"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "kibana-singlenode"

# The URLs of the Elasticsearch instances to use for all your queries.

elasticsearch.hosts: ["http://192.168.2.18:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# You may use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"



# -----------------------------System: Kibana Server  (Optional)  ---------------------------------# 

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.

server.ssl.enabled: true
server.ssl.certificateAuthorities: ["/etc/kibana/certs/elasticcerts/ca.crt"]
server.ssl.certificate: /etc/kibana/certs/kibanacerts/kibana.crt
server.ssl.key: /etc/kibana/certs/kibanacerts/kibana.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
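One thing to double-check in the config above: once `server.ssl.enabled: true` is set, every Beat that loads dashboards has to reach Kibana over HTTPS and trust its CA, or the `setup` step will fail against `http://…:5601`. A sketch of the matching Beat-side fragment (the CA path is taken from the kibana.yml above; adjust to your layout):

```yaml
# filebeat.yml / auditbeat.yml / winlogbeat.yml (illustrative fragment)
setup.kibana:
  host: "https://192.168.2.18:5601"
  ssl.certificate_authorities: ["/etc/kibana/certs/elasticcerts/ca.crt"]
```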


┌──(root㉿kali)-[/usr/share/kibana/bin]
└─# dpkg -s kibana                     
Package: kibana
Status: install ok installed
Priority: optional
Section: default
Installed-Size: 676461
Maintainer: Kibana Team <info@elastic.co>
Architecture: amd64
Version: 7.17.10
Conffiles:
 /etc/default/kibana 1acc555ffb9bbd043915e6eb58a13e87
 /etc/init.d/kibana 5c8266b890bac4f30318652278482083
 /etc/kibana/kibana.yml 82aed7c451612e059cf132cd3da6d15f
 /etc/kibana/node.options 2d137e596cf08d7ba5934effed3b723b
 /etc/systemd/system/kibana.service d9cf6ff125f5ba348a1685f0bea7da7d
Description: Explore and visualize your Elasticsearch data
License: Elastic-License
Vendor: Elasticsearch, Inc.
Homepage: https://www.elastic.co

Recently I constructed a docker-compose.yml file that starts both the Kibana and Elasticsearch instances:

Docker-compose.yml 



version: '2.2'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - network.host=0.0.0.0
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 192.168.2.18:9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      SERVER_HOST: "0.0.0.0"
    ports:
      - 192.168.2.18:5601:5601
    depends_on:
      - elasticsearch

volumes:
  esdata1:
    driver: local

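Minor note on the compose file: for 7.x Kibana Docker images the recognized environment variable is `ELASTICSEARCH_HOSTS` (`ELASTICSEARCH_URL` was the 6.x name). A fragment under that assumption:

```yaml
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
      SERVER_HOST: "0.0.0.0"
```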
Auditbeat works with this docker-compose setup; here is the output of `auditbeat setup -e`:


2023-06-03T14:54:33.931-0400    INFO    [publisher]     pipeline/module.go:113  Beat name: kali
2023-06-03T14:54:33.933-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-06-03T14:54:34.058-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0
Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

2023-06-03T14:54:34.187-0400    INFO    [index-management]      idxmgmt/std.go:260      Auto ILM enable success.
2023-06-03T14:54:35.104-0400    INFO    [index-management.ilm]  ilm/std.go:170  ILM policy auditbeat exists already.
2023-06-03T14:54:35.104-0400    INFO    [index-management]      idxmgmt/std.go:396      Set setup.template.name to '{auditbeat-7.17.10 {now/d}-000001}' as ILM is enabled.
2023-06-03T14:54:35.104-0400    INFO    [index-management]      idxmgmt/std.go:401      Set setup.template.pattern to 'auditbeat-7.17.10-*' as ILM is enabled.
2023-06-03T14:54:35.104-0400    INFO    [index-management]      idxmgmt/std.go:435      Set settings.index.lifecycle.rollover_alias in template to {auditbeat-7.17.10 {now/d}-000001} as ILM is enabled.
2023-06-03T14:54:35.104-0400    INFO    [index-management]      idxmgmt/std.go:439      Set settings.index.lifecycle.name in template to {auditbeat {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2023-06-03T14:54:35.959-0400    INFO    template/load.go:197    Existing template will be overwritten, as overwrite is enabled.
2023-06-03T14:54:45.355-0400    INFO    template/load.go:131    Try loading template auditbeat-7.17.10 to Elasticsearch
2023-06-03T14:54:49.661-0400    INFO    template/load.go:123    Template with name "auditbeat-7.17.10" loaded.
2023-06-03T14:54:49.661-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-03T14:54:50.533-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias auditbeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-03T14:54:50.570-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-03T14:54:55.125-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-03T14:55:16.024-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.

Loaded dashboards.




However, when I try to connect `filebeat` to the same Docker containers hosting Kibana and Elasticsearch, it fails:


2023-06-03T00:36:59.970-0400	INFO	instance/beat.go:698	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] Hostfs Path: [/]
2023-06-03T00:37:00.049-0400	INFO	instance/beat.go:706	Beat ID: b444d1c7-9c59-400c-987f-5aa7c0891ded
2023-06-03T00:37:03.064-0400	WARN	[add_cloud_metadata]	add_cloud_metadata/provider_aws_ec2.go:79	read token request for getting IMDSv2 token returns empty: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.
2023-06-03T00:37:03.132-0400	INFO	[index-management]	idxmgmt/std.go:184	Set output.elasticsearch.index to 'filebeat-7.17.10' as ILM is enabled.
2023-06-03T00:37:03.132-0400	INFO	[esclientleg]	eslegclient/connection.go:105	elasticsearch url: http://192.168.2.18:9200
2023-06-03T00:37:03.146-0400	INFO	[esclientleg]	eslegclient/connection.go:285	Attempting to connect to Elasticsearch version 7.17.0


Here is my filebeat.yml file : 

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the output.

#fields:
# env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.2.18:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.2.18:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true


$ filebeat setup -e 


2023-06-03T15:03:22.182-0400    INFO    template/load.go:197    Existing template will be overwritten, as overwrite is enabled.
2023-06-03T15:03:24.142-0400    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:101    add_cloud_metadata: hosting provider type not detected.
2023-06-03T15:03:39.148-0400    INFO    template/load.go:131    Try loading template filebeat-7.17.10 to Elasticsearch
2023-06-03T15:03:39.848-0400    INFO    template/load.go:123    Template with name "filebeat-7.17.10" loaded.
2023-06-03T15:03:39.848-0400    INFO    [index-management]      idxmgmt/std.go:296      Loaded index template.
2023-06-03T15:03:39.869-0400    INFO    [index-management.ilm]  ilm/std.go:126  Index Alias filebeat-7.17.10 exists already.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
2023-06-03T15:03:39.952-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-03T15:03:45.950-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-03T15:05:19.502-0400    INFO    instance/beat.go:881    Kibana dashboards successfully loaded.
Loaded dashboards
2023-06-03T15:05:19.504-0400    WARN    [cfgwarn]       instance/beat.go:606    DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
2023-06-03T15:05:19.505-0400    INFO    [esclientleg]   eslegclient/connection.go:105   elasticsearch url: http://192.168.2.18:9200
2023-06-03T15:05:19.519-0400    INFO    [esclientleg]   eslegclient/connection.go:285   Attempting to connect to Elasticsearch version 7.17.0
2023-06-03T15:05:19.522-0400    INFO    kibana/client.go:180    Kibana url: http://192.168.2.18:5601
2023-06-03T15:05:19.592-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled
2023-06-03T15:05:19.742-0400    WARN    fileset/modules.go:463  X-Pack Machine Learning is not enabled
2023-06-03T15:05:19.783-0400    ERROR   instance/beat.go:1027   Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key

Exiting: 1 error: error loading config file: invalid config: yaml: line 85: did not find expected key
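`yaml: line 85: did not find expected key` points at a parse error in filebeat.yml itself (or in one of the files it loads from `modules.d/`), not at connectivity. The most common cause is a mis-indented key or list item. An illustrative fragment (not taken from the file above; the exact message depends on the parser):

```yaml
# Broken: the second list item is indented one space deeper than the first,
# so the parser fails near this line with "did not find expected key".
processors:
  - add_host_metadata: ~
   - add_docker_metadata: ~

# Fixed: list items aligned at the same column.
processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~
```

Running `filebeat test config` will report whether the file parses cleanly and point at the offending line.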
                                                           



$ journalctl --unit  filebeat.service 


┌──(root㉿kali)-[/home/kali]                                                                                                                                                                                      
└─# journalctl  -xeu  filebeat.service                                                                                                                                                                            
Jun 03 14:59:10 kali filebeat[915379]: 2023-06-03T14:59:10.107-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 14:59:40 kali filebeat[915379]: 2023-06-03T14:59:40.100-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 15:00:10 kali filebeat[915379]: 2023-06-03T15:00:10.099-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 15:00:40 kali filebeat[915379]: 2023-06-03T15:00:40.395-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 15:01:10 kali filebeat[915379]: 2023-06-03T15:01:10.153-0400        INFO        [add_docker_metadata]        docker/watcher.go:309  >
Jun 03 15:01:10 kali filebeat[915379]: 2023-06-03T15:01:10.243-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 15:01:40 kali filebeat[915379]: 2023-06-03T15:01:40.650-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>
Jun 03 15:02:10 kali filebeat[915379]: 2023-06-03T15:02:10.097-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
Jun 03 15:02:40 kali filebeat[915379]: 2023-06-03T15:02:40.135-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      Jun 03 15:03:10 kali filebeat[915379]: 2023-06-03T15:03:10.108-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
Jun 03 15:03:40 kali filebeat[915379]: 2023-06-03T15:03:40.301-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
Jun 03 15:04:10 kali filebeat[915379]: 2023-06-03T15:04:10.165-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
Jun 03 15:04:41 kali filebeat[915379]: 2023-06-03T15:04:41.307-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
Jun 03 15:05:10 kali filebeat[915379]: 2023-06-03T15:05:10.166-0400        INFO        [monitoring]        log/log.go:184        Non-zero m>                                                                      
lines 987-1000/1000 (END)                           
```
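Side note on reading these logs: the `>` at the end of each line is journalctl's pager truncating output to the terminal width, so the actual metric values are cut off. Assuming Filebeat is running as a systemd service (as it appears to be here), the pager can be bypassed to capture full lines:

```shell
# Show the last 50 Filebeat log lines without the pager, so long lines
# are printed in full instead of being truncated with a trailing '>'.
journalctl -u filebeat --no-pager -n 50
```

Alternatively, inside the pager you can scroll right with the arrow keys to see the truncated tail of each line.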

Thanks, we can close this.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.