Hello!
Please forgive any "syntax errors," as I am a complete newb to the whole Elastic Stack.
I am working on getting the Elastic Stack running for my small IT business. I have a few servers whose network traffic I am interested in monitoring, along with the IoT devices I have connected. I have a Mikrotik router and plan on sending IPFIX data from it for viewing in Elastic (via Kibana?).
I have the Elastic Stack (Elasticsearch 8.6, Kibana, Filebeat/Logstash) set up on Ubuntu 20.04.5 LTS in a Proxmox container. The container is set up to use 4 processors and up to 8 GB of RAM.
So far (after a full day of fighting it), I am able to log into Elastic with xpack security and everything (this was a challenge for me), even though I'm using the very insecure method of including the elastic password wherever it was necessary. I'll have to work on figuring out how to use the keystores after I can actually get some data in.
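(Side note for my future self: from what I've read in the Filebeat docs, the keystore workflow looks roughly like this. Untested on my end, so treat it as a sketch; "ES_PWD" is just a placeholder name I picked, not anything Filebeat requires.)

```shell
# Create the keystore, then store the Elasticsearch password under a chosen name
filebeat keystore create
filebeat keystore add ES_PWD     # prompts interactively for the secret value
filebeat keystore list           # confirm the key is stored
```

Then filebeat.yml can reference it as password: "${ES_PWD}" instead of the literal secret.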
Right now, I am trying to set up Logstash. I was able to get Filebeat installed and enabled both the logstash and netflow modules with the following configurations. PS: I know all of the sensitive values are here, but as I mentioned, once this is running it'll be nuked and set up with proper security:
Filebeat Config:
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.0.0.5:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "c3rT3lqXkVFsQDFvWyVm"
  ssl:
    enabled: true
    ca_trusted_fingerprint: "8ACB77F40785143D43C142EBE040BEF83A555869BC1F83E8FE1AB3613DCABEFB"
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
Netflow Module Config:
# Module: netflow
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.6/filebeat-module-netflow.html
- module: netflow
  log:
    enabled: true
    var:
      netflow_host: 10.0.0.5
      netflow_port: 2055
      # internal_networks specifies which networks are considered internal or private
      # you can specify either a CIDR block or any of the special named ranges listed
      # at: https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html#condition-network
      internal_networks:
        - private
Logstash Module Config (I also have the elasticsearch module enabled, shown below it):
# Module: logstash
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.6/filebeat-module-logstash.html
- module: logstash
  # logs
  log:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Slow logs
  slowlog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

- module: elasticsearch
  server:
    enabled: true
  slowlog:
    enabled: true
When I use systemctl to start Filebeat, it just restarts constantly, and I can't find anything in the logs explaining what's happening. This is the output from: while true; do systemctl status filebeat | grep Active:; sleep 1; done
Notice that the uptime resets every few seconds as the daemon restarts.
Active: active (running) since Sat 2023-03-11 21:38:45 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:38:45 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:38:48 EST; 172ms ago
Active: active (running) since Sat 2023-03-11 21:38:48 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:38:48 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:38:48 EST; 3s ago
Active: active (running) since Sat 2023-03-11 21:38:52 EST; 717ms ago
Active: active (running) since Sat 2023-03-11 21:38:52 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:38:52 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:38:55 EST; 247ms ago
Active: active (running) since Sat 2023-03-11 21:38:55 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:38:55 EST; 2s ago
Active: activating (auto-restart) (Result: exit-code) since Sat 2023-03-11 21:38:59 EST; 58ms ago
Active: active (running) since Sat 2023-03-11 21:38:59 EST; 786ms ago
Active: active (running) since Sat 2023-03-11 21:38:59 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:38:59 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:02 EST; 318ms ago
Active: active (running) since Sat 2023-03-11 21:39:02 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:02 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:06 EST; 4ms ago
Active: active (running) since Sat 2023-03-11 21:39:06 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:06 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:06 EST; 3s ago
Active: active (running) since Sat 2023-03-11 21:39:09 EST; 643ms ago
Active: active (running) since Sat 2023-03-11 21:39:09 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:09 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:13 EST; 173ms ago
Active: active (running) since Sat 2023-03-11 21:39:13 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:13 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:13 EST; 3s ago
Active: active (running) since Sat 2023-03-11 21:39:16 EST; 718ms ago
Active: active (running) since Sat 2023-03-11 21:39:16 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:16 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:20 EST; 246ms ago
Active: active (running) since Sat 2023-03-11 21:39:20 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:20 EST; 2s ago
Active: activating (auto-restart) (Result: exit-code) since Sat 2023-03-11 21:39:23 EST; 61ms ago
Active: active (running) since Sat 2023-03-11 21:39:23 EST; 784ms ago
Active: active (running) since Sat 2023-03-11 21:39:23 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:23 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:27 EST; 315ms ago
Active: active (running) since Sat 2023-03-11 21:39:27 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:27 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:30 EST; 3ms ago
Active: active (running) since Sat 2023-03-11 21:39:30 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:30 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:30 EST; 3s ago
Active: active (running) since Sat 2023-03-11 21:39:33 EST; 642ms ago
Active: active (running) since Sat 2023-03-11 21:39:33 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:33 EST; 2s ago
Active: active (running) since Sat 2023-03-11 21:39:37 EST; 176ms ago
Active: active (running) since Sat 2023-03-11 21:39:37 EST; 1s ago
Active: active (running) since Sat 2023-03-11 21:39:37 EST; 2s ago
^C
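These are the commands I plan to try next to surface the actual exit error (assuming the stock systemd unit name and that the filebeat binary is on PATH; corrections welcome):

```shell
# Last 50 journal lines for the crashing unit, including the exit status
journalctl -u filebeat.service -n 50 --no-pager

# Filebeat's built-in self checks: validate filebeat.yml, then test the
# connection to the configured Elasticsearch output
filebeat test config -e
filebeat test output
```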
I made one egregious screw-up and deleted the Filebeat log: I was getting confused about which one was the latest and just wanted a fresh one (which didn't happen). I did notice that the latest two logs were much smaller than the initial one I was looking at. This entry was the only error I saw in that log:
{"log.level":"error","@timestamp":"2023-03-11T20:37:47.396-0500","log.origin":{"file.name":"cfgfile/reload.go","file.line":258},"message":"Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.vpcflow' (source:'/etc/filebeat/modules.d/gcp.yml.disabled')","service.name":"filebeat","ecs.version":"1.6.0"}
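Based on that error, I suspect one of the files under modules.d doesn't parse, even though gcp.yml.disabled should be, well, disabled. This is a quick check I put together to see which module files are at least valid YAML (a sketch: it assumes python3 with PyYAML is installed, and MODULES_DIR defaults to the deb/rpm path, so adjust it for other installs):

```shell
# Flag any module file under modules.d that fails to parse as YAML.
# MODULES_DIR is an assumption (deb/rpm default); override it for other installs.
MODULES_DIR="${MODULES_DIR:-/etc/filebeat/modules.d}"
for f in "$MODULES_DIR"/*.yml*; do
  [ -e "$f" ] || continue   # no matches: the glob stays literal, so skip it
  if python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' "$f" 2>/dev/null; then
    echo "OK:      $f"
  else
    echo "BROKEN:  $f"
  fi
done
```

A BROKEN line would at least tell me which file to diff against the packaged original.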
I am not sure what other information I can provide to help. Please let me know if any further information is necessary and how to get it (still newbing over here). Thanks in advance to anyone who made it this far down the post. I was trying to be as thorough as possible.