Thank you for the response - sure thing:
LOGSTASH (truncated - only the relevant items shown, with the commented-out bits removed)
# Settings file in YAML
path.data: /var/lib/logstash
config.support_escapes: true
log.level: trace
path.logs: /var/log/logstash
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.url: ["http://<my_actual_IP_goes_here_and_is_right>:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: true
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
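As a sanity check on the monitoring credentials (not part of the config - just a one-off check I can run from the ELK box, with the IP and password redacted the same way), a plain authenticated request should come back with the cluster banner rather than a 401:
curl -u logstash_system:password http://<my_actual_IP_goes_here_and_is_right>:9200/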
Filebeat that runs on pfSense
#========================= Filebeat global options ============================
filebeat.config:
  modules:
    enabled: false
    path: /var/db/beats/filebeat/modules.d/*.yml
#------------------------- File prospectors --------------------------------
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/suricata/*/eve.json*
  fields_under_root: true
  fields:
    type: "suricataIDPS"
  tags: ["SuricataIDPS","JSON"]

- type: log
  enabled: true
  paths:
    - /var/log/snort/snort_pppoe023414/alert
  fields_under_root: true
  fields:
    event.type: snort
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["<my_actual_IP_goes_here_and_is_right>:5044"]
#---------------------------- filebeat logging -------------------------------
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 1
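For what it's worth, I can also run Filebeat's built-in self-tests on the pfSense box to confirm it parses this file and can reach the Logstash endpoint (the config path here is my guess at where the pfSense package keeps it - adjust to wherever filebeat.yml actually lives):
filebeat test config -c /usr/local/etc/filebeat.yml
filebeat test output -c /usr/local/etc/filebeat.yml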
Additional info:
Just checking the environment variables that are set for the running logstash process:
[thanks for the help logstash]$ sudo cat /proc/28572/environ | tr '\0' '\n'
[sudo] password for theuser:
GEM_HOME=/usr/share/logstash/vendor/bundle/jruby/2.3.0
SHELL=/sbin/nologin
LS_GROUP=logstash
LS_HOME=/usr/share/logstash
LS_NICE=19
LS_JVM_OPTS=/etc/logstash/jvm.options
JAVA_OPTS=-Xms24g -Xmx24g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom
USER=logstash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
LS_OPEN_FILES=223546
PWD=/
LS_SETTINGS_DIR=/etc/logstash
LS_PIDFILE=/var/run/logstash.pid
SERVICE_NAME=logstash
LS_USER=logstash
HOME=/usr/share/logstash
LOGNAME=logstash
LS_GC_LOG_FILE=/var/log/logstash/gc.log
GEM_PATH=/usr/share/logstash/vendor/bundle/jruby/2.3.0
JAVACMD=/bin/java
LOGSTASH_HOME=/usr/share/logstash
[thanks for the help logstash]$
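The 24g heap in JAVA_OPTS matches what's in jvm.options; if it helps, the Logstash API on 9600 should report what the running JVM actually picked up:
curl -s 'http://localhost:9600/_node/jvm?pretty'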
Making sure ports are open
On pfSense, talking to logstash:
[2.4.4-RELEASE][howsYourFater]/root: netstat
Active Internet connections
Proto Recv-Q Send-Q Local Address Foreign Address (state)
tcp4       0      0  <my_actual_IP_goes_here_and_is_right>.19507   <my_actual_IP_goes_here_and_is_right>.5044   ESTABLISHED
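I can also watch Filebeat's own log on pfSense (same path/name as in the logging.files section above) for publish or connection errors around the snort input:
tail -f /var/log/filebeat/filebeat.log | grep -i -E 'error|retry|snort'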
On the ELK server:
[gotta love a good impossible problem to fix ~]$ sudo netstat -tulpn | grep 5044
[sudo] password for bigDawg:
tcp6 0 0 :::5044 :::* LISTEN 28572/java
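And to confirm beats traffic is actually flowing in (not just that the port is listening), a quick capture on the ELK server:
sudo tcpdump -nn -i any port 5044 -c 20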
The tail I am running on the logstash logs:
tail -f -s 2 /var/log/logstash/logstash-plain.log | grep --line-buffered 'logstash.pipeline.*Pushing flush onto pipeline'
The endless mindless nightmare:
[2018-10-09T19:00:38,186][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1d86d302 sleep>"}
[2018-10-09T19:00:38,186][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"snort", :thread=>"#<Thread:0x314f8d77 sleep>"}
[2018-10-09T19:00:43,327][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1d86d302 sleep>"}
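Side note: rather than leaving log.level: trace on globally, I believe the Logstash logging API can raise just the pipeline logger at runtime (a sketch only - 9600 is the default API port):
curl -XPUT 'http://localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d '{"logger.logstash.pipeline": "DEBUG"}'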
On the suricata pipe all is good - it's being logged into the "main" pipe and working off the logstash-* index pattern.
On monitoring-logstash - not too worried about that; I'm sure it's a config error, but I will focus on that later.
My endless headache and hair loss is around the snort pipe.
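To prove whether events are even reaching the snort pipe (as opposed to dying in a filter or the elasticsearch output), my next step is to temporarily drop a debug output into that pipeline - a sketch only, not my actual pipeline file:
output {
  # temporary: dump everything that reaches the snort pipeline into logstash's own log
  stdout { codec => rubydebug }
}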
Pipeline Monitoring:
A quick tail on the Elasticsearch log:
[2018-10-09T10:12:32,779][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2018-10-09T18:48:48,054][INFO ][o.e.c.m.MetaDataMappingService] [d9cPBFV] [packetbeat-6.4.2-2018.10.09/aix6oL-kRIW9G1JPaxnLaA] update_mapping [doc]
[2018-10-09T18:51:00,328][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [snort-1.0] for index patterns [snort-*]
[2018-10-09T19:00:23,630][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-10-09T23:17:04,638][INFO ][o.e.c.m.MetaDataMappingService] [d9cPBFV] [logstash-2018.41/2hFFkxiQSnCUNCKPH1ny6g] update_mapping [doc]
A look at the indices - this is after weeks of rebooting, testing configs, etc. - notice there is no snort index....
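(For completeness, the command-line version of that check - user/password here are placeholders for whatever admin account applies:)
curl -s -u user:password 'http://<my_actual_IP_goes_here_and_is_right>:9200/_cat/indices?v' | grep -i snort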
More than happy to share anything else that you might require to help solve this - at this point it feels like brokering world peace.