Logstash and the famous flushing of pipelines

Hi all,

I've got almost every module working on ELK and I'm really enjoying it. There is one caveat which I just can't get around, and it's been keeping me up until the early hours of the morning :slight_smile:

I have Filebeat on my pfSense firewall. It works REALLY well with the Suricata logs, and in filebeat.yml the output goes to Logstash (not Elasticsearch).

I then added an additional input to harvest my Snort logs. This is in the same filebeat.yml, with the output host set to IP:5044.

Doing a netstat -tulpn, I can see port 5044 open, which is correct as that's the Logstash port, and the Suricata logs are working.

On Logstash startup there are no errors on any of the lines (log level set to trace). This is on the Logstash instance (which also runs Kibana and Elasticsearch) - a server with 128 GB of memory and a Xeon CPU; heap size is 24 GB for each part of the stack (LS, K, ES).
For my Snort logs I can see the following (a quick way to confirm this flow via the Logstash API is sketched right after the list):

  1. Grok filter loaded
  2. Pipeline created
  3. Actual logs flowing (tailing the Filebeat log on pfSense)
  4. Data hitting Logstash
  5. Data being transformed as per the filter and the output
  6. The specific line showing the event at the output, and it looks 100%
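
To double-check that events really make it through the snort pipeline's filters and output, I can also query the Logstash node stats API (assuming the default API port 9600 on the Logstash host):

# per-pipeline event and plugin counters for the snort pipeline
curl -s 'http://localhost:9600/_node/stats/pipelines/snort?pretty'

If the event counters on the output plugin stay at 0 there, nothing is being handed to Elasticsearch even though the filter sees the events.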

Then, silence of the lambs - I get the dreaded Flushing Pipeline message for the snort pipeline that was just created. No matter where I search or read, I can't find anything definitive on what flushing a pipeline actually means...

What is definitive is that whenever I see that message (for any of my pipelines), no data ends up in Elasticsearch.
The only time I have ever seen data end up in ES alongside a flush message is on the "main" pipeline: it pops up now and again (Flushing Pipeline => main), but the Logstash data still ends up in ES.

Kibana has

  1. The snort pipeline (in the monitoring interface), but with 0 activity
  2. The index pattern I imported, with all the correct fields

ElasticSearch has

  1. The template for snort that I have
  2. NO indices for Snort (see the quick check below).
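
A quick way to confirm this from the Elasticsearch side (assuming the default port 9200 on the ES host):

# list any snort-* indices - this comes back empty for me
curl -s 'http://localhost:9200/_cat/indices/snort-*?v'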

On ElasticSearch logs
The last relevant line I see in the logs (at debug level) is that it creates a template for the Snort logs (set to overwrite = true). I confirmed this by querying for that template, and it is there.
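
The query I used to confirm it looks roughly like this (the template name snort-1.0 is the one that shows up in the Elasticsearch log further down; adjust host/port as needed):

# fetch the snort index template that was created
curl -s 'http://localhost:9200/_template/snort-1.0?pretty'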

On Logstash logs
Everything seems to be normal. I am tailing the logs with a grep for error and for warning, and nothing is popping up. I even sat for an hour reading through a 5-hour run of logs line by line, and nothing in there seems to be causing this - I just keep getting flushing pipeline => snort ...
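
For reference, the grep I'm running over the log is roughly this (case-insensitive so it catches both WARN and ERROR entries):

grep -iE 'error|warn' /var/log/logstash/logstash-plain.log | tail -n 50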

On pfSense, the only approved Beats package (from their repo) is 6.3.2, and it is working for the Suricata logs ... link to the pfSense package: on the pfSense repo here

My ELK stack is sitting at 6.4.2-1.

Would really appreciate help, guidance, or a simple pointer.

Please show your configuration files (Logstash and Filebeat).

Thank you for the response - sure thing:

LOGSTASH (truncated to only the relevant items; the commented-out bits are removed)

# Settings file in YAML
path.data: /var/lib/logstash
config.support_escapes: true
log.level: trace
path.logs: /var/log/logstash
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.url: ["http://<my_actual_IP_goes_here_and_is_right>:9200"]
#xpack.monitoring.elasticsearch.ssl.ca: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.elasticsearch.sniffing: true
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
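
Since the snort pipeline runs alongside main, both are declared in pipelines.yml; it looks roughly like this (the config paths below are illustrative, not copied from my actual file):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: snort
  path.config: "/etc/logstash/conf.d/snort/*.conf"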

Filebeat that runs on pFsense

#========================= Filebeat global options ============================
filebeat.config:
  modules:
    enabled: false
    path: /var/db/beats/filebeat/modules.d/*.yml
#------------------------- File prospectors --------------------------------
filebeat.inputs:

- type: log
  enabled: true
  paths:
  - /var/log/suricata/*/eve.json*
  fields_under_root: true
  fields:
    type: "suricataIDPS"
    tags: ["SuricataIDPS","JSON"]

- type: log
  enabled: true
  paths:
  - /var/log/snort/snort_pppoe023414/alert
  fields_under_root: true
  fields:
    event.type: snort
 
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["<my_actual_IP_goes_here_and_is_right>:5044"]

#---------------------------- filebeat logging -------------------------------

logging.to_files: true 
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 1
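
A quick sanity check I can run on the pfSense box itself (both subcommands should be available in Filebeat 6.x; the path to filebeat.yml is my assumption, based on the modules.d path above):

# validate the config file and test the connection to the Logstash output
filebeat test config -c /var/db/beats/filebeat/filebeat.yml
filebeat test output -c /var/db/beats/filebeat/filebeat.yml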

Additional info:

Just checking the ELK env variable that are set for the logstash process that is running

[thanks for the help logstash]$ sudo cat /proc/28572/environ | tr '\0' '\n'
[sudo] password for theuser: 
GEM_HOME=/usr/share/logstash/vendor/bundle/jruby/2.3.0
SHELL=/sbin/nologin
LS_GROUP=logstash
LS_HOME=/usr/share/logstash
LS_NICE=19
LS_JVM_OPTS=/etc/logstash/jvm.options
JAVA_OPTS=-Xms24g -Xmx24g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom
USER=logstash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
LS_OPEN_FILES=223546
PWD=/
LS_SETTINGS_DIR=/etc/logstash
LS_PIDFILE=/var/run/logstash.pid
SERVICE_NAME=logstash
LS_USER=logstash
HOME=/usr/share/logstash
LOGNAME=logstash
LS_GC_LOG_FILE=/var/log/logstash/gc.log
GEM_PATH=/usr/share/logstash/vendor/bundle/jruby/2.3.0
JAVACMD=/bin/java
LOGSTASH_HOME=/usr/share/logstash
[thanks for the help logstash]$ 

Making sure ports are open

On pfSense to talk to logstash

[2.4.4-RELEASE][howsYourFater]/root: netstat
Active Internet connections
Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 <my_actual_IP_goes_here_and_is_right>.19507         <my_actual_IP_goes_here_and_is_right>.5044          ESTABLISHED

On ELK server

[gotta love a good impossible problem to fix ~]$ sudo netstat -tulpn | grep 5044
[sudo] password for bigDawg: 
tcp6       0      0 :::5044                 :::*                    LISTEN      28572/java 

The tail I am running on the logstash logs

tail -f -s 2 /var/log/logstash/logstash-plain.log | grep --line-buffered 'logstash.pipeline.*Pushing flush onto pipeline'

The endless mindless nightmare:

[2018-10-09T19:00:38,186][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1d86d302 sleep>"}
[2018-10-09T19:00:38,186][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"snort", :thread=>"#<Thread:0x314f8d77 sleep>"}
[2018-10-09T19:00:43,327][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x1d86d302 sleep>"}

On the Suricata pipe all is good: it's being logged via the "main" pipeline and working off the index pattern logstash-*.

On monitoring-logstash - not too worried about that; I am sure it's a config error, but I will focus on it later.
My endless headache and hair loss is around the snort pipe.

Pipeline Monitoring:

A quick tail on the Elasticsearch log:

[2018-10-09T10:12:32,779][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2018-10-09T18:48:48,054][INFO ][o.e.c.m.MetaDataMappingService] [d9cPBFV] [packetbeat-6.4.2-2018.10.09/aix6oL-kRIW9G1JPaxnLaA] update_mapping [doc]
[2018-10-09T18:51:00,328][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [snort-1.0] for index patterns [snort-*]
[2018-10-09T19:00:23,630][INFO ][o.e.c.m.MetaDataIndexTemplateService] [d9cPBFV] adding template [elastiflow-3.3.0] for index patterns [elastiflow-3.3.0-*]
[2018-10-09T23:17:04,638][INFO ][o.e.c.m.MetaDataMappingService] [d9cPBFV] [logstash-2018.41/2hFFkxiQSnCUNCKPH1ny6g] update_mapping [doc]

A look at the indices - this is after weeks of rebooting, testing configs, etc. - notice there's no Snort...
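
For reference, the index list I'm looking at comes from this (assuming the default 9200 port):

curl -s 'http://localhost:9200/_cat/indices?v' | sort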

More than happy to share anything else that you might require to help solve this world-peace-level problem.

@magnusbaeck - do you see anything funky in my setup?
I have also done an update on all the plugins for Logstash, just as a "just in case" (command below).
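
For the record, the plugin update was just the stock tool (LS_HOME taken from the environment dump above):

/usr/share/logstash/bin/logstash-plugin update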

What about your Logstash pipeline configuration (with your inputs, filters, and outputs)?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.