Pushing flush onto pipeline in logs

Hi all!
I hope someone can help me, because I've dug through the entire internet without finding a solution.
Here we are:
I have a Filebeat agent running on pfSense 2.3.x (filebeat version 6.0.0-alpha3-git877f311 (amd64), libbeat 6.0.0-alpha3-git877f311). I use it to ship my Snort logs:

cat filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/snort/*/alert
      input_type: log
      document_type: SnortIDPS
output:
  logstash:
    enabled: true
    timeout: 15
    index: filebeat
    hosts: ["192.168.4.10:5000"]

So I send my events to a Logstash server, but sometimes the Filebeat logs give me this:

ERR Failed to publish events: write tcp 192.168.4.1:26869->192.168.4.10:5000: write: broken pipe

On the logstash side, here is my configuration:

input {
  beats {
    type => "beats"
    port => 5000
  }
}
output {
  if [type] == "SnortIDPS" {
    elasticsearch {
      hosts => ["localhost"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}

The Logstash debug logs keep showing this:

[2017-07-07T19:43:23,151][DEBUG][logstash.pipeline        ] filter received {"event"=>{"@timestamp"=>2017-07-07T23:43:15.453Z, "offset"=>251146, "@version"=>"1", "beat"=>{"hostname"=>"fw001", "name"=>"fw001", "version"=>"6.0.0-alpha3-git877f311"}, "host"=>"fw001", "prospector"=>{"type"=>""}, "source"=>"/var/log/snort/snort_em1_vlan536159/alert", "message"=>"07/07/17-19:06.712347 ,119,33,1,\"(http_inspect) UNESCAPED SPACE IN HTTP URI\",TCP,192.168.5.20,41590,192.168.4.2,3128,0,Unknown Traffic,3,", "type"=>"beats", "tags"=>["beats_input_codec_plain_applied"]}}
[2017-07-07T19:43:23,157][DEBUG][logstash.pipeline        ] output received {"event"=>{"@timestamp"=>2017-07-07T23:43:15.453Z, "offset"=>251146, "@version"=>"1", "beat"=>{"hostname"=>"fw001", "name"=>"fw001", "version"=>"6.0.0-alpha3-git877f311"}, "host"=>"fw001", "prospector"=>{"type"=>""}, "source"=>"/var/log/snort/snort_em1_vlan536159/alert", "message"=>"07/07/17-19:43:06.712347 ,119,33,1,\"(http_inspect) UNESCAPED SPACE IN HTTP URI\",TCP,192.168.5.20,41590,192.168.4.2,3128,0,Unknown Traffic,3,", "type"=>"beats", "tags"=>["beats_input_codec_plain_applied"]}}
[2017-07-07T19:43:26,941][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-07-07T19:43:31,942][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-07-07T19:43:36,943][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-07-07T19:43:41,944][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-07-07T19:43:46,943][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2017-07-07T19:43:51,945][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline

So it seems that Filebeat is working and Logstash receives the logs, but after that it never processes them or creates the index in Elasticsearch...

Any ideas?

Thanks!

Your events have the type "beats" and you only send events with the type "SnortIDPS" to the elasticsearch output. Remove type => "beats" from your beats input.
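If it helps, the input could then look something like this (nothing else in your config needs to change):

input {
  beats {
    port => 5000
  }
}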

As for the "broken pipe" error message I'm not sure what's up. How often does it happen? Can you capture the traffic and look for clues?

Hi and thanks for your help!

I have changed the input config as you suggested.
As for the TCP issue, it happens quite often:

2017-07-11T18:08:17-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:62064->192.170.4.10:5000: write: broken pipe
2017-07-11T18:08:18-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:62064->192.170.4.10:5000: write: broken pipe
2017-07-11T18:10:12-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:17522->192.170.4.10:5000: write: broken pipe
2017-07-11T18:10:13-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:17522->192.170.4.10:5000: write: broken pipe
2017-07-11T18:11:17-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:54089->192.170.4.10:5000: write: broken pipe
2017-07-11T18:11:18-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:54089->192.170.4.10:5000: write: broken pipe
2017-07-11T18:13:12-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:48880->192.170.4.10:5000: write: broken pipe
2017-07-11T18:13:13-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:48880->192.170.4.10:5000: write: broken pipe
2017-07-11T18:15:17-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:17651->192.170.4.10:5000: write: broken pipe
2017-07-11T18:15:18-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:17651->192.170.4.10:5000: write: broken pipe
2017-07-11T18:18:13-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:30288->192.170.4.10:5000: write: broken pipe
2017-07-11T18:18:14-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:30288->192.170.4.10:5000: write: broken pipe
2017-07-11T18:20:18-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:65417->192.170.4.10:5000: write: broken pipe
2017-07-11T18:20:19-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:65417->192.170.4.10:5000: write: broken pipe
2017-07-11T18:24:14-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:12993->192.170.4.10:5000: write: broken pipe
2017-07-11T18:24:15-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:12993->192.170.4.10:5000: write: broken pipe
2017-07-11T18:25:19-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:31732->192.170.4.10:5000: write: broken pipe
2017-07-11T18:25:20-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:31732->192.170.4.10:5000: write: broken pipe
2017-07-11T18:27:14-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:26312->192.170.4.10:5000: write: broken pipe
2017-07-11T18:27:15-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:26312->192.170.4.10:5000: write: broken pipe
2017-07-11T18:29:20-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:54909->192.170.4.10:5000: write: broken pipe
2017-07-11T18:29:21-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:54909->192.170.4.10:5000: write: broken pipe
2017-07-11T18:31:20-04:00 ERR  Failed to publish events caused by: write tcp 192.170.4.1:6027->192.170.4.10:5000: write: broken pipe
2017-07-11T18:31:21-04:00 ERR  Failed to publish events: write tcp 192.170.4.1:6027->192.170.4.10:5000: write: broken pipe

Here is a tcpdump on the logstash side:

root@pdrsec001:/etc/logstash/conf.d/patterns# tcpdump -A -vvv -i any port 5000
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes


18:37:13.471665 IP (tos 0x0, ttl 64, id 2870, offset 0, flags [DF], proto TCP (6), length 436)
    192.170.4.1.29096 > pdrsec001.digitalrat.lan.5000: Flags [P.], cksum 0x37bb (correct), seq 1537108657:1537109041, ack 1272440067, win 520, options [nop,nop,TS val 3242045221 ecr 107209794], length 384
E....6@.@..........
q...[.j.K.......7......
.=.%.c.B2W....2C...tx^..An.:.Fag'.?z...x.J.8J..h:.....)...l..).......v.v.....W}^,.7.....ne.|.7...4..6K.MU..X..... .N^.......;.E?...;.A.g..C..J.w....CL.....      9.L{....R9.c.CLR.........BS..t..}..b.g/..2...&..;zXhS2.l...s.;.p.t. .....^..Id..!.~/............y}w.P<>..6.u..........J...Q...n[&.*.F.......i...!..b.\..{.       .9.<....{.............<,.C...fu....19Q.. .b.....S.b.E...K7.'W/.....kf\...........
18:37:13.475251 IP (tos 0x0, ttl 64, id 191, offset 0, flags [DF], proto TCP (6), length 58)
    pdrsec001.digitalrat.lan.5000 > 192.170.4.1.29096: Flags [P.], cksum 0x898c (incorrect -> 0xac3d), seq 1:7, ack 384, win 243, options [nop,nop,TS val 107223354 ecr 3242045221], length 6
E..:..@.@......
......q.K...[.l1...........
.d.:.=.%2A....
18:37:13.475482 IP (tos 0x0, ttl 64, id 27559, offset 0, flags [DF], proto TCP (6), length 52)
    192.170.4.1.29096 > pdrsec001.digitalrat.lan.5000: Flags [.], cksum 0xdd6e (correct), seq 384, ack 7, win 520, options [nop,nop,TS val 3242045225 ecr 107223354], length 0
E..4k.@.@.E........
q...[.l1K..     .....n.....
.=.).d.:

A telnet to the port works fine, and I see no denies on the pfSense box...

I don't know if it's because of the 6.x beta version of the client on FreeBSD. I got this config/tutorial from a website where the author uses the 5.x version, but I had to compile 6.x myself since a FreeBSD build isn't available anywhere...

Thanks

[2.3.4-RELEASE][admin@pdrfw001.digitalrat.lan]/etc/filebeat: ./filebeat -c filebeat.yml -e -d "*"
2017/07/11 22:44:29.269296 beat.go:470: INFO Home path: [/etc/filebeat] Config path: [/etc/filebeat] Data path: [/etc/filebeat/data] Logs path: [/etc/filebeat/logs]
2017/07/11 22:44:29.269348 metrics.go:23: INFO Metrics logging every 30s
2017/07/11 22:44:29.269363 beat.go:495: INFO Beat metadata path: /etc/filebeat/data/meta.json
2017/07/11 22:44:29.269559 beat.go:477: INFO Beat UUID: 8fa5649f-fa22-4810-b750-918b0973844a
2017/07/11 22:44:29.269639 beat.go:239: INFO Setup Beat: filebeat; Version: 6.0.0-alpha3-git877f311
2017/07/11 22:44:29.269688 processor.go:49: DBG Processors:
2017/07/11 22:44:29.269731 beat.go:250: DBG Initializing output plugins
2017/07/11 22:44:29.270251 logger.go:18: DBG start pipeline event consumer
2017/07/11 22:44:29.270430 pipeline.go:71: INFO Publisher name: pdrfw001.digitalrat.lan
2017/07/11 22:44:29.270492 publish.go:108: INFO Publisher name: pdrfw001.digitalrat.lan
2017/07/11 22:44:29.270801 modules.go:95: ERR Not loading modules. Module directory not found: /etc/filebeat/module
2017/07/11 22:44:29.271063 beat.go:315: INFO filebeat start running.
2017/07/11 22:44:29.271252 registrar.go:83: INFO Registry file set to: /etc/filebeat/data/registry
2017/07/11 22:44:29.271392 registrar.go:104: INFO Loading registrar data from /etc/filebeat/data/registry
2017/07/11 22:44:29.272020 registrar.go:115: INFO States Loaded from registrar: 2
2017/07/11 22:44:29.272190 registrar.go:148: INFO Starting Registrar
2017/07/11 22:44:29.272187 filebeat.go:232: WARN Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2017/07/11 22:44:29.272269 crawler.go:43: INFO Loading Prospectors: 1
2017/07/11 22:44:29.272301 sync.go:41: INFO Start sending events to output
2017/07/11 22:44:29.272321 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/07/11 22:44:29.272426 log.go:171: WARN DEPRECATED: input_type prospector config is deprecated. Use type instead. Will be removed in version: 6.0.0
2017/07/11 22:44:29.272960 config.go:137: DBG recursive glob disabled
2017/07/11 22:44:29.273487 processor.go:49: DBG Processors:
2017/07/11 22:44:29.273541 prospector.go:86: DBG exclude_files: []
2017/07/11 22:44:29.273589 state.go:81: DBG New state added for /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:29.273660 state.go:81: DBG New state added for /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:29.273715 prospector.go:107: DBG Prospector with previous states loaded: 2
2017/07/11 22:44:29.273771 prospector.go:77: DBG File Configs: [/var/log/snort/*/alert /var/log/snort/*/alert]
2017/07/11 22:44:29.273815 prospector.go:99: INFO Starting prospector of type: log; id: 8105280962440520348
2017/07/11 22:44:29.273868 crawler.go:73: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017/07/11 22:44:29.273960 prospector.go:113: DBG Start next scan
2017/07/11 22:44:29.274943 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:29.275004 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_em1_vlan536159/alert, offset: 188421
2017/07/11 22:44:29.275055 prospector.go:429: DBG Resuming harvesting of file: /var/log/snort/snort_em1_vlan536159/alert, offset: 188421
2017/07/11 22:44:29.275689 processor.go:49: DBG Processors:
2017/07/11 22:44:29.275789 harvester.go:422: DBG Set previous offset for file: /var/log/snort/snort_em1_vlan536159/alert. Offset: 188421
2017/07/11 22:44:29.275858 harvester.go:412: DBG Setting offset for file: /var/log/snort/snort_em1_vlan536159/alert. Offset: 188421
2017/07/11 22:44:29.275923 harvester.go:324: DBG Update state: /var/log/snort/snort_em1_vlan536159/alert, offset: 188421
2017/07/11 22:44:29.275999 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:29.276047 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_pppoe064290/alert, offset: 14021
2017/07/11 22:44:29.276074 harvester.go:201: INFO Harvester started for file: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:29.276085 prospector.go:474: DBG File didn't change: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:29.276226 prospector.go:134: DBG Prospector states cleaned up. Before: 2, After: 2
2017/07/11 22:44:29.276522 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:30.276910 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:32.289230 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:34.292434 spooler.go:88: DBG Flushing spooler because of timeout. Events flushed: 4
2017/07/11 22:44:34.293129 client.go:203: DBG Publish: {
  "@timestamp": "2017-07-11T22:44:29.276Z",
  "_event_metadata": {
    "Fields": null,
    "FieldsUnderRoot": false,
    "Tags": [
      "snort_ids"
    ]
  },
  "message": "07/11/17-18:44:09.574125 ,119,33,1,\"(http_inspect) UNESCAPED SPACE IN HTTP URI\",TCP,192.170.5.20,29574,192.170.4.2,3128,0,Unknown Traffic,3,",
  "offset": 188562,
  "prospector": {
    "type": ""
  },
  "source": "/var/log/snort/snort_em1_vlan536159/alert"
}
2017/07/11 22:44:34.293502 processor.go:247: DBG Publish event: {
  "@timestamp": "2017-07-11T22:44:29.276Z",
  "@metadata": {
    "beat": "\u003cnot set\u003e",
    "type": "doc"
  },
  "source": "/var/log/snort/snort_em1_vlan536159/alert",
  "beat": {
    "hostname": "pdrfw001.digitalrat.lan",
    "version": "6.0.0-alpha3-git877f311",
    "name": "pdrfw001.digitalrat.lan"
  },
  "tags": [
    "snort_ids"
  ],
  "offset": 188562,
  "message": "07/11/17-18:44:09.574125 ,119,33,1,\"(http_inspect) UNESCAPED SPACE IN HTTP URI\",TCP,192.170.5.20,29574,192.170.4.2,3128,0,Unknown Traffic,3,",
  "prospector": {
    "type": ""
  }
}

2017/07/11 22:44:34.293593 logger.go:29: DBG insert event: idx=0, seq=1
2017/07/11 22:44:34.293654 logger.go:18: DBG active events: 0
2017/07/11 22:44:34.293770 sync.go:45: DBG connect
2017/07/11 22:44:34.293927 logger.go:18: DBG active events: 1
2017/07/11 22:44:34.294010 logger.go:29: DBG no event available in active region
2017/07/11 22:44:34.294171 logger.go:18: DBG active events: 1
2017/07/11 22:44:34.294238 logger.go:29: DBG no event available in active region
2017/07/11 22:44:34.294436 sync.go:133: DBG Try to publish 1 events to logstash with window size 10
2017/07/11 22:44:34.300285 sync.go:101: DBG 1 events out of 0 events sent to logstash. Continue sending
2017/07/11 22:44:34.300375 logger.go:29: DBG ackloop: receive ack [0: 0, 1]
2017/07/11 22:44:34.300425 logger.go:18: DBG handle ACKs: 1
2017/07/11 22:44:34.300471 logger.go:29: DBG try ack index: (idx=0, i=0, seq=1)
2017/07/11 22:44:34.300512 logger.go:29: DBG broker ACK events: count=1, start-seq=1, end-seq=1
2017/07/11 22:44:34.300567 logger.go:18: DBG ackloop: return ack to broker loop:1
2017/07/11 22:44:34.300606 logger.go:18: DBG ackloop: done send ack
2017/07/11 22:44:34.300640 sync.go:70: DBG Events sent: 4
2017/07/11 22:44:34.300667 logger.go:18: DBG active events: 0
2017/07/11 22:44:34.300717 registrar.go:187: DBG Processing 4 events
2017/07/11 22:44:34.300767 logger.go:29: DBG no event available in active region
2017/07/11 22:44:34.300853 registrar.go:173: DBG Registrar states cleaned up. Before: 2, After: 2
2017/07/11 22:44:34.300894 registrar.go:209: DBG Write registry file: /etc/filebeat/data/registry
2017/07/11 22:44:34.307979 registrar.go:234: DBG Registry file updated. 2 states written.
2017/07/11 22:44:36.293921 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:39.293844 prospector.go:137: DBG Run prospector
2017/07/11 22:44:39.293908 prospector.go:113: DBG Start next scan
2017/07/11 22:44:39.293902 spooler.go:88: DBG Flushing spooler because of timeout. Events flushed: 0
2017/07/11 22:44:39.294833 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:39.294899 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_em1_vlan536159/alert, offset: 188562
2017/07/11 22:44:39.294938 prospector.go:472: DBG Harvester for file is still running: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:39.294983 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:39.295030 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_pppoe064290/alert, offset: 14021
2017/07/11 22:44:39.295068 prospector.go:474: DBG File didn't change: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:39.295112 prospector.go:134: DBG Prospector states cleaned up. Before: 2, After: 2
2017/07/11 22:44:44.295871 spooler.go:88: DBG Flushing spooler because of timeout. Events flushed: 0
2017/07/11 22:44:44.295883 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:49.296317 prospector.go:137: DBG Run prospector
2017/07/11 22:44:49.296381 prospector.go:113: DBG Start next scan
2017/07/11 22:44:49.296417 spooler.go:88: DBG Flushing spooler because of timeout. Events flushed: 0
2017/07/11 22:44:49.297238 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:49.297301 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_pppoe064290/alert, offset: 14021
2017/07/11 22:44:49.297340 prospector.go:474: DBG File didn't change: /var/log/snort/snort_pppoe064290/alert
2017/07/11 22:44:49.297386 prospector.go:332: DBG Check file for harvesting: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:49.297436 prospector.go:420: DBG Update existing file for harvesting: /var/log/snort/snort_em1_vlan536159/alert, offset: 188562
2017/07/11 22:44:49.297473 prospector.go:472: DBG Harvester for file is still running: /var/log/snort/snort_em1_vlan536159/alert
2017/07/11 22:44:49.297519 prospector.go:134: DBG Prospector states cleaned up. Before: 2, After: 2
2017/07/11 22:44:54.337139 spooler.go:88: DBG Flushing spooler because of timeout. Events flushed: 0
2017/07/11 22:44:54.337148 log.go:86: DBG End of file reached: /var/log/snort/snort_em1_vlan536159/alert; Backoff now.
2017/07/11 22:44:59.270993 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=5242768 beat.memstats.memory_alloc=2734376 beat.memstats.memory_total=4497208 filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 output.events.acked=1 output.logstash.events.acked=1 output.logstash.publishEvents.call.count=1 output.logstash.read.bytes=6 output.logstash.write.bytes=397 output.write.bytes=397 publish.events=4 publisher.events.count=1 registrar.states.current=2 registrar.states.update=4 registrar.writes=1

Hi,
I checked yesterday and saw that I had made a mistake in the output. During my troubleshooting I kept only the beats input and the elasticsearch output, with no condition (no if) at all. The data from the Snort Filebeat prospector makes it into Elasticsearch, so that's good news. Now I see that the tags field contained two entries: one that I set at the Filebeat level ("snort_ids") and another one added automatically by the system itself (not sure if it's Filebeat or the Logstash plugin). Then I realized that most of my conditions used the if [tags] == "snort_ids" syntax and therefore never matched. I changed that to if "snort_ids" in [tags] and things seem to work much better. I haven't checked whether the TCP write errors went away because of that, but I'm pretty sure this helps. I will check later today to see if events still arrive and, of course, whether I still see those errors.
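Roughly, the output conditional now looks like this (hosts and the elasticsearch options are the same as in my original config, just trimmed down here):

output {
  if "snort_ids" in [tags] {
    elasticsearch {
      hosts => ["localhost"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}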
Thanks!

Hi,
So there is something weird... I still get the TCP write errors, but I can see events coming in.
Still, my Filebeat config should read the two folders that contain the Snort alert file (/var/log/snort/*/alert); the wildcard is there because I have two interfaces where Snort listens (em1 and pppoe).
For some reason, only the alerts on em1 are sent to Logstash. I can see the last entry for pppoe in the registry file:

cat registry
[{"source":"/var/log/snort/snort_pppoe064290/alert","offset":14299,"timestamp":"2017-07-12T19:30:02.031433334-04:00","ttl":-1,"type":"log","FileStateOS":{"inode":9791543,"device":89}},{"source":"/var/log/snort/snort_em1_vlan536159/alert","offset":311094,"timestamp":"2017-07-12T20:37:12.643164319-04:00","ttl":-1,"type":"log","FileStateOS":{"inode":9791434,"device":89}}]

And it never reaches Logstash, from what I can see...

OK, it seems to work after all; forget my previous message :wink:
