Logstash vs Winlogbeat from external machine

I've been through the posts but have yet to find a solution.

I've opened port 5044 in the firewall on both the sending client and the receiving server. I can telnet to the Logstash server on port 5044.

But I get the following in the Winlogbeat log.

Winlogbeat log from the remote machine:

17T12:16:16.1829183Z","uptime":"1m30.0542879s","uptime_ms":"90054287"}
2017-05-17T14:18:16+02:00 INFO Non-zero metrics in the last 30s: uptime={"server_time":"2017-05-17T12:18:16.2376954Z","start_time":"2017-05-17T12:16:16.1829183Z","uptime":"2m0.0547771s","uptime_ms":"120054777"}


2017-05-17T14:18:27+02:00 ERR Failed to publish events caused by: write tcp 172.16.128.11:11937->172.17.20.201:5044: wsasend: An existing connection was forcibly closed by the remote host.
2017-05-17T14:18:27+02:00 INFO Error publishing events (retrying): write tcp 172.16.128.11:11937->172.17.20.201:5044: wsasend: An existing connection was forcibly closed by the remote host.


2017-05-17T14:18:28+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 1 events
2017-05-17T14:18:29+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 1 events
2017-05-17T14:18:46+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=3 libbeat.logstash.publish.read_bytes=12 libbeat.logstash.publish.write_bytes=1974 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=2 libbeat.logstash.published_but_not_acked_events=1 libbeat.publisher.published_events=2 msg_file_cache.Microsoft-Windows-Sysmon/OperationalHits=2 published_events.Microsoft-Windows-Sysmon/Operational=2 published_events.total=2 uptime={"server_time":"2017-05-17T12:18:46.2367336Z","start_time":"2017-05-17T12:16:16.1829183Z","uptime":"2m30.0538153s","uptime_ms":"150053815"}
2017-05-17T14:19:15+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 1 events

Logstash is running fine on the receiving machine and indexing firewall logs like a champ.
If I install Winlogbeat on the local machine that runs Elasticsearch, Logstash and Kibana, it also does not work. However, if I point Winlogbeat directly at Elasticsearch (i.e. localhost:9200), boom, there is data?
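
For reference, the relevant part of my winlogbeat.yml looks something like this (simplified; only one output is enabled at a time):

# ships to Logstash on the ELK server; this is the path that fails
output.logstash:
  hosts: ["172.17.20.201:5044"]

# shipping straight to Elasticsearch instead works fine
#output.elasticsearch:
#  hosts: ["localhost:9200"]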

Winlogbeat log from the local machine:

libbeat.logstash.publish.write_bytes=2927 libbeat.logstash.published_but_not_acked_events=2 published_events.total=3
2017-05-17T14:15:56+02:00 INFO No non-zero metrics in the last 30s


2017-05-17T14:16:15+02:00 ERR Failed to publish events caused by: write tcp 127.0.0.1:63796->127.0.0.1:5044: wsasend: An existing connection was forcibly closed by the remote host.
2017-05-17T14:16:15+02:00 INFO Error publishing events (retrying): write tcp 127.0.0.1:63796->127.0.0.1:5044: wsasend: An existing connection was forcibly closed by the remote host.


2017-05-17T14:16:16+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 2 events
2017-05-17T14:16:26+02:00 INFO Non-zero metrics in the last 30s: published_events.Microsoft-Windows-Sysmon/Operational=2 libbeat.logstash.publish.read_bytes=12 libbeat.logstash.published_and_acked_events=2 published_events.total=2 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.published_but_not_acked_events=2 libbeat.logstash.publish.write_errors=1 libbeat.logstash.publish.write_bytes=1648 libbeat.publisher.published_events=2 msg_file_cache.Microsoft-Windows-Sysmon/OperationalHits=2
2017-05-17T14:16:56+02:00 INFO No non-zero metrics in the last 30s
2017-05-17T14:17:13+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 2 events
2017-05-17T14:17:18+02:00 INFO EventLog[Microsoft-Windows-Sysmon/Operational] Successfully published 3 events
2017-05-17T14:17:26+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.write_bytes=3232 libbeat.publisher.published_events=5 libbeat.logstash.publish.read_bytes=12 libbeat.logstash.call_count.PublishEvents=2 published_events.total=5

What gives?

I have previously tried with X-Pack installed and got nowhere, so I moved back to my "old" setup, where the firewall log input into Logstash was already running. Same result as the ELK stack with X-Pack activated.

The firewall on the ELK server is open on 5044 TCP/UDP.
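
For the record, the rules were added with something along these lines (the rule names are just labels I picked):

netsh advfirewall firewall add rule name="Logstash 5044 TCP" dir=in action=allow protocol=TCP localport=5044
netsh advfirewall firewall add rule name="Logstash 5044 UDP" dir=in action=allow protocol=UDP localport=5044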

netstat output on the server running Logstash:

C:\Elk\nssm\win64>netstat -np TCP | find "5044"
TCP 127.0.0.1:50439 127.0.0.1:50440 TIME_WAIT
TCP 127.0.0.1:50440 127.0.0.1:50441 TIME_WAIT
TCP 127.0.0.1:50441 127.0.0.1:50442 TIME_WAIT
TCP 127.0.0.1:50442 127.0.0.1:50443 TIME_WAIT
TCP 127.0.0.1:50443 127.0.0.1:50444 TIME_WAIT
TCP 127.0.0.1:50444 127.0.0.1:50445 TIME_WAIT
TCP 127.0.0.1:50445 127.0.0.1:50446 TIME_WAIT
TCP 127.0.0.1:50446 127.0.0.1:50447 TIME_WAIT
TCP 127.0.0.1:50447 127.0.0.1:50448 TIME_WAIT
TCP 127.0.0.1:50448 127.0.0.1:50449 TIME_WAIT
TCP 127.0.0.1:50449 127.0.0.1:50450 TIME_WAIT
TCP 127.0.0.1:55043 127.0.0.1:55044 TIME_WAIT
TCP 127.0.0.1:55044 127.0.0.1:55045 TIME_WAIT
TCP 127.0.0.1:65043 127.0.0.1:65044 TIME_WAIT

TCP XXX.17.20.XXX:5044 XXX.16.XXX.11:12098 ESTABLISHED

The ESTABLISHED connection shown is the remote machine (the TIME_WAIT lines only match because their port numbers contain "5044").

I have a feeling I'm missing something totally obvious here :frowning:

Should I post the Logstash/Winlogbeat config files too?

Make sure your SSL configuration is symmetrical, i.e. that it's either enabled in both Logstash and Winlogbeat or disabled in both places. Yes, posting your configuration files would be helpful. Make sure you format them as preformatted text (e.g. using the </> toolbar button).
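
For example, a symmetrical TLS setup looks roughly like this (the certificate paths below are placeholders, not defaults). On the Logstash side:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/path/to/logstash.crt"
    ssl_key => "/path/to/logstash.key"
  }
}

And the matching piece in winlogbeat.yml:

output.logstash:
  hosts: ["your-logstash-host:5044"]
  ssl.certificate_authorities: ["/path/to/ca.crt"]

A mismatch (TLS enabled on one side only) typically shows up as exactly the kind of "connection forcibly closed" write errors you're seeing.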

Hi Magnus,

I solved it. It turned out I had made a mistake in the output part of the Logstash config. I initially changed the input to tcp instead of beats in Logstash, and this got the data into Elasticsearch: total gibberish, but data nonetheless. I then took a barebones Beats-to-Logstash config, sketched below, and got readable data into Logstash.
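
That barebones test config was along these lines (reconstructed from memory, with a stdout output for eyeballing events; treat it as a sketch rather than my exact file):

input {
  beats {
    port => 5044
  }
}

output {
  # print decoded events to the console so you can check they are readable
  stdout { codec => rubydebug }
}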

Following this, I made the changes to the output section. Here is the working config:

input {
  # sysmon via winlogbeat
  beats {
    port => 5044
  }

  # firewall logs
  udp {
    port => 514
    type => "cisco-asa"
  }

  # netscaler logs
  #udp {
  #  port => 600
  #  type => "Netscaler"
  #}
}
#########################
filter {

  # netscaler filter
  if [type] == "Netscaler" {
    grok {
      break_on_match => true
      match => [
        "message", "<%{POSINT:syslog_pri}> %{DATE_US}:%{TIME} GMT %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:netscaler_message} : %{DATA} %{IP:source_ip}:%{POSINT:source_port} - %{DATA} %{IP:vserver_ip}:%{POSINT:vserver_port} - %{DATA} %{IP:nat_ip}:%{POSINT:nat_port} - %{DATA} %{IP:destination_ip}:%{POSINT:destination_port} - %{DATA} %{DATE_US:DELINK_DATE}:%{TIME:DELINK_TIME} GMT - %{DATA} %{POSINT:total_bytes_sent} - %{DATA} %{POSINT:total_bytes_recv}",
        "message", "<%{POSINT:syslog_pri}> %{DATE_US}:%{TIME} GMT %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:netscaler_message} : %{DATA} %{IP:source_ip}:%{POSINT:source_port} - %{DATA} %{IP:destination_ip}:%{POSINT:destination_port} - %{DATA} %{DATE_US:START_DATE}:%{TIME:START_TIME} GMT - %{DATA} %{DATE_US:END_DATE}:%{TIME:END_TIME} GMT - %{DATA} %{POSINT:total_bytes_sent} - %{DATA} %{POSINT:total_bytes_recv}",
        "message", "<%{POSINT:syslog_pri}> %{DATE_US}:%{TIME} GMT %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:netscaler_message} : %{DATA} %{INT:netscaler_spcbid} - %{DATA} %{IP:clientip} - %{DATA} %{INT:netscaler_client_port} - %{DATA} %{IP:netscaler_vserver_ip} - %{DATA} %{INT:netscaler_vserver_port} %{GREEDYDATA:netscaler_message} - %{DATA} %{WORD:netscaler_session_type}",
        "message", "<%{POSINT:syslog_pri}> %{DATE_US}:%{TIME} GMT %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:netscaler_message}"
      ]
    }
  }

  ### cisco asa filter
  if [type] == "cisco-asa" {

    # Split the syslog part and Cisco tag out of the message
    grok {
      match => ["message", "%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}"]
    }

    # Parse the syslog severity and facility
    syslog_pri { }

    # Parse the date from the "timestamp" field to the "@timestamp" field
    date {
      match => ["timestamp",
        "MMM dd HH:mm:ss",
        "MMM  d HH:mm:ss",
        "MMM dd yyyy HH:mm:ss",
        "MMM  d yyyy HH:mm:ss"
      ]
      timezone => "Europe/Paris"
    }

    # Clean up redundant fields if parsing was successful
    if "_grokparsefailure" not in [tags] {
      mutate {
        rename => ["cisco_message", "message"]
        remove_field => ["timestamp"]
      }
    }

    # Extract fields from each of the detailed message types
    # The patterns provided below are included in Logstash since 1.2.0
    grok {
      match => [
        "message", "%{CISCOFW106001}",
        "message", "%{CISCOFW106006_106007_106010}",
        "message", "%{CISCOFW106014}",
        "message", "%{CISCOFW106015}",
        "message", "%{CISCOFW106021}",
        "message", "%{CISCOFW106023}",
        "message", "%{CISCOFW106100}",
        "message", "%{CISCOFW110002}",
        "message", "%{CISCOFW302010}",
        "message", "%{CISCOFW302013_302014_302015_302016}",
        "message", "%{CISCOFW302020_302021}",
        "message", "%{CISCOFW305011}",
        "message", "%{CISCOFW313001_313004_313008}",
        "message", "%{CISCOFW313005}",
        "message", "%{CISCOFW402117}",
        "message", "%{CISCOFW402119}",
        "message", "%{CISCOFW419001}",
        "message", "%{CISCOFW419002}",
        "message", "%{CISCOFW500004}",
        "message", "%{CISCOFW602303_602304}",
        "message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "message", "%{CISCOFW713172}",
        "message", "%{CISCOFW733100}"
      ]
    }
  }
}

Output section:

output {

  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "elastic"
    password => "changeme"
  }

  if [type] == "cisco-asa" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "cisco-asa-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "changeme"
    }
  }

  if [type] == "Netscaler" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "netscaler-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "changeme"
    }
  }
}
