VMware and syslog

Hi,
we have a VMware infrastructure and have been using Log Insight to process and visualize logs. However, as we move forward we want to use the functionality of the ELK stack to collect, parse and display VMware syslog events in Kibana.

I have installed the ELK stack and can view basic items in Kibana.
We are not using logstash-forwarder or Filebeat.

At the moment I am looking at how to configure the input/filter/output file in such a way that we can get a valuable Kibana dashboard for the VMware/ESXi output.

Has anyone done this already? Can these scripts or procedures be shared?

Thanks,
Daragh

What file are you referring to here?

Hi Mark,
Thanks very much for getting back to us on this.

I didn't put the whole configuration file below into this message, since it is repetitive across the different types of faults.

We have this path: /etc/logstash/conf.d
And at that location we have this conf file:

[root@logstash221 conf.d]# cat logstash-syslog.conf
input {
  udp {
    port => 514
    type => esxi
  }
}

filter {
  if [type] == "esxi" or "lumberjack" in [tags] {
    grok {
      break_on_match => true
      match => [
        "message", "<%{POSINT:syslog_pri}>%{TIMESTAMP_ISO8601:@timestamp} %{SYSLOGHOST:hostname} %{SYSLOGPROG:message_program}: (?<body_type_1>(?<message_body>(?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} '%{DATA:message_service}'\ ?%{DATA:message_opID}\])) \[%{DATA:message_service_info}\]\ (?<message_syslog>(%{GREEDYDATA}))))",
        "message", "<%{POSINT:syslog_pri}>%{TIMESTAMP_ISO8601:@timestamp} %{SYSLOGHOST:hostname} %{SYSLOGPROG:message_program}: (?<body_type_2>(?<message_body>(?<message_system_info>(?:\[%{DATA:message_thread_id} %{DATA:syslog_level} '%{DATA:message_service}'\ ?%{DATA:message_opID}\])) (?<message_syslog>(%{GREEDYDATA}))))",
        "message", "<%{POSINT:syslog_pri}>%{TIMESTAMP_ISO8601:@timestamp} %{SYSLOGHOST:hostname} %{SYSLOGPROG:message_program}: (?<body_type_3>(?<message_body>%{GREEDYDATA:message_syslog}))",
        "message", "<%{POSINT:syslog_pri}>.*?\s*\r*\t*[\s]*.*?(?<message_program>[a-zA-Z0-9\-\[\]_]{3,})[:][\s]*(?<body_type_6>(?<message_body>(?<message_syslog>.*)))",
        "message", "(?<body_type_7>(?<message_body>(?<message_debug>.*)))"
      ]
    }
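    # Note: with break_on_match => true (the default) grok stops at the first
    # pattern that matches, so the patterns above are tried top to bottom and
    # the catch-all body_type_7 pattern only fires when nothing more specific
    # matched.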
    if [message] =~ /(?i)warning|error|fault|ALERT|busy|Failed|[\s]dead|[\s]space|esx.|vob.|com.vmware|nmp|volume|consolidate|FS3|question|ha-eventmgr|VisorFS|Fil3|DLX|MCE|HBX|MPN|mpn|p2m|Reset|timeout|msg./ and [message] !~ /(?i)crossdup|hostprofiletrace/ {

      if [message] =~ /(?i)vmkwarning:/ and [message] !~ /(?i)crossdup|performance|LinuxCharWrite/ {
        # <181>2014-12-18T18:30:36.400Z esx.vmware.com vmkwarning: cpu29:4317)WARNING: vmw_psp_rr: psp_rrSelectPathToActivate:972:Could not select path for device "naa.60002ac000000000000004b00000d155".
        mutate {
          add_tag => "vmkwarning"
        }
      }
      if [message] =~ /(?i)ALERT:/ {
        # <181>2014-12-17T07:50:52.629Z esx.vmware.com vmkernel: cpu9:8942)ALERT: URB timed out - USB device may not respond
        mutate {
          add_tag => "achtung"
          add_field => { "alert" => "ALERT" }
        }
      } else if [message] =~ /(?i)[\s]dead/ {
        # <166>2014-09-15T14:52:23.782Z esx.vmware.com Hostd: [77381B90 error 'Default'] Unable to build Durable Name dependent properties: Unable to query VPD pages

... repetitive section edited/removed here, since the full config is too big to include in a single reply ...

output {
  elasticsearch {
  }
  if [type] == "esxi" and "vmkwarning" in [tags] {
    file { path => "/var/log/vmkwarning-%{+YYYY-MM-dd}" }
  }
  else if [type] == "esxi" and "achtung" in [tags] {
    file { path => "/var/log/achtung-%{+YYYY-MM-dd}" }
  }
  else if [type] == "esxi" and "alert" in [tags] {
    file { path => "/var/log/alert-%{+YYYY-MM-dd}" }
  }
  else if [type] == "esxi" and "failed_to" in [tags] {
    file { path => "/var/log/failed_to-%{+YYYY-MM-dd}" }
  }
  else if [type] == "esxi" and "iorm" in [tags] {
    file { path => "/var/log/iorm-%{+YYYY-MM-dd}" }
  }
  else if [type] == "esxi" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/failed_syslog_events-%{+YYYY-MM-dd}" }
  }
}

We have alarms such as the text below coming into this server over port 514 from the ESXi hosts:

"*** CRITICAL *** Storage: All Paths Down (APD)":

And we want to build a dashboard based on the syslog messages coming into this server from the VMware ESXi hosts.
Unfortunately, at the moment we get no outputs, even from the last conditional of the output section:

else if [type] == "esxi" and "_grokparsefailure" in [tags] {
  file { path => "/var/log/failed_syslog_events-%{+YYYY-MM-dd}" }
}

This is meant to catch all the items not caught by the earlier output conditions.
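As an illustration of the kind of event we want to pick out for the dashboard, something along these lines is roughly what we are aiming for (the "apd" tag name is just an example made up for this post, it is not in the config above):

filter {
  if [type] == "esxi" and [message] =~ /(?i)all paths down|APD/ {
    # Tag storage "All Paths Down" alarms so a saved Kibana search or
    # visualization can be built on tags:apd (example tag name only).
    mutate {
      add_tag => "apd"
    }
  }
}

With a tag like that in place we could chart APD alarms per host over time.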

br,
Daragh

There's a lot happening here, so start with the basics.

Are events making it through the pipeline? Are they in the right format?
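A quick way to check is to temporarily add a stdout output with the rubydebug codec so you can see exactly what each event looks like as it leaves the pipeline (and, depending on your Logstash version, running with --configtest first will tell you whether the config file even parses):

output {
  # Temporary debugging output: prints every event to the console as a
  # Ruby-style hash so you can confirm events arrive and see which fields
  # and tags they end up with.
  stdout { codec => rubydebug }
}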

Hi Mark,
we know ESXi logs are coming through, since we changed the syslog setting
in ESXi to point at this server's IP.

In addition, when we used the simpler script below, which caught all the failures with
the output block shown, we could build a basic dashboard of those failures.

But we need more detail than just catching all the failures as a test; we need to
be able to rewrite the block to capture and visualize the ESXi syslog events.

So yes, events are definitely coming in constantly.
And they are in the correct format, since we can visualize some basic ones as a trial, as shown below.

br,
Daragh

[root@logstash221 conf.d]# cat logstash-syslog.conf
input {
  udp {
    type => syslog
    port => 514
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
  }
}
output {
  elasticsearch {
  }
  if [type] == "syslog" and "_grokparsefailure" in [tags] {
    file { path => "/var/log/failed_syslog_events-%{+YYYY-MM-dd}" }
  }
}
[root@logstash221 conf.d]#
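For what it's worth, the next step we are looking at on top of this simple config is decoding the numeric syslog priority (the <181>-style prefix) into facility and severity, which should give us something meaningful to split a Kibana visualization on. A rough sketch we have not tested yet, using the standard syslog_pri filter:

filter {
  if [type] == "syslog" and "_grokparsefailure" not in [tags] {
    # Turn the numeric priority captured into syslog_pri above into
    # human-readable syslog_facility / syslog_severity fields (the filter
    # reads the syslog_pri field by default).
    syslog_pri { }
  }
}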

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.