Problem getting Windows Event log into ELK

Hey.
We're testing ELK for accumulating logs from various systems.
We're running ELK on Windows and using nxlog to forward logs from the Windows servers to the ELK server.
Getting IIS logs works fine.
But as soon as we start forwarding the Windows event log, Kibana gives us an error like this:

The config that we're using:

nxlog.conf:

define ROOT C:\Program Files (x86)\nxlog
define ROOT_STRING C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log

<Extension charconv>
    Module xm_charconv
    AutodetectCharsets utf-8, euc-jp, utf-16, utf-32, iso8859-2
</Extension>

<Extension json>
    Module xm_json
</Extension>

# Windows Event Log
<Input Win_Eventlog>
    Module im_msvistalog
    SavePos FALSE
    ReadFromLast TRUE
    PollInterval 2
    Query <QueryList> \
              <Query Id="0"> \
                  <Select Path="...">*</Select> \
              </Query> \
          </QueryList>
    Exec convert_fields("AUTO", "utf-8");
    Exec to_json();
</Input>

<Output out>
    Module om_tcp
    Host ELK.local
    Port 10511
</Output>

<Route 1>
    Path Win_Eventlog => out
</Route>
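
With to_json(), each event should reach Logstash as a single JSON line; here's a trimmed, hypothetical example of what that looks like (field names as documented for im_msvistalog, all values invented):

{"EventTime":"2016-05-10 14:25:03","Hostname":"WIN-SRV01","Severity":"INFO","EventID":7036,"SourceName":"Service Control Manager","Channel":"System","SourceModuleName":"Win_Eventlog","Message":"The Windows Update service entered the running state."}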


logstash.conf:

input {

udp {
	port => 5000
	codec => plain { charset => "UTF-8" }
	type => "log4net-yellow"
}

udp {
	port => 5001
	type => "syslog"
}

udp { 
	port => 5002 
	codec => plain { charset => "UTF-8" } 
	type => "log4net-blue" 
}

udp { 
	port => 5003 
	codec => plain { charset => "UTF-8" } 
	type => "kpi" 
}

tcp {
	port => 10511
	codec => json
	type => "Win_Eventlog-1"
}
beats {
	port => 20515
}

}

filter {

if [type] == "log4net-yellow" {
	grok {
		patterns_dir => "../../patterns-1-master"
		remove_field => [ "message" ]
		match => { "message" => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{GREEDYDATA:tempMessage}" }
	}

	if !("_grokparsefailure" in [tags]) {
		mutate {
			replace => [ "message", "%{tempMessage}", "host", "%{tempHost}" ]
		}
	}
	mutate {
		remove_field => [ "tempMessage", "tempHost" ]
	}
}
	
if [type] == "log4net-blue" {
	grok {
		patterns_dir => "../../patterns-1-master"
		remove_field => [ "message" ]
		match => { "message" => "(?m)%{TIMESTAMP_ISO8601:sourceTimestamp} \[%{NUMBER:threadid}\] %{LOGLEVEL:loglevel} +- %{IPORHOST:tempHost} - %{GREEDYDATA:tempMessage}" }
	}

	if !("_grokparsefailure" in [tags]) {
		mutate {
			replace => [ "message", "%{tempMessage}", "host", "%{tempHost}" ]
		}
	}
	mutate {
		remove_field => [ "tempMessage", "tempHost" ]
	}
}
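
# For reference, a made-up line the two log4net patterns above should match
# (hostname and message text invented):
#   2016-05-10 14:25:03,117 [42] INFO  - srv01.example.local - cache warmup finished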
	

if [type] == "Win_Eventlog-1" {
	if [SourceModuleName] == "eventlog" {
		mutate {
			replace => [ "message", "%{Message}" ]
		}
		mutate {
			remove_field => [ "Message" ]
		}
	}
}

}

output {
	elasticsearch {
		hosts => ["localhost:9200"]
	}
}
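
For debugging, events can also be dumped to stdout alongside the elasticsearch output; a minimal sketch using the standard rubydebug codec:

output {
	elasticsearch {
		hosts => ["localhost:9200"]
	}
	stdout {
		codec => rubydebug
	}
}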


I've also tried sending the logs with:

Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();

and with no JSON at all, just:

Exec convert_fields("AUTO", "utf-8");

and I've tried these codecs in logstash.conf:

codec => json_lines { charset => "CP1252" }
codec => plain { charset => "UTF-8" }
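
For clarity, the json_lines variant replaced the codec inside the tcp input, e.g.:

tcp {
	port => 10511
	codec => json_lines { charset => "CP1252" }
	type => "Win_Eventlog-1"
}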

The result is always the same...

What is going wrong?

Many thanks!!!!
Gennady

Check your Kibana logs for more details.

To isolate the issue and avoid disturbing the ELK instance that is working well for IIS, I've installed a separate ELK instance just for the Windows events. The instance is up and running, and Logstash accepts data from the remote nxlog client.
The Kibana error is still the same.
Attached are links to the stdout from Logstash and Kibana.

https://drive.google.com/file/d/0B5naKByXjZEvZ2VCSG5iRHVrS2s/view?usp=sharing

https://drive.google.com/file/d/0B5naKByXjZEvNlBGLTZLc2Y2eFU/view?usp=sharing

The Logstash message looks good (I think).

I still don't get why Kibana is doing this...