Can't get any logs from one specific filter

The issue: I have a filter that is responsible for processing Nginx error logs. I set it up the same way as the access log filter. But! While the "access" filter parses logs just fine, the error filter simply skips them. There is no _grokparsefailure record in Elasticsearch and no sign of the logs arriving, even though they definitely are coming in.

Configuration file:

filter {

    if [type] == "nginx_error" {
        grok {
            patterns_dir => "/etc/logstash/patterns"
            match => { "message" => "%{NGINX_ERROR}" }
            named_captures_only => true
        }
        date {
            match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
        }
        geoip {
            source => "nginx_clientip"
            target => "geoip"
            database => "/etc/logstash/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
        }
    }

    if [type] == "nginx_access" {
        grok {
            add_tag => [ "valid" ]
            patterns_dir => "/etc/logstash/patterns"
            match => { "message" => "%{NGINX}" }
            named_captures_only => true
        }
        date {
            match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
        }
        geoip {
            source => "nginx_clientip"
            target => "geoip"
            database => "/etc/logstash/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        if "valid" not in [tags] {
            drop { }
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            remove_tag => [ "valid" ]
        }
    }
}

Patterns file in directory /etc/logstash/patterns:

NGINX %{IPORHOST:nginx_clientip} %{USER:nginx_user_ident} %{USER:nginx_user_auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:nginx_method} %{URIPATHPARAM:nginx_request_address}(?: HTTP/%{NUMBER:nginx_http_version})?|-)" %{NUMBER:nginx_response} (?:%{NUMBER:nginx_bytes}|-) "(?:%{URI:nginx_referrer}|-)"%{GREEDYDATA}
NGINX_ERROR %{DATESTAMP:timestamp} \[%{WORD:severity}\] %{INT:pid}\#%{INT:tid}: \*%{INT:cid} %{DATA:log_message}, client: %{IPORHOST:nginx_clientip}, server: (?:%{DATA:nginx_server_name}|), request: "%{WORD:nginx_method} %{DATA:nginx_request_address} HTTP/%{NUMBER:nginx_http_version}", host: "%{IPORHOST:nginx_host}"

The only sign: I always get this strange message in the Elasticsearch logs.

Aug 29 12:59:47 elk elasticsearch[2949]: [2016-08-29 12:59:47,187][INFO ][cluster.metadata         ] [Urthona] [filebeat-0015.11.11] update_mapping [nginx_error]

Also, nginx_error skips a couple of messages bound for Elasticsearch; they're out of format. Here is an example of the messages in question.

 2016/08/29 13:09:32 [notice] 11473#11473: signal process started
 2016/08/16 13:06:58 [emerg] 20772#20772: invalid parameter "http://127.0.0.1:8080" in /etc/nginx/sites-enabled/myhost:23

I've racked my brain trying to figure out what's wrong. If anybody has faced this silly but strange issue, please let me know.

Reduce the complexity. Replace the elasticsearch output with a stdout { codec => rubydebug } output. Are you seeing any nginx_error events in Logstash's stdout stream (typically /var/log/logstash/logstash.stdout)?
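
A minimal output block for that test could look like this (swap it in for your existing elasticsearch output while you debug):

output {
    stdout { codec => rubydebug }
}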

Yes, I clearly see these kinds of events, although I haven't changed the configuration since the moment I posted here.
Here is an example event, by the way.

   {
                  "message" => "2016/08/30 16:18:52 [error] 1111#1111: *6537 user \"t\" was not found in \"/etc/nginx/conf.d/.htpasswd\", client: 70.95.36.8, server: qa.example.com, request: \"GET / HTTP/1.1\", host: \"qa.example.com\"",
                 "@version" => "1",
               "@timestamp" => "0016-08-30T16:18:52.000Z",
                   "offset" => 1340,
                    "count" => 1,
                   "fields" => {
        "instance_id" => "i-813d8615"
    },
                   "source" => "/var/log/nginx/qa.example.com/error.log",
                     "type" => "nginx_error",
               "input_type" => "log",
                     "beat" => {
        "hostname" => "qa.example.com",
            "name" => "qa.example.com"
    },
                     "host" => "qa.example.com",
                     "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
                "timestamp" => "16/08/30 16:18:52",
                 "severity" => "error",
                      "pid" => "1111",
                      "tid" => "1111",
                      "cid" => "6537",
              "log_message" => "user \"t\" was not found in \"/etc/nginx/conf.d/.htpasswd\"",
           "nginx_clientip" => "70.95.36.8",
        "nginx_server_name" => "qa.example.com",
             "nginx_method" => "GET",
    "nginx_request_address" => "/",
       "nginx_http_version" => "1.1",
               "nginx_host" => "qa.example.com",
                    "geoip" => {
                      "ip" => "70.95.36.8",
           "country_code2" => "RU",
           "country_code3" => "RUS",
            "country_name" => "Russian Federation",
          "continent_code" => "EU",
             "region_name" => "57",
               "city_name" => "Penza",
             "postal_code" => "440961",
                "latitude" => 53.20070000000001,
               "longitude" => 15.00460000000001,
                "timezone" => "Europe/Samara",
        "real_region_name" => "Example",
                "location" => [
            [0] 15.00460000000001,
            [1] 53.20070000000001
        ],
             "coordinates" => [
            [0] 15.00460000000001,
            [1] 53.20070000000001
        ]
    }
}

After restarting Logstash and going back to the Elasticsearch output, I triggered the same error on the node and didn't get any feedback from Logstash. Could it be an Elasticsearch issue?

           "@timestamp" => "0016-08-30T16:18:52.000Z",

This timestamp is obviously wrong, so unless you set Kibana to issue a query that spans over 2000 years you're not going to see your events.

Exactly!

The problem I found was in the NGINX_ERROR pattern. The DATESTAMP pattern captures only the last two digits of the year. So even though the year is 2016, DATESTAMP doesn't raise an error, it just quietly captures a two-digit value.
The date filter in my Logstash configuration expected four digits, so here we go with year 16 counted from the time when Jesus was born, lol.
I made the same mistake with HashiCorp Consul logs: testing the pattern, but not testing the values it returns.
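
For anyone curious why: as far as I can tell (this is from the stock grok-patterns file, double-check the copy shipped with your Logstash version), DATESTAMP is built up roughly like this:

YEAR (?>\d\d){1,2}
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}

Since neither DATE_US nor DATE_EU can match a four-digit year from the first character of "2016/08/30", the unanchored grok match simply slides forward and captures "16/08/30 16:18:52" (exactly what shows up in the rubydebug output above), which the date filter's yyyy then reads as year 16.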

Eventually, here is my workaround for nginx_error.

Patterns:

NGINX %{IPORHOST:nginx_clientip} %{USER:nginx_user_ident} %{USER:nginx_user_auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:nginx_method} %{URIPATHPARAM:nginx_request_address}(?: HTTP/%{NUMBER:nginx_http_version})?|-)" %{NUMBER:nginx_response} (?:%{NUMBER:nginx_bytes}|-) "(?:%{URI:nginx_referrer}|-)"%{GREEDYDATA}
NGINX_ERROR_TIMESTAMP %{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}
NGINX_ERROR %{NGINX_ERROR_TIMESTAMP:timestamp} \[%{WORD:severity}\] %{INT:pid}\#%{INT:tid}: \*%{INT:cid} %{DATA:log_message}, client: %{IPORHOST:nginx_clientip}, server: (?:%{DATA:nginx_server_name}|), request: "%{WORD:nginx_method} %{DATA:nginx_request_address} HTTP/%{NUMBER:nginx_http_version}", host: "%{URIHOST:nginx_host}"

Config (with the broken date pattern for nginx_access corrected as well):

filter {

    if [type] == "nginx_error" {
        grok {
            add_tag => [ "valid" ]
            patterns_dir => "/etc/logstash/patterns"
            match => { "message" => "%{NGINX_ERROR}" }
            named_captures_only => true
        }
        date {
            match => [ "timestamp", "yyyy/MM/dd HH:mm:ss" ]
        }
        geoip {
            source => "nginx_clientip"
            target => "geoip"
            database => "/etc/logstash/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        if "valid" not in [tags] {
            drop { }
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            remove_tag => [ "valid" ]
        }
    }

    if [type] == "nginx_access" {
        grok {
            add_tag => [ "valid" ]
            patterns_dir => "/etc/logstash/patterns"
            match => { "message" => "%{NGINX}" }
            named_captures_only => true
        }
        date {
            match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
        }
        geoip {
            source => "nginx_clientip"
            target => "geoip"
            database => "/etc/logstash/GeoLiteCity.dat"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        if "valid" not in [tags] {
            drop { }
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            remove_tag => [ "valid" ]
        }
    }
}

Now everything works fine with no errors at all. Thanks a lot, I greatly appreciate your help and your attentive eyes!