Logstash showing only parse failures from log

Hi All,
Not sure what's happening on my stack, but yesterday I added a new log file to the Filebeat config on a couple of client machines.
Everything was working fine, but since including this new log file I'm experiencing really weird behaviour: the only messages showing up in Kibana are the ones with tags:_grokparsefailure (about 1% of the log entries).
The rest of the log file is simply not showing up...

On the client

On the client I've got Filebeat 1.2.2; here's the filebeat.yml extract I added:

-
  paths:
    - /mypath/mylog.log
  document_type: MY_TYPE
  input_type: log
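
For completeness, that extract sits under filebeat.prospectors; the output section of the same filebeat.yml points at the Logstash server (host and port are placeholders, the rest is stock):

output:
  logstash:
    hosts: ["my-elk-server:5044"]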

On the ELK server

Elasticsearch 2.4.1
Logstash 2.2.4
New file in /etc/logstash/conf.d:

filter {
  if [type] == "MY_TYPE" {
    grok {
      match => { "message" => [ "%{TIME:my_timestamp}%{SPACE}%{WORD:Severity}%{SPACE}\[%{DATA:Thread}\]%{SPACE}\(%{DATA:my_Component}\)%{SPACE}%{GREEDYDATA:my_Message}" ] }
    }
    date {
      match => [ "my_timestamp", "HH:mm:ss,SSS" ]
      timezone => "Europe/London"
    }
  }
}
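
For reference, a made-up line in the format that pattern expects would look like:

12:34:56,789 INFO [main] (MyComponent) something interesting happened

which grok splits into my_timestamp, Severity, Thread, my_Component and my_Message, and which the date filter then parses with "HH:mm:ss,SSS".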

Did any of you ever face this kind of issue?

Best regards!

Wild guess: The new events you're trying to send are incompatible with the mappings in Elasticsearch. If that's the case you should find details in the Logstash log.
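
One quick way to check, assuming the default logstash-* index naming and Elasticsearch listening on localhost, is to pull the current mappings and compare the field types with what the new grok filter produces:

curl 'localhost:9200/logstash-*/_mapping?pretty'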

Thanks for the quick feedback, I've given it a look but nothing relevant is being logged in there.
Any suggestion on how to increase its verbosity?

I'd expect that to be logged with the default log level, but starting Logstash with --verbose or even --debug will increase the log level.
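
For example, assuming the standard package layout, something like:

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

or, if it runs as a service, add the flag to LS_OPTS in /etc/default/logstash (or /etc/sysconfig/logstash on RPM-based systems).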

OK, there's something when running with --debug: I can see the messages coming in, so at least I'm now 100% sure Filebeat is not to blame.
Yet I'm not seeing any error trapped; it doesn't complain anywhere about mappings in Elasticsearch, and I'm even seeing data being assigned to fields (even if the debug output is quite hard to follow).
Should I look for some specific string that may look harmless at first sight?