"ERR failed to initialize logstash plugin... key file not configured"

Oh I see, you are using the wrong input plugin (the protocol is based on lumberjack, but there are subtle differences in content type). For beats you should use the beats input plugin.

Depending on the Logstash version you have installed, you will need to run:

bin/plugin install logstash-input-beats

or

bin/plugin update logstash-input-beats

There is a bug in the shipped beats plugin that has been fixed in recent versions.

The beats plugin config is documented here:
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html

Note: you can basically copy the lumberjack config, but add 'ssl => true' to enable TLS.
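
For example, a sketch of the change, with placeholder certificate paths (the lumberjack input always uses TLS, while the beats input makes it opt-in):

input {
  lumberjack {
    port => 5000
    ssl_certificate => "/path/to/server.crt"
    ssl_key => "/path/to/server.key"
  }
}

becomes:

input {
  beats {
    port => 5000
    ssl => true
    ssl_certificate => "/path/to/server.crt"
    ssl_key => "/path/to/server.key"
  }
}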

$ sudo ./plugin install logstash-input-beats
Validating logstash-input-beats
Installing logstash-input-beats
Installation successful

$ sudo systemctl restart logstash

That looks good. After that, I changed the input config:
input {
  beats {
    port => 5000
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Now I get this error:
{:timestamp=>"2015-11-03T15:11:56.644000-0600", :message=>"Beats input: unhandled exception", :exception=>#<TypeError: The field '@timestamp' must be a (LogStash::Timestamp, not a String (2015-11-03T21:11:41.345Z)>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/event.rb:138:in `[]='", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:138:in `create_event'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:138:in `create_event'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-plain-2.0.2/lib/logstash/codecs/plain.rb:35:in `decode'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:136:in `create_event'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:150:in `invoke'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:370:in `data'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:349:in `read_socket'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:361:in `ack_if_needed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:345:in `read_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:246:in `json_data_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:163:in `feed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:296:in `compressed_payload'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:163:in `feed'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:330:in `read_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/lumberjack/beats/server.rb:315:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-0.9.2/lib/logstash/inputs/beats.rb:150:in `invoke'", "org/jruby/RubyProc.java:271:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-0.9.1-java/lib/concurrent/executor/executor_service.rb:515:in `run'", "Concurrent$$JavaExecutorService$$Job_1733552081.gen:13:in `run'"], :level=>:error}

From the error log I see that logstash-input-beats version 0.9.2 is installed. No idea why. Update to the most recent version and the issue should be resolved (it's a known bug in 0.9.2).

Plugins are installed via Ruby gems. See the official [logstash-input-beats gem](https://rubygems.org/gems/logstash-input-beats).
No idea why the install did not get you the most recent one.
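
You can check which version actually got installed with the plugin tool (if I remember the flag correctly):

bin/plugin list --verbose | grep beats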

To update to the most recent version, run:

bin/plugin update logstash-input-beats

Which Logstash version are you using? I think you will need at least Logstash 1.5.4.
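
If unsure, this should print it:

bin/logstash --version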

The most current release, 2.0. Ohhhhhhhh. =p

I'll try an update to see if that forces it to a newer version. If not, I'll try a workaround to get 0.9.3.

On a side note: if I send ~1300 events at once using logstash-forwarder, logstash ends up dropping events. On the other hand, if I point the ESXi hosts that support our (very large) VDI deployment at it, it never drops a single event (over the course of a few days of testing), and the event rate for these hosts is 1000+ each at peak times. That's nearly 10,000 messages per second without a single one being dropped. I'm guessing that's due to the lumberjack plugin? I've disabled the ESXi hosts' syslog settings that point to this testing box for now while you guys help me troubleshoot this, but I thought it was interesting.

Using update worked:
# ./plugin update logstash-input-beats
Updating logstash-input-beats
Updated logstash-input-beats 0.9.2 to 0.9.3

I'll let you know how it goes when I fire off another test after lunch. Thanks so much for the help!

UPDATE:
That solves the Beats plugin problem and I can now ship logs to logstash using Filebeat! Although the nightly I was using from 2015-11-02 was only sending offset data and no messages. Updating to the 2015-11-04 nightly fixed that. Now I have to tweak my filter (again) to work with Filebeat. Looks like it is NOT dropping messages now =D
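
One filter tweak I expect to need (a guess; I haven't confirmed all the field changes): logstash-forwarder's "file" field looks like it is now "source" under Filebeat, so a rename along these lines should keep my existing filters working:

filter {
  # hypothetical shim: map Filebeat's "source" back to the old "file" field
  mutate { rename => { "source" => "file" } }
}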

Not dropping messages sounds great. Unfortunately I don't know Logstash well enough to help out with dropped messages in logstash. How do you know the messages are being dropped?

Both logstash-forwarder and filebeat have send-at-least-once semantics (so no logs will be lost). But once events are confirmed by Logstash, filebeat has no control over events being lost somewhere later in the processing chain.

But as far as I know, Logstash itself generates back pressure if it cannot process events in time. Plus, lumberjack uses timeouts to retry sending events (to guarantee nothing is lost).
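
If back pressure is what you are hitting, the beats input also has (if I remember correctly) a congestion_threshold option; a sketch, with an arbitrary value:

input {
  beats {
    port => 5000
    # seconds to block upstream senders before raising an error
    # (check the docs for your plugin version; the default is fairly low)
    congestion_threshold => 30
  }
}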

I was getting something along the lines of "timeout sending message, dropping: MESSAGEBODY" and, indeed, that message was not in ES. I'm not sure why logstash-forwarder-transported events were frequently dropped on their way to ES while filebeat-sent events are not. Believe me when I say it is/was perplexing.

@steffens @ruflin Spoke too soon. Logstash just isn't generating any messages saying that it is dropping events when I use the beats plugin.

From the filebeat debug log:
$ grep "source" filebeat | grep -v sourcename | wc -l
7974

That is the correct number of events that it should send.

Each event will have a single DOY field, so I enabled debugging on logstash and ran the files through again after deleting the C:\ProgramData\filebeat folder:
$ grep "source" filebeat | grep -v sourcename | wc -l
7974
$ grep '"DOY" =>' logstash.stdout | wc -l
484

That's missing 7490 events...

Only six events are recorded in /var/log/logstash/logstash.log, and they are all similar to this:

{:timestamp=>"2015-11-06T14:49:46.442000-0600", :message=>"Failed parsing date from field", :field=>"Date", :value=>"2015-10-05 %{Hour}:%{Minute}", :exception=>"Invalid format: "2015-10-05 %{Hour}:%{Minute}"", :config_parsers=>"yyyy-MM-dd HH:mm", :config_locale=>"default=en_US", :level=>:warn}

This is expected, since the file is a CSV and the first line doesn't match the filter. But there should be far more than 6 events like this.
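
(Side note: I could probably silence the header warnings entirely by dropping those lines; a sketch, assuming my header lines start with "Date,":)

filter {
  if [message] =~ /^Date,/ {
    # first line of each CSV is the header; drop it instead of date-parsing it
    drop { }
  }
}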

After this I cleared the logs and ES:

# echo $null > /var/log/logstash/logstash.log
# echo $null > /var/log/logstash/logstash.stdout
# curl -XDELETE 'http://localhost:9200/*'

With logstash-forwarder sending the events, I checked logstash.log and saw:

{:timestamp=>"2015-11-06T15:00:49.869000-0600", :message=>"too many attempts at sending event. dropping: 2015-10-20T04:57:00.000Z HLCEAVM 23:57,7,0,6,20,24,6276,6760,19,21,1803,2007,21,22,1312,1536,19,21,1142,1268,20,20,1800,1978,21,25,1682,2640,21,24,1965,2793,26,26,1864,2408,17,21,783,873", :level=>:error}

But I see that event in the logstash.stdout file:
# grep '2015-10-20T04:57' -a logstash.stdout
"@timestamp" => "2015-10-20T04:57:00.000Z",

Checking the debug log for total DOY occurrences:
# grep '"DOY"' -a logstash.stdout | wc -l
7974

I checked in ES and the dropped event was really dropped:
# curl -XGET 'http://localhost:9200/_search?q=@timestamp:"2015-10-20T04:57:00.000Z"'
{"took":13,"timed_out":false,"_shards":{"total":286,"successful":286,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]}}

So everything made it to logstash using logstash-forwarder but filebeat failed to deliver all of the events for some reason.

Can you run filebeat with -d '*' to get debug output from filebeat? Just from your text it's really hard to tell what's going on.
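
i.e. something like this, with the config path just an example:

filebeat -e -c filebeat.yml -d "*"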

You can also try to update filebeat and the logstash-input-beats plugin. We released rc1 yesterday with quite a few improvements.

Maybe you can provide some sample log-files (in private?) for testing?

I can absolutely provide some sample logfiles for testing. I can send you ~50 files along with my input, filter, and output configs. That should get you the same results I'm seeing from filebeat. I did run filebeat with the -d "*" option, so I have the complete debug log if you want me to send it. It is a little large to post.

Cleared all logs and cleared ES
Working with a single logfile with 145 lines:

From Filebeat-rc1
2015/11/09 20:21:55.639250 preprocess.go:91: DBG preprocessor forward
2015/11/09 20:21:55.639250 output.go:103: DBG output worker: publish 145 events
2015/11/09 20:21:55.702300 filebeat.go:133: DBG Events sent: 145
2015/11/09 20:21:55.703273 registrar.go:99: DBG Registrar: processing 145 events

Checking to see if ES has all 145 events:
# curl http://localhost:9200/_search?q=DOY:*
{"took":1,"timed_out":false,"_shards":{"total":6,"successful":6,"failed":0},"hits":{"total":9 ...

There should be 145 hits, not 9.
logstash.log is empty so no issue there as far as I can tell.

Checking logstash.stdout to see if for some reason events timed out being sent to ES:
# grep -a "DOY" /var/log/logstash/logstash.stdout | wc -l
9

So filebeat really didn't send 136 of the events to logstash, even though it is reporting that it did. I dug through the filebeat debug log and still don't see any indication that it timed out sending events to logstash.

Tried logstash-forwarder with the same file:
2015/11/09 14:37:13.409790 harvest: "C:\logstashcrt\Output\2015-09-03.log" (offset snapshot:0)
2015/11/09 14:37:13.410789 All prospectors initialised with 0 states to persist
2015/11/09 14:37:13.411790 Loading client ssl certificate: C:\logstashcrt\Test2\logstash-forwarder.crt and C:\logstashcrt\Test2\logstash-forwarder.key
2015/11/09 14:37:13.744833 Setting trusted CA from file: C:\logstashcrt\Test2\logstash-forwarder.crt
2015/11/09 14:37:13.746835 Connecting to [10.170.8.124]:5000 (10.170.8.124)
2015/11/09 14:37:13.818937 Connected to 10.170.8.124
2015/11/09 14:37:18.757507 Registrar: processing 145 events

So it is reporting that it sent 145 messages, just like filebeat did.
Next I checked ES:
# curl http://localhost:9200/_search?q=DOY:*
{"took":2,"timed_out":false,"_shards":{"total":16,"successful":16,"failed":0},"hits":{"total":145

That's more like it, there should be 145 hits and there are.

logstash.log shows an expected error due to the log file format:
{:timestamp=>"2015-11-09T14:37:18.465000-0600", :message=>"Failed parsing date from field", :field=>"Date", :value=>"2015-09-03 %{Hour}:%{Minute}", :exception=>"Invalid format: "2015-09-03 %{Hour}:%{Minute}" is malformed at "%{Hour}:%{Minute}"", :config_parsers=>"yyyy-MM-dd HH:mm", :config_locale=>"default=en_US", :level=>:warn}

Checking to see what logstash.stdout holds shows the expected result:
# grep -a "DOY" /var/log/logstash/logstash.stdout | wc -l
145

If there is anything else I can provide, let me know. I'm going to go ahead and work around this one file at a time, using a script to move the files I need with 5-second pauses between each move operation. That should keep logstash-forwarder from choking the life out of ES. Otherwise it tries to send something along the lines of 172,000 events in less than 20 ms, which kills what I have set up.
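
Something like this, with made-up paths (the real ones match my setup):

# move one file at a time into the folder logstash-forwarder watches,
# pausing 5 seconds between moves so it never ships everything at once
Get-ChildItem 'C:\staging\*.log' | ForEach-Object {
    Move-Item $_.FullName 'C:\logstashcrt\Output\'
    Start-Sleep -Seconds 5
}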

Just used nxlog to forward the events, and it delivered all 7940 events I was testing within <50 ms. Not a single event was dropped by logstash either.
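
For reference, the nxlog side is roughly this (I'm pointing it at a plain tcp input in Logstash rather than the beats input; port 5514 here is just an example):

<Input csvfiles>
    Module  im_file
    File    'C:\logstashcrt\Output\*.log'
</Input>

<Output logstash>
    Module  om_tcp
    Host    10.170.8.124
    Port    5514
</Output>

<Route r>
    Path    csvfiles => logstash
</Route>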

EDIT: I'll still help troubleshoot the filebeat issue if I can but I'm going to stick with nxlog since it works. I'll be moving to production tomorrow with what I have setup already but I'll retain my test setup to help with filebeat troubleshooting.

@evileric77 Thanks a lot for helping us out here. We will keep looking into the issue and let you know as soon as it is resolved.

@evileric77 Thanks for the help on the silent event drops. Others have stumbled on this one too; see the filebeat issue. We have a fix ready for it.

Good to hear. Glad you guys can get it fixed.