"ERR failed to initialize logstash plugin... key file not configured"

I've successfully set up logstash-forwarder without much issue on Windows, but when I try to set up filebeat I get this key error. I've set the key according to:
https://www.elastic.co/guide/en/beats/filebeat/current/_logtash_forwarder_to_filebeat_migration.html

Exact Error:
outputs.go:104: ERR failed to initialize logstash plugin as output: key file not configured
beat.go:97: CRIT key file not configured

I've also tried adding "certificate-key:". Same issue. Also, the supplied "filebeat.yml" in the zip package has an invalid control character at Line: 1, Column: 1. You can configure that file until the cows come home but it won't work properly.
This is the error that using that file produces:
YAML config parsing failed on ./filebeat.yml: yaml: control characters are not allowed. Exiting

I hate YAML so much you don't even know... Give me curlies any day...

This is my config.

filebeat:
  prospectors:
    -
      paths:
        - c:\logstash\Output\*.log
      type: log
      fields:
        sourcename: vdidaily
      ignore_older: 24h
  registry_file: C:\ProgramData\Filebeat\registry
output:
  logstash:
    enabled: true
    hosts:
      - 10.170.8.124:5000
    index: logstash
    tls:
      certificate: c:\logstashcrt\Test2\logstash-forwarder.crt
      certificate-ssl: c:\logstashcrt\Test2\logstash-forwarder.key
      certificate-authorities:
        - c:\logstashcrt\Test2\logstash-forwarder.crt
      timeout: 40

@evileric77 For the YAML issue: I just found the problem and it is an issue in our build process. Even though registry_file: C:\ProgramData\Filebeat\registry looks like a valid line, the F is a special character. Can you delete the F and the \ and retype them yourself?
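
If you want to confirm there really is a hidden character before retyping anything, something like this in PowerShell should expose it (a rough sketch; assumes PowerShell 3.0+ for -Raw and that filebeat.yml is in the current directory):

  # Dump the code points of the first characters of the file; anything outside
  # printable ASCII (32-126) or newlines (10, 13) near Line 1, Column 1 is the culprit.
  $text = Get-Content .\filebeat.yml -Raw
  $text.Substring(0, 40).ToCharArray() | ForEach-Object { [int]$_ }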

@evileric77 Thanks for reporting this. There is an issue with the LSF migration guide. It lists the wrong TLS configuration options. It will be corrected soon.

The changes are:

  • certificate-ssl -> certificate_key
  • certificate-authorities -> certificate_authorities

https://github.com/elastic/filebeat/pull/147/files#diff-29177ec7016c5e1a7a690b59b167deb8R192
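
Applied to the config you posted above, the tls section should then read (same paths, only the option names change):

  tls:
    certificate: c:\logstashcrt\Test2\logstash-forwarder.crt
    certificate_key: c:\logstashcrt\Test2\logstash-forwarder.key
    certificate_authorities:
      - c:\logstashcrt\Test2\logstash-forwarder.crt
    timeout: 40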

@ruflin I forgot to mention that special character in my post. I did find that as well and my config is a 'from scratch' write-up.

@andrewkroh When I originally wrote the config lines for TLS, I wrote certificate_key and certificate_authorities. After that config, written based on the comments in the sample filebeat.yml, failed to work, I decided to look online and write a file up from scratch. Had I simply copied my config over I would not have had any issues with the config file. I'm glad I did it this way, since it forced me to post and bring attention to a few things lol. I normally just figure it out on my own and don't post.

That being said, the changes to the tls options produced a usable yml; however:
service_windows.go:45 ERR Error: The service process could not connect to the service controller

I'm running it elevated and it is not shipping any logs.

The "special char in the config should be fixed with this pull request: https://github.com/elastic/filebeat/pull/150

As mentioned here, the ERR windows output can be ignored as long as filebeat is running in the foreground. But we need to fix that error message on our side.

Can you enable debugging on the filebeat side with -d "*" to check what the output is?
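
For reference, running it in the foreground from the extracted folder with something like this should give the full debug output (a sketch; the exe and config paths are whatever you have locally, and if I remember the flags right, -e sends the logs to stderr):

  .\filebeat.exe -c .\filebeat.yml -e -d "*"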

Well, it is definitely running but doesn't seem to read the host I gave it.

DBG Initializing output plugins
WARN Couldn't load GeoIP database
DBG create output worker: 0x0, 0x0
WARN No output is defined to store the topology. The server fields might not be filled.
INFO No shipper name configured, using hostname 'HLCEAVM'
DBG create bulk processing worker (interval=1s, bulk size=10000)
DBG Init filebeat
DBG Registry file set to: C:\ProgramData\Filebeat\registry
INFO Loading registrar data from C:\Users\REDACTED\Downloads\filebeat-1.0.0-beta4-windows/C:\ProgramData\Filebeat\registry
ERR Error: The service process could not connect to the service controller.
DBG Set idleTimeoutDuration to 5s

No events get sent to the logstash server

DBG Start harvesting unkown file: c:\logstash\Output\2015-10-17.log
DBG Skipping file (older than ignore older of 24h0m0s): c:\logstash\Output\2015-10-17.log
DBG Check file for harvesting: c:\logstash\Output\2015-10-18.log
DBG Start harvesting unkown file: c:\logstash\Output\2015-10-18.log
DBG Skipping file (older than ignore older of 24h0m0s): c:\logstash\Output\2015-10-18.log
DBG Check file for harvesting: c:\logstash\Output\2015-10-19.log
DBG Start harvesting unkown file: c:\logstash\Output\2015-10-19.log
DBG Skipping file (older than ignore older of 24h0m0s): c:\logstash\Output\2015-10-19.log
DBG Check file for harvesting: c:\logstash\Output\2015-10-20.log
DBG Start harvesting unkown file: c:\logstash\Output\2015-10-20.log
DBG Launching harvester on new file: c:\logstash\Output\2015-10-20.log
DBG Check file for harvesting: c:\logstash\Output\2015-10-26.log
DBG Start harvesting unkown file: c:\logstash\Output\2015-10-26.log
DBG harvest: "c:\logstash\Output\2015-10-20.log" (offset snapshot:0)
DBG Launching harvester on new file: c:\logstash\Output\2015-10-26.log
DBG scan path c:\logstash\Output*.log
DBG harvest: "c:\logstash\Output\2015-10-26.log" (offset snapshot:0)
INFO All prospectors initialised with 0 states to persist
DBG Send events to output
DBG send event
DBG preprocessor
DBG preprocessor forward
DBG output worker: publish 253 events
DBG preprocessor
DBG preprocessor forward
DBG output worker: publish 253 events
DBG preprocessor
DBG preprocessor forward
DBG output worker: publish 253 events
DBG output worker: no events to publish
DBG preprocessor
DBG preprocessor forward
DBG output worker: publish 253 events

I tried
hosts: ["10.170.8.124:5000"]
and
hosts:
  - 10.170.8.124:5000

I can start a new topic if you want since this is quite a side track from the original issue.

@evileric77 I think we can keep it in this topic, as this could still be related to a key issue.

What is the content of your registrar file (C:\ProgramData\Filebeat\registry) after running the above?

Host configuration looks good. I prefer the first one.

@ruflin

Sounds good to me. My C:\ProgramData\filebeat\registry file contents:

{}

That's it. lol

I guess it should be noted that I let filebeat run and run for about an hour before stopping it. The resulting log file was huge due to the debug option being set, but it was just cycling the check/harvest/send/preprocessor messages over and over.

@evileric77 Looking into this case, I realised that we potentially ignore some important log messages when the output is sent; those could help us find the issue here: https://github.com/elastic/libbeat/pull/229

Did you check whether filebeat on your system can send the data to an "insecure" ES instance?
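
Something along these lines in filebeat.yml should be enough for that test (a rough sketch; the host and port are placeholders, and it's worth double-checking the option names against the sample filebeat.yml that ships in the zip):

  output:
    elasticsearch:
      enabled: true
      hosts: ["10.170.8.124:9200"]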

I have not tried that but I will tomorrow. I am testing this at work and I have a pretty significant project I need to finish by tomorrow morning. I'll try tomorrow and let you know what happens and what I come up with when it comes to testing insecure.

@evileric77 There is also now a new nightly available with better logging: https://beats-nightlies.s3.amazonaws.com/index.html?prefix=filebeat/ You could also try this if it is ok for you to use the nightly build.

Seems to be the logstash plugin. I get the same warning and the exact same results if I remove the TLS options:

DBG Initializing output plugins
WARN Couldn't load GeoIP database
DBG create output worker: 0x0, 0x0
WARN No output is defined to store the topology. The server fields might not be filled.
INFO No shipper name configured, using hostname 'HLCEAVM'
DBG create bulk processing worker (interval=1s, bulk size=10000)
DBG Init filebeat
DBG Registry file set to: C:\ProgramData\Filebeat\registry
INFO Loading registrar data from C:\Users\REDACTED\Downloads\filebeat-1.0.0-beta4-windows/C:\ProgramData\Filebeat\registry
DBG Set idleTimeoutDuration to 5s
ERR Error: The service process could not connect to the service controller.

I'll try sending directly to ES unsecured next, though that will require me to change the way I have ES set up to begin with.

I see no warnings related to the logstash plugin. The warning about the topology can be ignored; it doesn't hinder the logstash output from functioning.

I tried the nightly since you said it has better error message logging, and reduced the input to a single log file.

...
DBG Publish: { "@metadata": { "beat": "logstash", "type": "log" }, "@timestamp": "2015-11-02T21:29:43.777Z", "count": 1, "fields": { "sourcename": "vdidaily" }, "fileinfo": {}, "input_type": "", "line": 145, "message": "23:57,3,0,0,24,29,577,966,18,22,465,545,18,24,529,619,21,25,681,1228,0,3,107,37,21,26,465,837,17,22,431,526,23,25,806,903,19,23,464,1004", "offset": 21433, "shipper": "HLCEAVM", "source": "c:\\logstashcrt\\Output\\2015-10-04.log", "type": "log" }
DBG preprocessor forward
DBG output worker: publish 145 events
INFO Error publishing events (retrying): EOF
INFO Error publishing events (retrying): EOF
INFO Error publishing events (retrying): EOF
DBG preprocessor

Then it loops again with the same log file and the same messages (all of which are correct). The contents of the file are indeed 145 lines. Logstash-forwarder occasionally misses events in these log files when I use the VM to backfill the log files (like I'm doing now). I was hoping that Filebeat would prevent this from happening, since I don't really want to backfill 1+ years of log files one at a time.

I don't fully understand what you're doing or what it is you're trying to achieve.

But both logstash-forwarder and filebeat try to guarantee at-least-once delivery. That is, if the publisher fails to send log lines, it has to retry.

So you basically want to drop messages if Logstash cannot process them? Or is it that you don't want to process the already available log files/lines, but only start processing new lines?

The log file is generated by a script that appends to the file every 10 minutes throughout the day. Once the day is over, a new log file is generated. I'm new to using ELK and no one here has ever used it before either. I would like to pull ALL of the previously generated logs into logstash so I can leverage Kibana dashboards for the data that I do have. That is what I mean by "backfill". I'm not trying to drop anything. You can ignore the fact that I'm backfilling logs, as it is irrelevant to the issue with Filebeat.

My problem with filebeat is that it fails to send ANY logs to logstash.
My problem with logstash-forwarder is that it fails to send complete logs when there are many log files that fit the glob all at once. This is 400+ logs with 145 lines each; that's over 50,000 messages it has to send. I've added stdout { codec => rubydebug } and dug through logstash.stdout, where I don't see the missing messages (which is why I think they never got sent by the forwarder), checked logstash.log and see no problems, and checked logstash.err (empty file).

Maybe I'm missing something, but I feel like logstash-forwarder.exe is unreliable, and I wanted to try filebeat since it will be replacing it anyway. I'm not sure what I've said previously that made you think I was trying to drop messages, since that is the exact opposite of what I'm trying to accomplish.

@evileric77 You wrote above that the 145 lines were read as expected. So it seems like filebeat works correctly as long as you use the console output?

If the above is the case, let's focus on why the connection to logstash / elasticsearch doesn't work.

I'm sure we will find a solution here.
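
One other thing worth ruling out is basic reachability of the logstash port from the Windows machine, independent of filebeat. For example (Test-NetConnection needs PowerShell 4.0+; host and port are taken from your config above):

  Test-NetConnection 10.170.8.124 -Port 5000

From the logstash host itself you could also check that the TLS listener answers, e.g. with openssl s_client -connect 10.170.8.124:5000.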

Side note here: Today we fixed an encoding issue for Windows. So in case you had an encoding defined in your config, you were probably also affected by this.

You mean this encoding issue?

message=>"Lumberjack input: unhandled exception", :exception=> NoMethodError: undefined method `force_encoding' for 1:Fixnum>,

I just started seeing that in my logstash log while trying to use the 2015-11-02 filebeat nightly.

Huh, this error message is funny. What's your logstash config?

On an unhandled exception, the beats plugin should print: "Beats input: unhandled exception"

The beats plugin doesn't even use 'force_encoding': https://github.com/logstash-plugins/logstash-input-beats/blob/master/lib/logstash/inputs/beats.rb, but the lumberjack plugin does.
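
For reference, a beats input block would look roughly like this (a sketch only; the port is a placeholder, the certificate paths are just the usual example locations, and the exact option names should be checked against the logstash-input-beats docs):

  input {
    beats {
      port => 5044
      ssl => true
      ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
      ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    }
  }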

Input:

input {
  lumberjack {
    port => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter:

filter {
  if [sourcename] == "vdidaily" {
    grok {
      match => { "file" => "C:\logstashcrt\Output\%{GREEDYDATA:DOY}.log" }
    }
    grok {
      match => ["message" , ["%{HOUR:Hour}:%{MINUTE:Minute},%{INT:GlobalSessionCount:int},%{INT:LibrarySessionCount:int},%{INT:ACCeleratorSessionCount:int},%{INT:HLCVDI01_cpu_usage_average:int},%{INT:HLCVDI01_mem_usage_average:int},%{INT:HLCVDI01_disk_usage_average:int},%{INT:HLCVDI01_net_usage_average:int},%{INT:HLCVDI02_cpu_usage_average:int},%{INT:HLCVDI02_mem_usage_average:int},%{INT:HLCVDI02_disk_usage_average:int},%{INT:HLCVDI02_net_usage_average:int},%{INT:HLCVDI03_cpu_usage_average:int},%{INT:HLCVDI03_mem_usage_average:int},%{INT:HLCVDI03_disk_usage_average:int},%{INT:HLCVDI03_net_usage_average:int},%{INT:HLCVDI04_cpu_usage_average:int},%{INT:HLCVDI04_mem_usage_average:int},%{INT:HLCVDI04_disk_usage_average:int},%{INT:HLCVDI04_net_usage_average:int},%{INT:HLCVDI05_cpu_usage_average:int},%{INT:HLCVDI05_mem_usage_average:int},%{INT:HLCVDI05_disk_usage_average:int},%{INT:HLCVDI05_net_usage_average:int},%{INT:HLCVDI06_cpu_usage_average:int},%{INT:HLCVDI06_mem_usage_average:int},%{INT:HLCVDI06_disk_usage_average:int},%{INT:HLCVDI06_net_usage_average:int},%{INT:HLCVDI07_cpu_usage_average:int},%{INT:HLCVDI07_mem_usage_average:int},%{INT:HLCVDI07_disk_usage_average:int},%{INT:HLCVDI07_net_usage_average:int},%{INT:HLCVDI08_cpu_usage_average:int},%{INT:HLCVDI08_mem_usage_average:int},%{INT:HLCVDI08_disk_usage_average:int},%{INT:HLCVDI08_net_usage_average:int},%{INT:HLCVDI09_cpu_usage_average:int},%{INT:HLCVDI09_mem_usage_average:int},%{INT:HLCVDI09_disk_usage_average:int},%{INT:HLCVDI09_net_usage_average:int}"]]
    }

    mutate {
      add_field => [ "Date","%{DOY} %{Hour}:%{Minute}" ]
    }
    date {
      match => [ "Date" , "yyyy-MM-dd HH:mm"]
      target => "@timestamp"
    }
  }
}

output:
output {
  elasticsearch { hosts => localhost }
}