Trouble sending Winlogbeat data to Logstash

After updating to Winlogbeat 5 Alpha 2 and finally updating the template on my ELK stack, I ran Winlogbeat using the ./winlogbeat -e -d "publish" command.

Whilst I see a list of scrolling events, I also see

sync.go:94: ERR Failed to publish events caused by: EOF

...and no data in Kibana! I have deleted the old indices, refreshed the winlogbeat index, and restarted ES for good measure, but still nothing!

UPDATE: I can see data in Kibana, but it's going into my default Logstash index and not the Winlogbeat one. I have therefore removed that index pattern from Kibana to test further and am now trying to recreate the problem.

Further still: I have looked at my winlogbeat index from the Kibana console and removed it, then added the v5 alpha 2 template, but I am not able to create a new winlogbeat index with the appropriate fields.

All info appreciated.

Thanks

What version of Elasticsearch are you using? The winlogbeat.template.json is for ES 5.x and the winlogbeat.template-es2x.json is for ES 2.x. Make sure you install the correct template for your ES version.
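For reference, a sketch of loading the template by hand with curl, assuming Elasticsearch is reachable on localhost:9200 and you are in the directory containing the template file (swap in winlogbeat.template-es2x.json if you are on ES 2.x):

```shell
# Install the Winlogbeat index template into Elasticsearch.
# Use winlogbeat.template-es2x.json instead for ES 2.x clusters.
curl -XPUT 'http://localhost:9200/_template/winlogbeat' \
     -d@winlogbeat.template.json
```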

So your setup is Winlogbeat -> Logstash -> Elasticsearch? Can you share the configuration you are using for Winlogbeat and Logstash? If your Winlogbeat data is going into a logstash-* index then you should check the elasticsearch output options in your Logstash configuration to make sure you specify the index. Here's an example that only publishes events that come from Beats to the given elasticsearch output. It uses the metadata added by Beats to control the destination index and type.

input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}

Ha! ES version 2.2.1, so I have just loaded the correct template and restarted the Winlogbeat clients. Still nothing in Kibana though. Yes, the setup is Winlogbeat -> Logstash -> Elasticsearch.

I have split the inputs/outputs into two separate conf files held within the Logstash conf directory:

 input {
   beats {
     port => 5044
     ssl => true
     ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
     ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    }
 }

 output {
   elasticsearch {
     hosts => ["localhost:9200"]
     sniffing => true
     manage_template => true
     index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
     document_type => "%{[@metadata][type]}"
   }
 }

Are there any errors in the Logstash or Beats logs?

When first setting up Logstash I usually run it in the foreground so I can quickly see the events and any errors.

output {
  stdout { codec => rubydebug { metadata => true } }

  elasticsearch...
}

Then start Logstash in the foreground with:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d
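You can also have Logstash validate the config files first, which catches syntax errors (such as an unbalanced brace) before the pipeline starts. A sketch using the Logstash 2.x flag:

```shell
# Check the config directory for syntax errors without starting the pipeline
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d
```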

Probably not expected but ...

Error: Expected one of #, { at line 50, column 16 (byte 1042) after output {
stdout { codec => rubydebug { metadata => true } }

elasticsearch {:level=>:error}

Did you expand the "elasticsearch..." to config you were using for the elasticsearch output?

Yes, but there was one too many {. I'm using vi and the human eye for edits; clearly not good enough. However, with that fixed, I am seeing the full input (JSON?) and getting many lines of

Beats input: the pipeline is blocked, temporary refusing new connection.

After a reboot of the whole thing, including the server, it looks like this:

An unexpected error occurred! {:error=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use>, :class=>"Errno::EADDRINUSE", :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:118:in `initialize'", "org/jruby/RubyIO.java:853:in `new'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/lumberjack/beats/server.rb:51:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-2.2.7/lib/logstash/inputs/beats.rb:119:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:322:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:321:in `start_inputs'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:172:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:126:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/agent.rb:210:in `execute'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/runner.rb:90:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/runner.rb:95:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24:in `initialize'"], :level=>:warn}

With such a simple Logstash config, the "pipeline is blocked" message probably means there's an issue sending data to Elasticsearch. Any Logstash gurus, please chime in. Is Elasticsearch running and healthy? If you comment out the ES output, does Logstash receive events OK?
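As a quick sanity check, assuming Elasticsearch is on localhost:9200:

```shell
# Confirm Elasticsearch is up and check cluster status (green/yellow/red)
curl 'http://localhost:9200/_cluster/health?pretty'
```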

Do you already have Logstash running? The EADDRINUSE error means something is already listening on that port.
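You can check what is already bound to the Beats port with, for example:

```shell
# Find the process listening on the Beats input port (5044)
sudo netstat -tlnp | grep 5044
# or, equivalently:
sudo lsof -i :5044
```

If an old Logstash instance shows up, stop it (or kill the stale process) before starting Logstash in the foreground again.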

Thanks Andrew, I'm out of time on this but my colleague will hopefully pick it up. Thanks for your help.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.