Unable to get winlogbeat to send to logstash


(Sean) #1

I'm new to the ELK stack and running version 5.3 for all components. I'm able to telnet successfully from the winlogbeat client to the ELK server on ports 5044 and 9200, but I cannot get winlogbeat data to show up in Logstash. Any help would be appreciated. Thanks.

elasticsearch.yml

# ---------------------------------- Memory -----------------------------------
bootstrap.memory_lock: true
# ---------------------------------- Network ----------------------------------
network.host: 192.168.1.35
http.port: 9200
# ---------------------------------- X-Pack -----------------------------------
# Security auditing
xpack.security.audit.enabled: true
xpack.security.audit.outputs: [ index, logfile ]
xpack.security.audit.index.settings:
  index:
    number_of_shards: 1
    number_of_replicas: 1

logstash.conf

# Logstash Configuration File
#
input {
  beats {
    port => 5044
  }
}
#
# filter {
#
output {
  elasticsearch {
    hosts => ["http://192.168.1.35:9200"]
    action => "index"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "logstash_user"
    password => "*********"
  }
  stdout { codec => rubydebug }
}

winlogbeat.yml

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.35:5044"]
  index: winlogbeat

winlogbeat ERROR: "Failed to publish events caused by: read tcp 192.168.1.251:60576->192.168.1.35:5044: i/o timeout"


(Andrew Kroh) #2

Does it work if you comment out the elasticsearch output in Logstash (i.e. test Beats and Logstash in isolation)?


(Sean) #3

Yes, I commented out the Logstash output lines, uncommented the Elasticsearch lines, and also stopped the Logstash service. I'm not sure I mentioned that the server is running Windows 2012 R2 and the client is Windows 7.


(Andrew Kroh) #4

So you tested Beats direct to Elasticsearch and that worked?

Can you test Beats to Logstash where Logstash only outputs to stdout (remove the ES output)? If that works then it's probably an issue with the ES output in Logstash (like a credential/role issues maybe).


(Sean) #5

Yes, beats to Elasticsearch worked, and I believe I was able to get beats to Logstash stdout to work as well, but I will double-check when I get back to the office. I'm traveling today. Thanks for the help.


(Sean) #6

Not sure I ran this correctly but...

I updated winlogbeat.yml to output to Logstash instead of Elasticsearch.

Question: Should the winlogbeat.yml file contain a Logstash username/password? And if so, should it be logstash_user, based upon the X-Pack security lab guide?

output.logstash:
  hosts: ["192.168.1.35:5044"]
  index: winlogbeat

Next, I updated the Logstash config to:

input {
  beats {
    port => 5044
  }
}
output {
  stdout { codec => rubydebug }
}

Then, from an admin PowerShell, I ran the command .\bin\logstash -f logstash.conf, which produced...

C:\ELK\logstash-5.3.0> .\bin\logstash -f logstash.conf
Could not find log4j2 configuration at path /ELK/logstash-5.3.0/config/log4j2.properties. Using default config which logs to console
09:51:38.223 [[.monitoring-logstash]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
09:51:38.238 [[.monitoring-logstash]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
09:51:40.542 [[.monitoring-logstash]-pipeline-manager] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0xad33314 URL:http://logstash_system:xxxxxx@localhost:
9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhos
t:9200/][Manticore::SocketException] Connection refused: connect"}
09:51:40.557 [[.monitoring-logstash]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {
:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x67e453bb URL:http://localhost:9200>]}
09:51:40.557 [[.monitoring-logstash]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
09:51:40.557 [[.monitoring-logstash]-pipeline-manager] INFO logstash.pipeline - Pipeline .monitoring-logstash started
09:51:40.573 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>8,
"pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
09:51:41.403 [[main]-pipeline-manager] INFO logstash.inputs.beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
09:51:41.528 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
09:51:41.669 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
09:51:45.601 [Ruby-0-Thread-5: C:/ELK/logstash-5.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.2.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:222] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
09:51:47.640 [Ruby-0-Thread-5: C:/ELK/logstash-5.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.2.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:222] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x64655cee URL:http://logstash_system:xxxxxx@localhost:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::E
lasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused: connect"}
09:51:48.437 [[main]<beats] ERROR logstash.pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Beats port=>5044, id=>"6f164141d0488801eb88cf02c3ad2332d4a9697d-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_6c4a974d-e114-4f30-99fb-a0cd8c0e4b8f", enable_metric=>true, charset=>"UTF-8">,
host=>"0.0.0.0", ssl=>false, ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, congestion_threshold=>5, target_field_for_codec=>"message", tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_E
CDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60> Error: Address already in use: bind


(Andrew Kroh) #7

Is another instance of Logstash already running? Something is using port 5044.
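On Windows, a quick way to see which process is holding the port is something like the following (the PID in the last column of the netstat output feeds the tasklist filter; `<pid>` is a placeholder):

```shell
netstat -ano | findstr :5044
tasklist /FI "PID eq <pid>"
```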


(Sean) #8

You were correct... even though the service was stopped and I had killed the PowerShell command, it still had open sessions when I ran netstat -ano, so I rebooted. Once the server came back up, I made sure the Logstash service was disabled and then ran the PowerShell command again. To summarize, it seemed to start processing Windows events from the client system, but it still showed this same error from above.

[Ruby-0-Thread-5: C:/ELK/logstash-5.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.2.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:222] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}09:51:47.640 [Ruby-0-Thread-5: C:/ELK/logstash-5.3.0/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.2.6-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:222] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused: connect"}


(Andrew Kroh) #9

That must be from the X-Pack monitoring feature of Logstash, because you don't even have an elasticsearch output in your config file. So I think it is safe to ignore those for the moment; you can come back to that after you get the data path working.
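If the monitoring noise gets distracting while debugging, it can likely be switched off in logstash.yml; this is a sketch assuming X-Pack for Logstash 5.x is installed and uses the setting name from that era's docs:

```yaml
# logstash.yml — disable X-Pack monitoring while troubleshooting the pipeline
xpack.monitoring.enabled: false
```

Re-enable it once the beats → Logstash → ES path works.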

So now that you are getting events into Logstash you can try enabling the elasticsearch output to get the Winlogbeat data into ES.


(Sean) #10

I will change the output to the following and then run the command again:

output {
  elasticsearch {
    hosts => ["http://192.168.1.35:9200"]
    action => "index"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "logstash_user"
    password => "*********"
  }
  stdout { codec => rubydebug }
}

(Sean) #11

I'm still seeing the events in stdout, but not in Kibana.


(Andrew Kroh) #12

Is there anything in the debug log related to the ES output? How about in the Elasticsearch logs? Does this logstash_user have the appropriate roles to allow it to write to winlogbeat-*?


(Sean) #13

Within Kibana, I had configured the logstash_writer role with the logstash-* index, which, if I am not mistaken, should be winlogbeat-* with the write, delete, and create_index privileges. Once I made this change, my cluster seemed to come screeching to a halt, so from Dev Tools I ran DELETE _all to clear everything out. Now I'm at the point of creating a new index pattern, but when I enter "winlogbeat-*", Kibana doesn't find anything, so the winlogbeat logs do not look like they are making it there.

Below is from the ES log...

[2017-04-24T10:52:57,034][INFO ][o.e.l.LicenseService ] [TbK7VUK] license [b3033b96-6eae-4a0f-b5c2-d679c132dc16] mode [gold] - valid
[2017-04-24T10:52:57,050][INFO ][o.e.g.GatewayService ] [TbK7VUK] recovered [949] indices into cluster_state
[2017-04-24T10:53:29,269][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [TbK7VUK] failed to put mappings on indices [[[.security_audit_log-2017.04.24/hPBp6lvaQyOOGdUZHrtwpQ]]], type [event]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
at org.elasticsearch.cluster.service.ClusterService.lambda$onTimeout$4(ClusterService.java:497) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:544) [elasticsearch-5.3.0.jar:5.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-04-24T10:53:41,519][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [TbK7VUK] failed to put mappings on indices [[[winlogbeat-2017.04.24/whgl8QHXQO6_bnFKLWJH5Q]]], type [wineventlog]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
at org.elasticsearch.cluster.service.ClusterService.lambda$onTimeout$4(ClusterService.java:497) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:544) [elasticsearch-5.3.0.jar:5.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-04-24T10:54:12,864][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [TbK7VUK] failed to put mappings on indices [[[winlogbeat-2017.04.24/whgl8QHXQO6_bnFKLWJH5Q]]], type [wineventlog]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
at org.elasticsearch.cluster.service.ClusterService.lambda$onTimeout$4(ClusterService.java:497) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:544) [elasticsearch-5.3.0.jar:5.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-04-24T10:55:15,552][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [TbK7VUK] failed to put mappings on indices [[[winlogbeat-2017.04.24/whgl8QHXQO6_bnFKLWJH5Q]]], type [wineventlog]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
at org.elasticsearch.cluster.service.ClusterService.lambda$onTimeout$4(ClusterService.java:497) ~[elasticsearch-5.3.0.jar:5.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:544) [elasticsearch-5.3.0.jar:5.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
[2017-04-24T10:55:46,881][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [TbK7VUK] failed to put mappings on indices [[[winlogbeat-2017.04.24/whgl8QHXQO6_bnFKLWJH5Q]]], type [wineventlog]


(Andrew Kroh) #14

When Winlogbeat is pointed directly to ES, it needs these privileges for winlogbeat-*: manage_index_templates, monitor, write, and create_index. But since you are going through Logstash, you will be managing the installation of the index templates yourself, so you can drop that privilege.

I think these two commands would be enough to create a user and role for writing to winlogbeat-*.

POST _xpack/security/role/winlogbeat_writer
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": [ "winlogbeat-*" ], 
      "privileges": ["write","create_index"]
    }
  ]
}
POST /_xpack/security/user/winlogbeat_user
{
  "password" : "changeme",
  "roles" : [ "winlogbeat_writer"],
  "full_name" : "Winlogbeat User"
}

Then you can update the Logstash config to use this user when outputting Winlogbeat data.
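For example, the elasticsearch output in logstash.conf might then look like this (a sketch based on the config earlier in the thread; the host and password are placeholders to be replaced with real values):

```conf
output {
  elasticsearch {
    hosts => ["http://192.168.1.35:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "winlogbeat_user"
    password => "changeme"
  }
}
```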


(Andrew Kroh) #15

When you ran DELETE _all you probably deleted more than you intended to. This would have deleted system indices like .kibana which hold Kibana's internal data. So you might be in a bad state now.

To summarize the setup steps, once you have ES, Kibana, and X-Pack installed/working:

  • I would add a new winlogbeat user and role (as described in my last post). (additional info here)
  • Manually install the index template for winlogbeat-*. The index template is provided as a file in the Winlogbeat download. You can install it from Windows using the command listed in the docs, or you can copy it over to a Linux machine and use a curl command like curl -XPUT -u "user:password" http://es:9200/_template/winlogbeat -d@winlogbeat.template.json. Use an account that has privileges to install index templates.
  • Load the sample dashboard for Winlogbeat. Again, use a superuser account.
  • Configure Logstash with an elasticsearch output that uses the new winlogbeat user.
  • Lastly, start shipping Winlogbeat data to Logstash.
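From Windows, the template-install step from the 5.x Winlogbeat docs looks roughly like this (run from the Winlogbeat directory; adjust the host, and add credentials if X-Pack security is enforcing them):

```powershell
# Upload the bundled index template to Elasticsearch
Invoke-WebRequest -Method Put -InFile winlogbeat.template.json -Uri http://192.168.1.35:9200/_template/winlogbeat
```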

(Sean) #16

Andrew, thank you very much for helping me out here. I had no problem getting winlogbeat to send to ES directly and viewing the data using Kibana, even after installing X-Pack. I went back to install Logstash after all of that, and that seems to be where I ran into problems. I am trying to keep X-Pack out of the scenario for now to KISS (Keep It Simple, Stupid). On Windows 2012 R2, I have installed ES, Logstash, and Kibana as services. I am able to receive winlogbeat data from clients when shipping directly to Elasticsearch in the winlogbeat config, but unable to receive data through Logstash... so minus the X-Pack plugin and the users you mention above, my system should be exactly what you last described. I think the problem is either in my config or in the service I created using NSSM (the Non-Sucking Service Manager); see below.

input {
  beats {
    port => 5044
  }
}
#
#filter {}
#
output {
  elasticsearch {
    hosts => ["192.168.1.35:9200"]
  }
}

--nssm service configuration--
Service Name: logstash
Path: C:\ELK\logstash-5.3.0\bin\logstash.bat
Startup Directory: C:\ELK\logstash-5.3.0\bin
Arguments: -f .\logstash.conf
Dependencies: elasticsearch-service-x64


(Sean) #17

OK, so I was expecting the data to show up in ES under the winlogbeat index, but I just found that it is showing up under the "logstash-*" index. At least I'm getting the data now.


(Andrew Kroh) #18

Crawl, walk, run... starting simple is a good approach. In the config given above you have not specified some important parameters for the elasticsearch output. Check out the documentation that shows a barebones Logstash setup for Beats.

With the config you have, data is going to be written to logstash-YYYY.MM.dd indices because of the default index value of the Logstash elasticsearch output.
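To route the events to winlogbeat-* instead, the output needs an explicit index setting; for example, reusing the pattern from earlier in the thread:

```conf
output {
  elasticsearch {
    hosts => ["192.168.1.35:9200"]
    manage_template => false
    # "%{[@metadata][beat]}" resolves to "winlogbeat" for events shipped by Winlogbeat
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```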

If you run a command like curl http://elasticsearch:9200/_cat/indices?v (or just open that URL in a browser, or use the Kibana dev console), you will see which indices exist and how many documents they have. You will probably see logstash-2017.04.XX indices with some non-zero docs.count values.


(Andrew Kroh) #19

P.S. Make sure you manually installed the index template from Winlogbeat before streaming data to winlogbeat-*.


(Sean) #20

Will I need to manually install the index template if the index is already there, being used by other clients sending data directly to ES?