I want to index all print jobs in Elasticsearch via Winlogbeat. I installed Winlogbeat on a Windows Server 2008 R2 machine and configured it to ship events through Logstash. I receive events, but not the ones I want.
I added `- name: Microsoft-Windows-PrintService/Operational` to the winlogbeat configuration file.
I am especially interested in event ID 307.
How can I check what is not working, or is there a fix?
> ###################### Winlogbeat Configuration Example ##########################
> # This file is an example configuration file highlighting only the most common
> # options. The winlogbeat.full.yml file from the same directory contains all the
> # supported options with more comments. You can use it as a reference.
> #
> # You can find the full configuration reference here:
> # https://www.elastic.co/guide/en/beats/winlogbeat/index.html
> #======================= Winlogbeat specific options ==========================
> # event_logs specifies a list of event logs to monitor as well as any
> # accompanying options. The YAML data type of event_logs is a list of
> # dictionaries.
> #
> # The supported keys are name (required), tags, fields, fields_under_root,
> # forwarded, ignore_older, level, event_id, provider, and include_xml. Please
> # visit the documentation for the complete details of each option.
> # https://go.es.io/WinlogbeatConfig
> winlogbeat.event_logs:
>   - name: Application
>     ignore_older: 72h
>   - name: Security
>   - name: System
>   - name: Microsoft-Windows-PrintService/Operational
> #================================ General =====================================
> # The name of the shipper that publishes the network data. It can be used to group
> # all the transactions sent by a single shipper in the web interface.
> #name:
> # The tags of the shipper are included in their own field with each
> # transaction published.
> #tags: ["service-X", "web-tier"]
> # Optional fields that you can specify to add additional information to the
> # output.
> #fields:
> # env: staging
> #================================ Outputs =====================================
> # Configure what outputs to use when sending the data collected by the beat.
> # Multiple outputs may be used.
> #-------------------------- Elasticsearch output ------------------------------
> #output.elasticsearch:
> # Array of hosts to connect to.
> #hosts: ["localhost:9200"]
> # Optional protocol and basic auth credentials.
> #protocol: "https"
> #username: "elastic"
> #password: "changeme"
> #----------------------------- Logstash output --------------------------------
> output.logstash:
>   # The Logstash hosts
>   hosts: ["172.17.XXX.XX:5044"]
>   # Optional SSL. By default is off.
>   # List of root certificates for HTTPS server verifications
>   #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
>   # Certificate for SSL client authentication
>   #ssl.certificate: "/etc/pki/client/cert.pem"
>   # Client Certificate Key
>   #ssl.key: "/etc/pki/client/cert.key"
> #================================ Logging =====================================
> # Sets log level. The default log level is info.
> # Available log levels are: critical, error, warning, info, debug
> logging.level: info
> # At debug level, you can selectively enable logging only for some components.
> # To enable all selectors use ["*"]. Examples of other selectors are "beat",
> # "publish", "service".
> #logging.selectors: ["*"]
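If only event 307 is needed, the event_log entry could also be narrowed with the `event_id` option listed among the supported keys in the config comments above; a minimal sketch:

```yaml
winlogbeat.event_logs:
  - name: Microsoft-Windows-PrintService/Operational
    event_id: 307
```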
Winlogbeat version: 5.3.0
Where can I see errors in Winlogbeat?
I do successfully receive other events from this server in Elasticsearch.
> 2017-05-17T18:33:32+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=1458 libbeat.logstash.published_and_acked_events=1 libbeat.publisher.published_events=1 msg_file_cache.SecurityHits=1 published_events.Security=1 published_events.total=1 uptime={"server_time":"2017-05-17T16:33:32.4590229Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h46m31.3755516s","uptime_ms":"6391375551"}
> 2017-05-17T18:33:33+02:00 INFO EventLog[Microsoft-Windows-PrintService/Operational] Successfully published 5 events
> 2017-05-17T18:33:53+02:00 ERR Failed to publish events caused by: write tcp 172.17.208.42:55412->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:33:53+02:00 INFO Error publishing events (retrying): write tcp 172.17.208.42:55412->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:33:54+02:00 INFO EventLog[Security] Successfully published 1 events
> 2017-05-17T18:34:02+02:00 INFO EventLog[Security] Successfully published 7 events
> 2017-05-17T18:34:02+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=4 libbeat.logstash.publish.read_bytes=24 libbeat.logstash.publish.write_bytes=4055 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=13 libbeat.logstash.published_but_not_acked_events=1 libbeat.publisher.published_events=13 msg_file_cache.Microsoft-Windows-PrintService/OperationalHits=4 msg_file_cache.Microsoft-Windows-PrintService/OperationalMisses=1 msg_file_cache.Microsoft-Windows-PrintService/OperationalSize=1 msg_file_cache.SecurityHits=8 msg_file_cache.SystemSize=-1 published_events.Microsoft-Windows-PrintService/Operational=5 published_events.Security=8 published_events.total=13 uptime={"server_time":"2017-05-17T16:34:02.683447Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h47m1.5999757s","uptime_ms":"6421599975"}
> 2017-05-17T18:34:32+02:00 INFO Non-zero metrics in the last 30s: uptime={"server_time":"2017-05-17T16:34:32.908855Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h47m31.8253837s","uptime_ms":"6451825383"}
> 2017-05-17T18:35:03+02:00 INFO Non-zero metrics in the last 30s: uptime={"server_time":"2017-05-17T16:35:03.0388389Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h48m1.9553676s","uptime_ms":"6481955367"}
> 2017-05-17T18:35:24+02:00 ERR Failed to publish events caused by: write tcp 172.17.208.42:55728->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:35:24+02:00 INFO Error publishing events (retrying): write tcp 172.17.208.42:55728->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:35:25+02:00 INFO EventLog[Security] Successfully published 1 events
> 2017-05-17T18:35:33+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=1464 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=1 libbeat.logstash.published_but_not_acked_events=1 libbeat.publisher.published_events=1 msg_file_cache.SecurityHits=1 published_events.Security=1 published_events.total=1 uptime={"server_time":"2017-05-17T16:35:33.1198629Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h48m32.0363916s","uptime_ms":"6512036391"}
> 2017-05-17T18:36:03+02:00 INFO Non-zero metrics in the last 30s: msg_file_cache.Microsoft-Windows-PrintService/OperationalSize=-1 uptime={"server_time":"2017-05-17T16:36:03.2008869Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h49m2.1174156s","uptime_ms":"6542117415"}
> 2017-05-17T18:36:33+02:00 INFO Non-zero metrics in the last 30s: uptime={"server_time":"2017-05-17T16:36:33.2819109Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h49m32.1984396s","uptime_ms":"6572198439"}
> 2017-05-17T18:36:54+02:00 ERR Failed to publish events caused by: write tcp 172.17.208.42:56009->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:36:54+02:00 INFO Error publishing events (retrying): write tcp 172.17.208.42:56009->172.17.209.70:5044: wsasend: An existing connection was forcibly closed by the remote host.
> 2017-05-17T18:36:55+02:00 INFO EventLog[Security] Successfully published 1 events
> 2017-05-17T18:37:03+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=710 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=1 libbeat.logstash.published_but_not_acked_events=1 libbeat.publisher.published_events=1 msg_file_cache.SecurityHits=1 published_events.Security=1 published_events.total=1 uptime={"server_time":"2017-05-17T16:37:03.3629349Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h50m2.2794636s","uptime_ms":"6602279463"}
> 2017-05-17T18:37:26+02:00 INFO EventLog[Security] Successfully published 1 events
> 2017-05-17T18:37:33+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_bytes=6 libbeat.logstash.publish.write_bytes=1537 libbeat.logstash.published_and_acked_events=1 libbeat.publisher.published_events=1 msg_file_cache.SecurityHits=1 published_events.Security=1 published_events.total=1 uptime={"server_time":"2017-05-17T16:37:33.4439589Z","start_time":"2017-05-17T14:47:01.0834713Z","uptime":"1h50m32.3604876s","uptime_ms":"6632360487"}
> 2017-05-17T18:37:37+02:00 INFO EventLog[Security] Successfully published 2 events
> 2017-05-17T18:37:40+02:00 INFO EventLog[Security] Successfully published 1 events
> 2017-05-17T18:37:48+02:00 INFO EventLog[Security] Successfully published 1 events
I guess your PowerShell is too old for those commands. Basically, I wanted to see how many events were in the log and whether or not it was enabled.
But based on the Winlogbeat logs, I'd say there is a problem publishing to Logstash. So I would check whether there are errors in the Logstash logs. What version of Logstash are you running?
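For reference, the same checks can be done with the built-in `wevtutil` tool, which is available on Server 2008 R2 (the channel name below is taken from your config):

```
:: Show the channel configuration (check that it reports "enabled: true")
wevtutil gl Microsoft-Windows-PrintService/Operational

:: Show log status, including numberOfLogRecords (the event count)
wevtutil gli Microsoft-Windows-PrintService/Operational
```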
Hmm, as far as I understand it: we already have a `user` field, so it can't create the same field again in Elasticsearch?
Do you know how I can fix that? I probably have to do that in Logstash?
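One common way to resolve such a mapping conflict on the Logstash side is to rename the incoming field before it reaches Elasticsearch. A sketch, assuming the conflicting field is called `user` and that the `log_name` field from Winlogbeat identifies the channel (adjust both names to your actual events):

```
filter {
  if [log_name] == "Microsoft-Windows-PrintService/Operational" {
    mutate {
      # rename to avoid clashing with the existing "user" mapping
      rename => { "user" => "print_user" }
    }
  }
}
```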