Hello all. I'll begin by listing the components and versions I've installed; all of the following run on a single FreeBSD 11-p8 box:
Elasticsearch 5.0.2
Logstash 5.0.2
Kibana 5.0.2
Winlogbeat 5.2.2 is installed on a Windows 7 laptop.
I'm looking at using the Elastic Stack to manage logs at my place of work, and I have followed the documentation for sending Windows event logs to Logstash and Elasticsearch. I manually loaded the Winlogbeat index template into Elasticsearch as per the instructions (the exact command I used is shown after the error below) and configured the Logstash config file to accept Beats connections. I then loaded the sample Kibana dashboards, but when I entered the winlogbeat-* index pattern, Kibana complained with the following error:
Discover: Trying to query 3570 shards, which is over the limit of 1000. This limit exists because querying many shards at the same time can make the job of the coordinating node very CPU and/or memory intensive.
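
For reference, this is roughly how I loaded the template (run from the FreeBSD box per the Winlogbeat docs, with winlogbeat.template.json copied over from the laptop):

curl -XPUT 'http://192.168.1.100:9200/_template/winlogbeat' -d@winlogbeat.template.json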
How did I end up with so many shards? Did I need to design the index/shard settings manually? Perhaps naively, I assumed that loading the template would take care of all that.
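In case it helps with diagnosis, I can list the indices and shards that were actually created with the _cat APIs and paste the output here if needed:

curl 'http://192.168.1.100:9200/_cat/indices?v'
curl 'http://192.168.1.100:9200/_cat/shards?v'

My config files are below: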
winlogbeat.yml
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System

name: LAPTOP01

output.logstash:
  hosts: ["192.168.1.100:5044"]
logstash.conf
input {
  beats {
    port => 5044
  }
  file {
    type => "syslog"
    # path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
filter {
  # A filter may change the regular expression used to match a record or a field,
  # alter the value of parsed fields, add or remove fields, etc.
  #
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} (%{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}|%{GREEDYDATA:syslog_message})" }
      add_field => {
        "received_at" => "%{@timestamp}"
        "received_from" => "%{host}"
      }
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => {
          "host" => "%{syslog_hostname}"
          "message" => "%{syslog_message}"
        }
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
    }
    syslog_pri { }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.100:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
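
While writing this up, it occurred to me that the syslog events from the file input have no [@metadata][beat] field, so does the sprintf in the index option come out as a literal %{[@metadata][beat]}-YYYY.MM.dd index name? If that's part of my problem, do I need to split the output with a conditional? Something like this is my guess (the syslog-* index name is made up by me):

output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => ["192.168.1.100:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    # guessed: route non-Beats (syslog) events to their own daily index
    elasticsearch {
      hosts => ["192.168.1.100:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}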
elasticsearch.yml
cluster.name: esearch-cluster
node.name: node-1
path.data: /zdata/elasticsearch-db
path.logs: /zdata/elasticsearch-log
path.scripts: /usr/local/libexec/elasticsearch
network.host: 192.168.1.100
http.port: 9200
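
One last question: if the answer is that I do need to manage shard counts myself, would an extra index template like the sketch below be the right approach on a single-node box? The template name and settings here are just my guess at overriding the 5-shard/1-replica defaults, not something I took from the docs:

curl -XPUT 'http://192.168.1.100:9200/_template/winlogbeat_settings' -d '
{
  "template": "winlogbeat-*",
  "order": 1,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'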
Thanks for any help.