Hi all,
I am trying to set up the ELK stack in my QA environment. I have successfully installed Elasticsearch, Kibana, and Logstash on the server, and Filebeat on the client, but I am now facing the error below in Kibana.
No matching indices found: No indices match pattern "filebeat-*"
Error: No matching indices found: No indices match pattern "filebeat-*"
KbnError@http://192.168.101.112/bundles/commons.bundle.js?v=16627:1:5400
IndexPatternMissingIndices@http://192.168.101.112/bundles/commons.bundle.js?v=16627:1:10919
request/<@http://192.168.101.112/bundles/commons.bundle.js?v=16627:1:688797
processQueue@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:132456
scheduleProcessQueue/<@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:133349
$digest@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:144239
$apply@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:147007
done@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:100015
completeRequest@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:104697
createHttpBackend/</xhr.onload@http://192.168.101.112/bundles/vendors.bundle.js?v=16627:58:105435
I have been trying to resolve this, but without any success so far.
Please help me.
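In case it helps with diagnosis, I understand the indices actually present in Elasticsearch can be listed with the _cat API (this assumes Elasticsearch is reachable on the default port 9200 on the server):

curl -XGET 'http://localhost:9200/_cat/indices?v'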
Here are the details of my ELK stack:
Elasticsearch 6.2.4
Logstash 6.2.4
Kibana 6.2.4
Filebeat 6.2.4
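Assuming the standard systemd units from the .deb packages (which matches my paths), I check that the services are running with:

systemctl status elasticsearch kibana logstash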
In elasticsearch.yml, the change is:
network.host: localhost
In kibana.yml, the change is:
server.host: "localhost"
In Logstash, the input, filter, and output configurations are in /etc/logstash/conf.d/:
root@ubuntu-xenial:/# cat /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
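To confirm the beats input is actually listening, I check port 5044 on the Logstash server (plain ss usage, nothing Logstash-specific):

ss -tlnp | grep 5044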
root@ubuntu-xenial:/# cat /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
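For reference, this is the kind of auth.log line that grok pattern is meant to match (a made-up example, not taken from my real logs):

May 21 10:15:01 ubuntu-xenial sshd[1234]: Accepted password for vagrant from 10.0.2.2 port 52334 ssh2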
root@ubuntu-xenial:/# cat /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
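As far as I know, the whole pipeline can be syntax-checked with Logstash's own test flag (paths assume the default package install):

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/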
And on the client side, the changes in filebeat.yml are:
.
.
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/auth.log
    #- c:\programdata\elasticsearch\logs\*

  fields:
    type: syslog

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
.
.
.
and the Logstash output section:
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.101.112:5044"]

  bulk_max_size: 1024

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  ssl.certificate: "/etc/pki/tls/certs/logstash-forwarder.crt"

  # Client Certificate Key
  ssl.key: "/etc/pki/tls/private/logstash-forwarder.key"
#================================ Logging =====================================
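On the client, I believe the Filebeat configuration and its connection to Logstash can be sanity-checked with Filebeat's built-in test subcommands (available in 6.x):

sudo filebeat test config -c /etc/filebeat/filebeat.yml
sudo filebeat test output -c /etc/filebeat/filebeat.yml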
Please help me resolve this.
Thank you in advance.