I've noticed that my indices aren't being added to Kibana anymore. I have changed my Logstash conf files in the meantime, but even then I would only expect grok parse failures, not my index to disappear entirely.
This is the config:
input {
  udp {
    port => 5514
    type => "syslog"
  }
  tcp {
    port => 5514
    type => "syslog"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Can you verify that the indices exist in Elasticsearch? And do you have appropriate index patterns set up in Kibana that match those Elasticsearch indices? If the data is in Elasticsearch, Kibana should be able to see it; if the data is not in Elasticsearch, you likely have an issue on the Logstash side.
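A quick way to check is to ask Elasticsearch for its index list directly. Assuming Elasticsearch is reachable on localhost:9200 (adjust the host and port to your setup):

# List all indices with health, document counts and sizes
curl 'localhost:9200/_cat/indices?v'

# Or narrow the list down to the daily Logstash indices
curl 'localhost:9200/_cat/indices/logstash-*?v'

If the expected indices are missing from that list, the problem is upstream of Kibana.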
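One more thing worth checking: the config you posted has no output section at all, so as posted nothing would ever be sent to Elasticsearch. If your elasticsearch output lives in a separate conf file, make sure Logstash is still loading it after your recent changes. For reference, a minimal sketch of such an output (the hosts value and index name below are assumptions, adjust them to your environment):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

When Logstash is pointed at a config directory it loads every file in it, so a moved or renamed output file is a common way for events to silently go nowhere.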