I'm having an odd issue with logstash that I've never run into before.
I spun up a new 5.1 stack the other day and for some reason logstash doesn't seem to be creating an index by itself.
I'll get the error:
:response=>{"index"=>{"_index"=>"winlogbeat-2016.12.15", "_type"=>"wineventlog", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"winlogbeat-2016.12.15", "index_uuid"=>"na", "index"=>"winlogbeat-2016.12.15"}}}}
If I manually create the index winlogbeat-2016.12.15 (or whatever the current day's index is), it will then proceed to stream data to ES and everything else works fine.
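For reference, the manual workaround is nothing special, just an empty index creation against the cluster (assuming the default local endpoint):

```shell
# Create today's index by hand; Logstash then bulk-writes into it happily.
curl -XPUT 'http://localhost:9200/winlogbeat-2016.12.15'
```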
Does anyone have any thoughts on that?
For reference, my logstash conf is as follows:
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
This seems OK as far as I can tell, so I'm not entirely sure where I should be looking.
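One thing I still want to rule out is whether automatic index creation has been disabled on the Elasticsearch side (the action.auto_create_index setting), since that would produce exactly this behavior: bulk writes returning index_not_found_exception until the index exists. Assuming the default local endpoint, I'd check with something like:

```shell
# Look for action.auto_create_index in the dynamic cluster settings...
curl 'http://localhost:9200/_cluster/settings?pretty'
# ...and also grep elasticsearch.yml in case it was set statically.
grep auto_create_index /etc/elasticsearch/elasticsearch.yml
```

If it turns out to be set to false (or to a pattern that doesn't match winlogbeat-*), that would explain why only manually created indices accept data.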