Logstash/ Elastic index date and time match error

Hello Everyone, this is my first post here so I'll try to provide good specifics.

First off, I'm trying to make an ELK stack with Filebeat all hosted on the same VM. It's just a lab environment.
Kibana, Elastic, Logstash, and Filebeat are all communicating with each other on their respective ports. I am getting this error from Logstash:

Dec 11 12:33:42 logger logstash[10390]: [2020-12-11T12:33:42,162][WARN ][logstash.outputs.elasticsearch][main][9d4a249cf7f2baff646eaf522341179928239f5e000572e7e971e0f260a83ac0] Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.10.1-2020.12.11", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4295f476>], :response=>{"index"=>{"_index"=>"filebeat-7.10.1-2020.12.11", "_type"=>"_doc", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index [filebeat-7.10.1-2020.12.11] and [action.auto_create_index] ([.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*]) doesn't match", "index_uuid"=>"_na_", "index"=>"filebeat-7.10.1-2020.12.11"}}}}

So far I've tried handling it by making a processor configuration in Filebeat.yml
This is the top portion of my filebeat.yml:

*Note: I have since commented out the config at the bottom about the ILM policy. I had used that to set up Filebeat initially, when its output was Elasticsearch.
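For context, when Filebeat ships to Logstash instead of directly to Elasticsearch, the output section of filebeat.yml generally looks something like the sketch below. The host and port here are assumptions (they match the beats input port in the Logstash pipeline further down), not my actual redacted values:

```yaml
# Sketch of a filebeat.yml output section pointed at Logstash.
# The Elasticsearch output must be commented out, since Filebeat
# only allows one output to be enabled at a time.
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["localhost:5044"]   # assumed; must match the beats input port
```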

This is the output from Elasticsearch concerning my indexes it has stored locally.

This is my Logstash pipeline.conf:

input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ""
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
      #user => "elastic"
      #password => "secret"
    }
  } else {
    elasticsearch {
      hosts => ""
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

#output {
#  elasticsearch {
#    hosts => [""]
#  }
#}

My main question is:
Is there a filter configuration or a processor I can put in so that my output from Filebeat, passing through Logstash, actually matches the index on Elasticsearch?

Any help would be appreciated. I apologize in advance as I am very new to ELK.

This is really an elasticsearch configuration question. You are trying to send data to an index called filebeat-7.10.1-2020.12.11. By default elasticsearch will create an index when it is written to (documented here). However, your elasticsearch has been configured to only allow the automatic creation of indexes with names that match one of these patterns

  • .monitoring*
  • .watches
  • .triggered_watches
  • .watcher-history*
  • .ml*

You probably want action.auto_create_index set to true, which is the default. I have no suggestions on how you could have ended up with a non-default value. It can be set in elasticsearch.yml.
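To spell that out, the fix is a one-line change in elasticsearch.yml (followed by a node restart). Alternatively, if you want to keep a restrictive pattern list, you can add a Filebeat pattern to it instead; both variants are sketched below:

```yaml
# elasticsearch.yml -- restore the default, allowing any index
# to be auto-created on first write:
action.auto_create_index: true

# ...or, if you prefer to keep an allow-list, extend it to cover
# Filebeat's daily indices (adjust patterns to your setup):
#action.auto_create_index: ".monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,filebeat-*"
```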

First off, thank you so much for your prompt reply.

I had an old non-default config in my elasticsearch.yml from back when I was trying to get Elasticsearch working and thought there was an issue with it. Apparently I did not fully understand what it did.

I set action.auto_create_index: true
and I believe this has solved my problem.

I was definitely led down a bit of a rabbit hole for a while there. When I launched the pipeline yesterday it was actually working, until I enabled the NetFlow module, which was around the time my timezone vs. UTC rolled the clock over. That triggered the creation of a new date-stamped index, which Elasticsearch refused to auto-create, but I thought the problem was further up the pipeline than Elasticsearch.

Thanks again!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.