Since the update to 6, Filebeat doesn't send anything to Logstash

Hi there,

I recently updated Elasticsearch, Kibana and Logstash to version 6.2.4, and everything kept working fine. But when I also updated Filebeat to 6.2.4, no more input was received.
Then I went back to Filebeat version 5 (easy to do with Docker), and input started arriving again.
The following day, the first index was created with version 6, and now Filebeat 5 isn't allowed to send data any more:
"Failed to parse mapping [default]: [include_in_all] is not allowed for indices createn or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."

I don't know what to do. Version 6 doesn't send anything, but also doesn't log any errors in Elasticsearch, Logstash or Filebeat, and version 5 doesn't work any more.
Any ideas on that?

Here is my config.

Logstash:

```
input {
  beats {
    host => "0.0.0.0"
    port => 54322
  }
}
filter {[...]}
output {
  if [type] == "cowrie" {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
}
```

Filebeat:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /cowrie/log/cowrie.json*
      encoding: plain
      input_type: log
      document_type: cowrie
  registry_file: /registry/registryfile
output:
  logstash:
    hosts: ["elastic:54322"]
    worker: 1
shipper:
logging:
  to_syslog: true
  level: info
```
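For reference, a rough sketch of how the same config might look in Filebeat 6.x syntax. This is only an assumption of how the 5.x options map over: `document_type` was removed in 6.x, so a custom field (the name `doc_type` below is made up) is one way to keep the routing information that the Logstash conditional relies on.

```yaml
# Sketch only: a possible Filebeat 6.x equivalent of the 5.x config above.
# `document_type` no longer exists in 6.x; a custom field is used instead.
filebeat.prospectors:
  - type: log
    paths:
      - /cowrie/log/cowrie.json*
    encoding: plain
    fields:
      doc_type: cowrie        # assumed field name for routing in Logstash

filebeat.registry_file: /registry/registryfile

output.logstash:
  hosts: ["elastic:54322"]
  worker: 1

logging.to_syslog: true
logging.level: info
```

With a config like this, Logstash would see the value under `[fields][doc_type]` rather than `[type]`, so the conditional in the output section would need to be adjusted accordingly.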

Please format logs, configs and terminal input/output using the </>-Button or markdown code fences. This forum uses Markdown to format posts. Without proper formatting, it can be very hard to read your posts. Proper formatting helps us to help you.

Config files using YAML are sensitive to formatting and indentation. Without proper formatting it is difficult to spot any errors in your configs.

In filebeat, index names are versioned. This way documents can still be indexed even if the document mappings between versions are incompatible.

I guess the `Failed to parse mapping` message is another, unrelated error of Elasticsearch complaining about your index mappings or mapping templates. If indexing fails, you will see some other mapping failures in your logs (in Logstash or Elasticsearch). This is because events in filebeat 6 are not fully backwards compatible with filebeat 5.<whatever exact version you are using>.

You don't seem to configure a proper, versioned index name in the elasticsearch output in Logstash. The Beats documentation asks you to configure the elasticsearch output like this:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

With this configuration you would create a versioned daily index with the correct beat name, e.g. filebeat-6.2.1-2018.04.20.
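If you want to double-check which daily indices actually exist and how they are named, the cat indices API is handy (host and port are taken from the config above, the index patterns are just examples):

```sh
curl 'http://localhost:9200/_cat/indices/filebeat-*,logstash-*?v'
```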

According to the elasticsearch output documentation, the `index` setting defaults to `logstash-%{+YYYY.MM.dd}`. The default index name is not versioned, and this default setting creates a new index every day. Each index has its own mapping. If you don't have an index template set up, or if fields are missing from the mapping, Elasticsearch will automatically deduce a mapping.
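If in doubt, you can inspect the installed templates and the mapping of the current daily index directly; a sketch (the index name below is just an example matching the date above):

```sh
# list all installed index templates
curl 'http://localhost:9200/_template?pretty'

# show the mapping Elasticsearch is actually using for today's index
curl 'http://localhost:9200/logstash-2018.04.20/_mapping?pretty'
```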

As the documents of filebeat 5.x and 6.x are not fully compatible, you will have to either introduce versioned indices, or wait for the next day, giving Elasticsearch a chance to create a new index with a new mapping. That's why it started working the day after. In case you don't want to change the index names you use, your best bet is to stop filebeat 5.x before midnight and start filebeat 6.x after midnight (given you don't extract timestamps from your logs via logstash).
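Also note that with the Logstash output, Filebeat does not load its index template into Elasticsearch automatically. If you switch to versioned indices, something along these lines can load the filebeat 6.x template manually (version number and host are assumptions):

```sh
# export the bundled template and push it to Elasticsearch
filebeat export template --es.version 6.2.4 > filebeat.template.json
curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_template/filebeat-6.2.4' \
  -d@filebeat.template.json
```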

Attached is a screenshot of the latest Elasticsearch indices.
I think there is a little misunderstanding about my problem:
Filebeat hasn't been working since the day Elasticsearch created the first version 6 index. Since then I get the error messages above when I'm using Filebeat 5.6.
What I don't understand is: why doesn't Filebeat 6 send any data to Logstash?

You cannot index data from filebeat 5.x into an index created for filebeat 6.x, and you cannot index data from filebeat 6.x into an index created for filebeat 5.x.
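If it is unclear which Elasticsearch version created a given index, the index settings show it; for example (the index pattern is just an example, values starting with 6 mean the index was created by 6.x):

```sh
curl 'http://localhost:9200/logstash-*/_settings?pretty&filter_path=*.settings.index.version.created'
```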

How can you tell filebeat is not forwarding to Logstash? Have you checked the filebeat logs for errors? Filebeat just forwards events as-is to Logstash; it is Logstash that finally writes the event to Elasticsearch. The document mapping (compatibility of documents) is enforced in Elasticsearch.

Do you get these messages in Filebeat, Logstash or Elasticsearch? Given your configuration, the error message you posted should only come up in Logstash and/or Elasticsearch.
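One way to verify whether filebeat is actually reading and publishing events is to temporarily raise its log level; a sketch:

```yaml
# temporary debug logging to see whether events are read and published
logging:
  level: debug
  selectors: ["publish"]
```

Alternatively, start it once in the foreground with `filebeat -e -d "publish"` and watch the output while new lines are written to the monitored file.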

I just found out that my index template still contained the "_all" field. I thought that this would be updated automatically.
After deleting / disabling this field, it now works as intended.
Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.