Index stops accepting writes after applying filebeat index template

So I'm using AWS Elasticsearch (currently at version 6.0.1). I use Filebeat (6.1.0) to send logs to Logstash (6.2.0), which uses the https://github.com/awslabs/logstash-output-amazon_es output plugin to write to an IAM-protected ES cluster.

When I have no index template, documents get written to ES fine, but I get all those silly ".keyword" fields. So I found the docs on manually loading an index template (https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html).
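(For context on where those ".keyword" fields come from: with no template, Elasticsearch's default dynamic mapping indexes each string field twice, as text plus a keyword sub-field, roughly like this:)

"myfield": {
  "type": "text",
  "fields": {
    "keyword": { "type": "keyword", "ignore_above": 256 }
  }
}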

I export the index template from filebeat using:
filebeat export template --es.version=6.0.1 > filebeat-6.1.0.template.json

(Here's a gist with that template.json https://gist.github.com/djcrabhat/ce74ca10d74748a657f8f5c45c4654f1)

Then I PUT that index template to ES and delete the filebeat-* indexes so they get recreated with the new template. The new index gets created, but even though I turned on debug logging and can see Logstash flushing data to ES, the index just sits there with 0 documents:
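For reference, the upload and reset steps were roughly the following (the host is the same placeholder as in my Logstash config further down, the template name is illustrative, and with IAM-protected ES these requests would also need to be signed):

curl -XPUT "https://MY-ES-CLUSTER.us-west-2.es.amazonaws.com/_template/filebeat-6.1.0" \
  -H "Content-Type: application/json" \
  -d @filebeat-6.1.0.template.json
curl -XDELETE "https://MY-ES-CLUSTER.us-west-2.es.amazonaws.com/filebeat-*"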

GET /_cat/indices?v
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
[...]
yellow open   filebeat-6.1.0-2018.02.14   Zz8Jkf1XTyKxor-mL9fT-A   3   1          0            0       699b           699b

If I delete the index template, the logs start flowing into the index again:
yellow open filebeat-6.1.0-2018.02.14 3aR0_cDHQIeYU6UR0CC0ww 5 1 6919 0 3.5mb 3.5mb

Any idea what's happening here? I'm testing this on a single node ES cluster while I take ELK for a spin.

What does your Elasticsearch output look like in the Logstash config? Are you by any chance setting an incorrect document type that clashes with your index template (Elasticsearch 6.x can only have one type per index)? Is there anything in the Elasticsearch logs?

That's kind of the tough part: since this is AWS-hosted ES, I don't have access to the logs.

My Logstash config looks like this:

input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    add_field => {
        "myfield" => "stuff"
    }
  }
  if [fileset][module] == "apache2" {
    if [fileset][name] == "access" {
      grok {
        match => { "message" => ["%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
          "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -" ] }
        remove_field => "message"
      }
      mutate {
        add_field => { "read_timestamp" => "%{@timestamp}" }
      }
      date {
        match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
        remove_field => "[apache2][access][time]"
      }
      useragent {
        source => "[apache2][access][agent]"
        target => "[apache2][access][user_agent]"
        remove_field => "[apache2][access][agent]"
      }
      geoip {
        source => "[apache2][access][remote_ip]"
        target => "[apache2][access][geoip]"
      }
    }
    else if [fileset][name] == "error" {
      grok {
        match => { "message" => ["\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{LOGLEVEL:[apache2][error][level]}\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message]}",
          "\[%{APACHE_TIME:[apache2][error][timestamp]}\] \[%{DATA:[apache2][error][module]}:%{LOGLEVEL:[apache2][error][level]}\] \[pid %{NUMBER:[apache2][error][pid]}(:tid %{NUMBER:[apache2][error][tid]})?\]( \[client %{IPORHOST:[apache2][error][client]}\])? %{GREEDYDATA:[apache2][error][message1]}" ] }
        pattern_definitions => {
          "APACHE_TIME" => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"
        }
        remove_field => "message"
      }
      mutate {
        rename => { "[apache2][error][message1]" => "[apache2][error][message]" }
      }
      date {
        match => [ "[apache2][error][timestamp]", "EEE MMM dd H:m:s YYYY", "EEE MMM dd H:m:s.SSSSSS YYYY" ]
        remove_field => "[apache2][error][timestamp]"
      }
    }
  }
}

output {
  amazon_es {
    hosts => ["MY-ES-CLUSTER.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

I think the default type used by the Elasticsearch output plugin might be "logs". Check what is set in an index created with the old template, and then update either the document_type in the plugin or the index template accordingly.
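If the amazon_es plugin accepts the same document_type option as the stock elasticsearch output (I haven't verified that, so it's worth checking the plugin's docs), the Logstash-side fix would look something like this:

output {
  amazon_es {
    hosts => ["MY-ES-CLUSTER.us-west-2.es.amazonaws.com"]
    region => "us-west-2"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # match the mapping type declared in the index template
    document_type => "doc"
  }
}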

You're right, I see "logs" in the "_type" field in my working configuration. But I don't see anything in my template (https://gist.github.com/djcrabhat/ce74ca10d74748a657f8f5c45c4654f1) that would be messing with that or "doc" or anything.

I've opened up a support case with AWS to see if they can give me copies of the logs, as this would be much easier to debug if I could see what ES was logging.

{
  "index_patterns": [
    "filebeat-6.1.0-*"
  ],
  "mappings": {
    "doc": {

You seem to have "doc" set as the mapping type in your index template. You can see this described better in the docs.

Oh thank you thank you thank you! That was exactly my problem. I changed that property from "doc" to "logs" and it worked perfectly! Sorry it took me so long to understand what you were suggesting, it all makes much more sense now.
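For anyone else who hits this, the fix was just renaming the mapping type key in the template so it matches the type Logstash actually writes:

"mappings": {
  "logs": {
    ...
  }
}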

Maybe I'll submit a PR to https://github.com/elastic/beats/blob/6.2/libbeat/docs/shared-template-load.asciidoc or to https://github.com/awslabs/logstash-output-amazon_es to make this a little more explicit, so people are on the lookout for the Logstash output setting that "logs" type.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.