Cannot get my template to work

I am trying to run a simple test to get ES to use a template instead of relying on the default mapping, to make sure the correct mappings are applied for each data type and that not all fields are analyzed. The template does not seem to be applied. I dropped the index and the template, then reloaded using the template below, but nothing changed. Any idea what I am doing wrong here? Regards, Frank.

config file output:

elasticsearch
{
  manage_template => true
  template => "c:/elasticsearch-1.6.0/config/templates/map-test/map-test.json"
  host => "localhost"
  index => "map-test"
  workers => 1
  document_type => "test1"
}

my template:

{
  "template": "map-test",
  "order" : 1,
  "mappings" : {
    "properties" : {
      "test1" : {
        "pk_col" : { "type": "string", "index": "not_analyzed" },
        "dt_type" : { "type": "date", "format": "yyyy MM dd HH:mm:ss:SSS", "index": "not_analyzed" },
        "int_type" : { "type": "integer", "index": "not_analyzed" },
        "float_type" : { "type" : "float", "index": "not_analyzed" },
        "str_type_analyzed" : { "type" : "string", "index" : "analyzed" },
        "str_type_not_analyzed" : { "type" : "string", "index" : "not_analyzed" }
      }
    }
  }
}

It sees the template, per the log file:

{:timestamp=>"2015-08-10T13:57:22.609000-0500", :message=>"Automatic template management enabled", :manage_template=>"true", :level=>:info}
{:timestamp=>"2015-08-10T13:57:22.765000-0500", :message=>"Using mapping template", :template=>{"template"=>"map-test", "order"=>1, "mappings"=>{"properties"=>{"test1"=>{"pk_col"=>{"type"=>"string", "index"=>"not_analyzed"}, "dt_type"=>{"type"=>"date", "format"=>"yyyy MM dd HH:mm:ss:SSS", "index"=>"not_analyzed"}, "int_type"=>{"type"=>"integer", "index"=>"not_analyzed"}, "float_type"=>{"type"=>"float", "index"=>"not_analyzed"}, "str_type_analyzed"=>{"type"=>"string", "index"=>"analyzed"}, "str_type_not_analyzed"=>{"type"=>"string", "index"=>"not_analyzed"}}}}}, :level=>:info}

Without knowing more, my guess is that you are not overwriting the default template.

Your settings, as posted, will not overwrite the existing template, which is named logstash. See the contents of the template already in place:

  curl localhost:9200/_template/logstash?pretty

There are two ways you can address this:

  1. You can overwrite the existing template by adding template_overwrite => true to your elasticsearch output block
  2. You can use the template_name directive to give this particular template a name other than the default logstash
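As a sketch, here is what your output block might look like with both directives added (the values are taken from your posted config; adjust as needed):

```
elasticsearch
{
  manage_template => true
  template => "c:/elasticsearch-1.6.0/config/templates/map-test/map-test.json"
  # Store the template under its own name instead of the default "logstash":
  template_name => "map-test"
  # Replace any template already stored under that name:
  template_overwrite => true
  host => "localhost"
  index => "map-test"
  workers => 1
  document_type => "test1"
}
```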

When I use template_overwrite => true I get the error message below:

:message=>"failed action with response of 400, dropping action: ["index", {:_id=>nil, :_index=>"map-test", :_type=>"test1", :_routing=>nil}, #<LogStash::Event:0x75d177eb @metadata_accessors=#<LogStash::Util::Accessors:0x5dccf42a @store={"retry_count"=>0}, @lut={}>, @cancelled=false, @data={....

One more thing... I want to provide a new template name. I don't want to overwrite/replace the logstash one.

The 400 indicates a permission failure. Do you have some kind of security layer (Shield or otherwise)?

If you don't want to overwrite the logstash template, then what you need is to add template_name => "myname" to your elasticsearch output block. That will put the template under its own name.

Hi Aaron... thanks for helping. I got past the 400 error. Awesome!

When I look at the template, I see the following, which matches what I uploaded. However, when I go to Kibana 4 Discover, it warns that the field is "analyzed" even though the template says it's not. Example: the "str_type_not_analyzed" field.

http://localhost:9200/_template/maptest?pretty

{
  "maptest" : {
    "order" : 1,
    "template" : "maptest-*",
    "settings" : { },
    "mappings" : {
      "properties" : {
        "str_type_analyzed" : {
          "index" : "analyzed",
          "type" : "string"
        },
        "pk_col" : {
          "index" : "not_analyzed",
          "type" : "string"
        },
        "int_type" : {
          "index" : "not_analyzed",
          "type" : "integer"
        },
        "float_type" : {
          "index" : "not_analyzed",
          "type" : "float"
        },
        "str_type_not_analyzed" : {
          "index" : "not_analyzed",
          "type" : "string"
        },
        "dt_type" : {
          "format" : "yyyy MM dd HH:mm:ss:SSS",
          "index" : "not_analyzed",
          "type" : "date"
        }
      }
    },
    "aliases" : { }
  }
}

So it looks like the template is there but not being used. ES is probably still using the default logstash template.

Is Kibana sampling indices named maptest-*? Or is it viewing default Logstash indices?

Also, the mapping may still be tied to an older index. Did you delete and recreate the index after the template was uploaded?
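Templates only take effect when an index is created; an existing index keeps whatever mappings it was created with. Something like the following (assuming your index is named map-test, and that you can afford to lose its data) would force a rebuild against a running cluster:

```
# Drop the old index so its stale mappings go away:
curl -XDELETE 'localhost:9200/map-test?pretty'

# After Logstash recreates the index, check which mappings it received:
curl 'localhost:9200/map-test/_mapping?pretty'
```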

Also, see this example template for ideas on how to use template mappings and wildcards:

{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
       "_all" : {"enabled" : true, "omit_norms" : true},
       "dynamic_templates" : [ {
         "message_field" : {
           "match" : "message",
           "match_mapping_type" : "string",
           "mapping" : {
             "type" : "string", "index" : "analyzed", "omit_norms" : true
           }
         }
       }, {
         "string_fields" : {
           "match" : "*",
           "match_mapping_type" : "string",
           "mapping" : {
             "type" : "string", "index" : "analyzed", "omit_norms" : true,
               "fields" : {
                 "raw" : {"type": "string", "index" : "not_analyzed", "doc_values" : true, "ignore_above" : 256}
               }
           }
         }
       }, {
         "float_fields" : {
           "match" : "*",
           "match_mapping_type" : "float",
           "mapping" : { "type" : "float", "doc_values" : true }
         }
       }, {
         "double_fields" : {
           "match" : "*",
           "match_mapping_type" : "double",
           "mapping" : { "type" : "double", "doc_values" : true }
         }
       }, {
         "byte_fields" : {
           "match" : "*",
           "match_mapping_type" : "byte",
           "mapping" : { "type" : "byte", "doc_values" : true }
         }
       }, {
         "short_fields" : {
           "match" : "*",
           "match_mapping_type" : "short",
           "mapping" : { "type" : "short", "doc_values" : true }
         }
       }, {
         "integer_fields" : {
           "match" : "*",
           "match_mapping_type" : "integer",
           "mapping" : { "type" : "integer", "doc_values" : true }
         }
       }, {
         "long_fields" : {
           "match" : "*",
           "match_mapping_type" : "long",
           "mapping" : { "type" : "long", "doc_values" : true }
         }
       }, {
         "date_fields" : {
           "match" : "*",
           "match_mapping_type" : "date",
           "mapping" : { "type" : "date", "doc_values" : true }
         }
       } ],
       "properties" : {
         "@timestamp": { "type": "date", "doc_values" : true },
         "@version": { "type": "string", "index": "not_analyzed", "doc_values" : true },
         "clientip": { "type": "ip", "doc_values" : true },
         "geoip"  : {
           "type" : "object",
           "dynamic": true,
           "properties" : {
             "ip": { "type": "ip", "doc_values" : true },
             "location" : { "type" : "geo_point", "doc_values" : true },
             "latitude" : { "type" : "float", "doc_values" : true },
             "longitude" : { "type" : "float", "doc_values" : true }
           }
         }
       }
    },
    "nginx_json" : {
      "properties" : {
        "duration" : { "type" : "float", "doc_values" : true },
        "status" : { "type" : "short", "doc_values" : true }
      }
    }
  }
}

I will give this a try.

Aaron, I tried with the above template and am still stuck with the 400 error. If there is something else I need to share, let me know. I can dump the template if needed. Thanks, Frank.

From my config:

output
{
  elasticsearch
  {
    template_overwrite => true
    template_name => "maptest"
    manage_template => true
    template => "c:/elasticsearch-1.6.0/config/templates/maptest/maptest.json"
    host => "localhost"
    index => "map-test"
    workers => 1
  }
}

It reads the template, per my log:
{:timestamp=>"2015-08-10T16:24:58.045000-0500", :message=>"Automatic template management enabled", :manage_template=>"true", :level=>:info}
{:timestamp=>"2015-08-10T16:24:58.232000-0500", :message=>"Using mapping template", :template=>{"template"=>"map-*", "mappings"=>{"default"=>{"_all"=>{"enabled"=>false}, "dynamic_templates"=>#Java::JavaUtil::ArrayList:0x414c1f60, "properties"=>{"pk_col"=>{"type"=>"string", "doc_values"=>true, "index"=>"not_analyzed"}, "dt_type"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "int_type"=>{"type"=>"integer", "doc_values"=>true}, "float_type"=>{"type"=>"float", "doc_values"=>true}, "str_type_analyzed"=>{"type"=>"string", "index"=>"analyzed", "doc_values"=>true}, "str_type_not_analyzed"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}}}}}, :level=>:info}

Then I get the 400 error:
{:timestamp=>"2015-08-10T16:24:59.433000-0500", :message=>"failed action with response of 400, dropping action: ["index", {:_id=>nil, :_index=>"map-test", :_type=>"test1", :_routing=>nil}, #<LogStash::Event:0x6e23664f @metadata_accessors=#<LogStash::Util::Accessors:0x423696a3 @store={"retry_count"=>0}, @lut={}>, @cancelled=false, @data={"message"=>....

I can't resolve a permissions error remotely.

What do you see if you try to delete it via curl?

curl -XDELETE localhost:9200/_template/maptest?pretty

{
"acknowledged": true
}

When I run Logstash it creates the template, but it does not create the index (hence the 400 error).

I can't help with that. That sounds like something between Logstash and Elasticsearch, or within Elasticsearch.

Hi Aaron,

Persistence pays off, or maybe I just needed a break. Anyway... I got it to work :)

I boiled the template down to the bare essentials. It appears that when I added "order": 1 to the template, the data type definitions were picked up, because it takes priority over the default logstash template.
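For anyone comparing, both templates can be inspected side by side against a running cluster. My understanding of the precedence rule is that when two templates match the same index, the one with the higher order value wins on conflicting settings:

```
# The template Logstash installs by default:
curl 'localhost:9200/_template/logstash?pretty'

# My template, with "order": 1:
curl 'localhost:9200/_template/maptest?pretty'
```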

As for the 400 error, it must have been something in the template that Logstash/ES did not like. I have seen quite a few others write about the same thing. I wish the error message provided clues about where to look.

Here is the template that worked:
{
  "template" : "maptest",
  "order": 1,
  "mappings" : {
    "default" : {
      "_all" : { "enabled" : false },
      "properties" : {
        "pk_col" : { "type": "string", "index": "not_analyzed", "doc_values" : true },
        "dt_type" : { "type": "date", "index": "not_analyzed", "format": "yyyy-MM-dd HH:mm:ss.SSS", "doc_values" : true },
        "int_type" : { "type": "integer", "doc_values" : true },
        "float_type" : { "type": "float", "doc_values" : true },
        "str_type_analyzed" : { "type": "string", "index": "analyzed" },
        "str_type_not_analyzed" : { "type": "string", "index": "not_analyzed", "doc_values" : true }
      }
    }
  }
}

The config file:

elasticsearch
{
  template_overwrite => true
  template_name => "maptest"
  manage_template => true
  template => "c:/elasticsearch-1.6.0/config/templates/maptest/maptest.json"
  host => "localhost"
  index => "maptest"
  workers => 1
}
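To double-check that a freshly created index actually picked up the template, the live mapping can be inspected directly (maptest here matches my index name):

```
curl 'localhost:9200/maptest/_mapping?pretty'
```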


To complement your answer: it seems that the settings section in the template is what causes the 400 error. I reproduced your issue on Elasticsearch/Logstash 5.2.1 and got rid of the 400 by removing this settings section:

"settings" : {
    "index" : {
      "refresh_interval" : "5s"
    }

Hope this helps!

1 Like

Removing the settings section got me past the 400 error... thanks!
