Loaded template is not appearing in Elastic

Hello,
I am currently trying to load an existing template from the previous Elastic version (2.*) onto new nodes running version 5.*.
Using this command:

C:\Users\admin> Invoke-RestMethod -Uri 'http://localhost:9200/_template/nlog' -Method 'PUT' -infile c:\nlog.json

and getting the result:

But it is absent in Management, and I cannot see it in Dev Tools when I check with "GET _cat/templates".

Please advise what is wrong?

Thanks.

Please do not post screen shots of text as it can be very hard to read (it is in this case). Mappings have changed quite a bit between Elasticsearch 2.x and 5.x. What does the index template you are trying to store look like?
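(For reference, the largest mapping change between 2.x and 5.x is that the string type was split into text and keyword. A 2.x field mapped like this, with an illustrative field name:

```json
"message": { "type": "string", "index": "not_analyzed" }
```

would be written in 5.x as:

```json
"message": { "type": "keyword" }
```

which is why a 2.x template usually cannot be loaded into 5.x unchanged.)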

It is the default index template which is located in the Logstash folder: "elasticsearch-template-es5x.json".
Sometimes it can be created, sometimes not.
But currently I can not create it, neither via Invoke-RestMethod:

Invoke-RestMethod -Uri 'http://localhost:9200/_template/nlog' -Method 'PUT' -infile c:\elasticsearch-template-es5x.json

nor via Dev Tools.

put _template/logstash-*

{
  "template" : "logstash-*",
  "version" : 50001,
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "norms" : false},
      "dynamic_templates" : [ {
        "message_field" : {
          "path_match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text",
            "norms" : false
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "text", "norms" : false,
            "fields" : {
              "keyword" : { "type": "keyword" }
            }
          }
        }
      } ],
      "properties" : {
        "@timestamp": { "type": "date", "include_in_all": false },
        "@version": { "type": "keyword", "include_in_all": false },
        "geoip" : {
          "dynamic": true,
          "properties" : {
            "ip": { "type": "ip" },
            "location" : { "type" : "geo_point" },
            "latitude" : { "type" : "half_float" },
            "longitude" : { "type" : "half_float" }
          }
        }
      }
    }
  }
}

Have you tried giving it a name that does not contain a wildcard?

Yes, I have tried. The result is the same:

put _template/logstash-2017.03.06

{
  "error": {
    "root_cause": [
      {
        "type": "not_x_content_exception",
        "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
      }
    ],
    "type": "not_x_content_exception",
    "reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
  },
  "status": 500
}
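A not_x_content_exception on a PUT usually means Elasticsearch could not recognise the request body as JSON at all. On Windows, a common cause is the template file being saved as UTF-16 or with a byte-order mark, which some PowerShell cmdlets do by default. A quick, hypothetical sanity check (the function name is mine, not part of any Elastic tooling):

```python
import json

# Byte-order marks that make a file body unrecognisable to
# Elasticsearch's content-type detection.
BOMS = {
    b'\xff\xfe': 'UTF-16 LE',
    b'\xfe\xff': 'UTF-16 BE',
    b'\xef\xbb\xbf': 'UTF-8 with BOM',
}

def diagnose_template_bytes(raw: bytes) -> str:
    """Return 'ok' if raw looks like a plain-UTF-8 JSON document,
    otherwise a hint about why Elasticsearch would reject it."""
    for bom, name in BOMS.items():
        if raw.startswith(bom):
            return f'file starts with a {name} BOM; re-save as plain UTF-8'
    try:
        json.loads(raw.decode('utf-8'))
    except (UnicodeDecodeError, ValueError) as exc:
        return f'body is not valid JSON: {exc}'
    return 'ok'
```

Reading the template file with `open(path, 'rb')` and passing the bytes through a check like this would show whether the file itself, rather than the request, is the problem.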

I managed to create the logstash-* index (the application created it for me, not me personally), but only after deleting the whole Logstash installation and reinstalling it from scratch.
Guys, I am sorry to say, but your product is too difficult and complicated to use. Even after many hours of reading the manuals it is still unclear how it works.

Now I can see this index with an x_ prefix (yellow open logstash-2017.03.06 x_jbWsQiSJmusBv7GQCEVg).
What does it mean?

And how should I create the rest of the indices? I need two more indices for other applications, but I can not create them because of this error.

Updated to ELK 5.2.2 and got the same error.
Guys, what is happening? Why can I not create indices following your docs? https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html

Please explain this, because I do not understand why it does not work:

[2017-03-06T21:31:40,719][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-03-06T21:31:40,849][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-03-06T21:31:40,864][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x7250596a URL://localhost:9200>]}
[2017-03-06T21:31:40,870][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-03-06T21:31:40,874][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-06T21:31:40,927][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

As I can see from the Logstash log file, its template was installed. Then please tell me why I still can not see this template installed in Kibana/Elasticsearch?


Have you indexed any data into Elasticsearch? Can you provide the output from the cat indices API?


I have indexed some data, and it looks like the time field appeared, so now I can create the index template.

yellow open logstash-2017.03.06 UXxhdZsqT2a_4uNZgCJIOQ 5 1 3418 0 885.9kb 885.9kb

BUT! It works only in the case of data from Azure blob storage, where I have many log files (thousands of logs). But if I index only one small file locally, it still doesn't work; it does not appear. Is there any limit on the amount of data which should be indexed to make the index work? I also thought that I should first create an index and only then start to index data, not vice versa. And should I have the same config on all the ELK nodes, or can I use different configs on each node for different input sources and different indices, to avoid one big general comprehensive config on all the nodes, which is not so easy to make work? Thanks in advance!

It seems like you're confusing indexes with index templates, which could partly explain why you're having difficulties.

Is there any limit on the amount of data which should be indexed to make the index work?

No.

I also thought that I should first create an index and only then start to index data, not vice versa.

Indexes will automatically be created as needed based on the documents you're indexing.
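To illustrate the distinction: an index template's body contains an index pattern (the "template" field), and Elasticsearch applies the template automatically whenever a newly created index name matches that pattern. A simplified sketch of the matching idea (the real logic lives inside Elasticsearch; this only illustrates the concept):

```python
import fnmatch

def template_applies(template_body: dict, index_name: str) -> bool:
    """Check whether an index template's pattern matches a new index name."""
    pattern = template_body["template"]  # e.g. "logstash-*"
    return fnmatch.fnmatchcase(index_name, pattern)

# The template from earlier in this thread, trimmed to its pattern:
nlog_template = {"template": "logstash-*"}

print(template_applies(nlog_template, "logstash-2017.03.06"))  # True
print(template_applies(nlog_template, "my-other-index"))       # False
```

So the template is stored once under a name (logstash, nlog, etc.), and indexes like logstash-2017.03.06 simply pick it up at creation time; there is no need to PUT a template per day.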

And should I have the same config on all the ELK nodes, or can I use different configs on each node for different input sources and different indices, to avoid one big general comprehensive config on all the nodes, which is not so easy to make work?

Are you talking about the Logstash configuration? If yes, then it's up to you. Regardless of what you choose things will get complex but in different ways.
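For what it's worth, a single shared Logstash config can also route different inputs to different indices with conditionals, so per-node configs are not strictly required. A hypothetical sketch (the type value and index names are made up):

```
output {
  if [type] == "nlog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nlog-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
```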

Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.