I am using ES 7.12.0 and APM 7.8.1. Today I wanted to upgrade APM to 7.14.0, but APM could not automatically create the template, so I tried to import it manually and ran into some problems.
./apm-server setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["192.168.10.140:9200"]' -E 'output.elasticsearch.username: elastic' -E 'output.elasticsearch.password: elastic'
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://192.168.10.140:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
First, I executed the following command, but it returned HTTP 401. I need to supply the username and password, but the documentation does not say how to specify them. docker run docker.elastic.co/apm/apm-server:7.14.0 setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["192.168.10.140:9200"]'
Then I used curl to import the JSON. The ES server never returns a result; after a while it returns "curl: (52) empty reply from server".
APM still uses legacy index templates, so to update the template you have to use the following API:
http://192.168.10.140:9200/_template/apm-server-7.14.0
whereas your example points to the new composable index templates (v2) API in Elasticsearch. Our documentation around exporting/importing templates also still uses the legacy v1 APIs:
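To make the distinction concrete, here is a sketch of the two endpoints (the host and credentials are taken from the commands earlier in the thread; a v1 template body loaded into the v2 endpoint is rejected because the schemas differ):

```shell
# Legacy (v1) template API - the format apm-server exports:
curl -u elastic:elastic \
  'http://192.168.10.140:9200/_template/apm-server-7.14.0'

# Composable (v2) index template API - a different JSON schema; PUTting a
# v1 template body here fails with errors such as
# "[index_template] unknown field [mappings]":
curl -u elastic:elastic \
  'http://192.168.10.140:9200/_index_template/apm-server-7.14.0'
```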
When you initially went through the CLI setup, the format used to specify the Elasticsearch username and password was slightly off:
-E 'output.elasticsearch.username: USERNAME'
where it should be:
-E 'output.elasticsearch.username=USERNAME'
That is what caused the unauthorized message.
If you want to bootstrap everything through the CLI again you can use the following:
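A sketch of that bootstrap command, reusing the host and credentials quoted earlier in the thread and the corrected `key=value` syntax for the -E flags:

```shell
# Re-run index-management setup against Elasticsearch directly.
# Note '=' (not ': ') between each setting name and its value.
./apm-server setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["192.168.10.140:9200"]' \
  -E 'output.elasticsearch.username=elastic' \
  -E 'output.elasticsearch.password=elastic'
```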
I tried the v1 API version you mentioned, but it didn't work. The failure is reproducible, and in the end it reports "[index_template] unknown field [mappings]".
1. ./apm-server export template > apm-server.template.json
2. Copy the content to Kibana Dev Tools
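The two steps above can also be done end to end with curl instead of Dev Tools (a sketch, assuming the host and credentials from earlier in the thread, and the legacy _template endpoint that matches the exported v1 format):

```shell
# 1. Export the template that apm-server would install (legacy v1 format).
./apm-server export template > apm-server.template.json

# 2. Load it via the legacy template API. Note the _template (v1) endpoint,
#    not _index_template (v2/composable), which expects a different schema.
curl -u elastic:elastic \
  -H 'Content-Type: application/json' \
  -X PUT 'http://192.168.10.140:9200/_template/apm-server-7.14.0' \
  --data-binary @apm-server.template.json
```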
{"log.level":"error","@timestamp":"2021-08-24T01:19:29.057Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/output.go","file.line":154},"message":"Failed to connect to backoff(elasticsearch(http://192.168.10.141:9200)): Connection marked as failed because the onConnect callback failed: error loading Elasticsearch template: failed to load template: couldn't load template: 400 Bad Request: {\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"No type specified for field [headers]\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"Failed to parse mapping [_doc]: No type specified for field [headers]\",\"caused_by\":{\"type\":\"mapper_parsing_exception\",\"reason\":\"No type specified for field [headers]\"}},\"status\":400}. Response body: {\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"No type specified for field [headers]\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"Failed to parse mapping [_doc]: No type specified for field [headers]\",\"caused_by\":{\"type\":\"mapper_parsing_exception\",\"reason\":\"No type specified for field [headers]\"}},\"status\":400}","service.name":"apm-server","event.dataset":"apm-server","ecs.version":"1.6.0"}
@Martijn_Laarman I tried to upgrade the APM server to 7.14.0, but it didn't work:
{
"error": {
"root_cause": [{
"type": "mapper_parsing_exception",
"reason": "No type specified for field [headers]"
}],
"type": "mapper_parsing_exception",
"reason": "Failed to parse mapping [_doc]: No type specified for field [headers]",
"caused_by": {
"type": "mapper_parsing_exception",
"reason": "No type specified for field [headers]"
}
},
"status": 400
}
Whether I run the DSL statement manually or import the template some other way, the error is always "Failed to parse mapping [_doc]: No type specified for field [headers]". I checked that the DSL contains type: object, so why does it still report an error?
{"log.level":"info","@timestamp":"2021-08-25T01:48:01.445Z","log.origin":{"file.name":"template/load.go","file.line":229},"message":"Existing template will be overwritten, as overwrite is enabled.","service.name":"apm-server","event.dataset":"apm-server","ecs.version":"1.6.0"}
Do you have setup.template.overwrite: true set in your apm-server.yml?
If you have set up the template manually, then you should configure APM Server not to overwrite the template, and you should then no longer see these errors.
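A minimal sketch of the relevant apm-server.yml settings for that case (assuming the template was loaded manually and should be left alone):

```yaml
# Don't let apm-server (re)install or overwrite the manually loaded template.
setup.template.enabled: false
setup.template.overwrite: false
```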
With the new APM integration for Elastic Agent, installation of index templates and pipelines will be done by Fleet, managed through Kibana. We expect this to be much more user-friendly and less error-prone.