I am using the following Logstash pipeline (in pipelines.yml) to import an index that was exported from Elasticsearch:
- pipeline.id: import-process
  pipeline.workers: 4
  config.string: |
    input {
      file {
        path => "/usr/share/logstash/export/export_metricbeat-7.17.7-2023.03.21-000001.json"
        codec => "json"
        mode => "read"
        exit_after_read => true
      }
    }
    output {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => true
        template => "/usr/share/logstash/config/metricbeat.template.json"
        template_name => "metricbeat-7.17.7"
        template_overwrite => true
        index => "metricbeat-7.17.7-2023.03.21-000001"
        ssl => false
      }
    }
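For reference, the installed template can be inspected from Kibana Dev Tools. I am not sure whether Logstash installs it through the legacy or the composable endpoint, so checking both (this is just a diagnostic sketch, the template name comes from the config above):

GET _template/metricbeat-7.17.7
GET _index_template/metricbeat-7.17.7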
The metricbeat template is as follows:
{
  "index_patterns": [
    "metricbeat*"
  ],
  "settings": {
    "index": {
      "mapping": {
        "total_fields": {
          "limit": "10000"
        }
      }
    }
  }
}
It just increases the limit on the number of fields for the index. When I run Logstash, I can see the template gets loaded:
[2023-03-22T08:05:17,061][INFO ][logstash.outputs.elasticsearch] Installing Elasticsearch template {:name=>"metricbeat-7.17.7"}
I can see the template in Kibana. However, when Logstash starts importing documents, it issues warnings about exceeding the field limit:
"reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [10
00] has been exceeded while adding new fields [1]"}}}}}
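Note that the limit reported in the error is 1000 (the Elasticsearch default), not the 10000 from my template. To check whether the higher limit ever reached the index, the live setting can be queried like this (a diagnostic sketch; if it returns an empty body, the explicit setting was never applied):

GET metricbeat-7.17.7-2023.03.21-000001/_settings/index.mapping.total_fields.limit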
However, if I create the template manually in Kibana (not a legacy template, just a regular index template), with the same settings and no other options selected, the import runs without the error.
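For what it's worth, I believe the Dev Tools equivalent of the template I created manually in Kibana is roughly the following; this is my reconstruction rather than something I captured from Kibana. In the composable format the settings are nested under a top-level "template" key, unlike the legacy-style file I am passing to Logstash:

PUT _index_template/metricbeat-7.17.7
{
  "index_patterns": [
    "metricbeat*"
  ],
  "template": {
    "settings": {
      "index.mapping.total_fields.limit": 10000
    }
  }
}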