In Nginx I've converted my log_format to JSON, and I'm shipping the logs via Filebeat to Logstash. I'm trying to set some of my fields to not_analyzed, but nothing I do is working: almost all the fields from the nginx logs end up analyzed. Since I'm just testing out ELK, I've been repeatedly deleting all the indexes and starting from scratch. Maybe I'm misunderstanding the template options of the Logstash elasticsearch output.
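For context, this is roughly the kind of JSON log_format I mean (a sketch, not my exact config; the field names here are illustrative, and `escape=json` requires nginx 1.11.8+):

```nginx
# Hypothetical JSON access-log format; the variables are standard nginx ones.
log_format json_combined escape=json
  '{'
    '"time_local":"$time_local",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_user_agent":"$http_user_agent"'
  '}';

access_log /var/log/nginx/access.log json_combined;
```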
My logstash conf:
elasticsearch {
  hosts => ["xx.xx.xx.xx:9200"]
  sniffing => true
  manage_template => false
  index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  document_type => "%{[@metadata][type]}"
  template_name => "filebeat-*"
  template => "/etc/logstash/mappings/filebeat.json"
}
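Between tests I wipe everything and inspect what Elasticsearch actually has installed, with commands along these lines (assuming Elasticsearch is reachable on localhost:9200; adjust the host to match):

```
# Delete the filebeat indices so mappings are re-created from scratch.
curl -XDELETE 'http://localhost:9200/filebeat-*'

# Check which index templates Elasticsearch has actually installed.
curl -XGET 'http://localhost:9200/_template/filebeat-*?pretty'

# Inspect the mapping of a freshly created index.
curl -XGET 'http://localhost:9200/filebeat-*/_mapping?pretty'
```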
I'm using the default JSON template that ships with Filebeat:
{
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
        }
      },
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "message": {
          "type": "string",
          "index": "analyzed"
        },
        "offset": {
          "type": "long",
          "doc_values": true
        }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}