I am trying to upgrade from logstash 1.4.2 to logstash 2.1.1 and I am running into a problem whenever a new index gets created. I have reduced all configs to the minimum and now have the following:
The most important piece is my index_template.json:
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "5s",
    "index.number_of_shards" : 5,
    "index.number_of_replicas" : 1
  },
  "mappings" : {
    "_default_" : {
      "_all" : { "enabled" : true },
      "dynamic_templates" : [ {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : { "type" : "string", "index" : "not_analyzed", "ignore_above" : 256 }
            }
          }
        }
      } ],
      "properties" : {
        "@version" : { "type" : "string", "index" : "not_analyzed" },
        "receive_time" : { "type" : "string" }
      }
    }
  }
}
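Logstash uploads this file as an index template on startup; to inspect what is actually installed in the cluster, you can query the template API (a sketch, assuming the elasticsearch output's default template name "logstash" and the hostname from the output config below):

curl -s 'http://elasticsearch:9200/_template/logstash?pretty'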
I am forcing "receive_time" to be a string in the template. However, with logstash 2.1.1 this field sometimes ends up mapped as "string" and sometimes as "date".
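To check which type the field actually got, the get-field-mapping API works on ES 1.x (hostname again taken from the output config below):

curl -s 'http://elasticsearch:9200/logstash-*/_mapping/field/receive_time?pretty'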
My logstash config:
input {
  file {
    path => "/tmp/logfile"
    sincedb_path => "/tmp/logfile.sincedb"
    start_position => "end"
  }
}

filter {
  mutate {
    # copy timestamp; in the index_template this field is forced to be a "string"
    add_field => [ "receive_time", "%{@timestamp}" ]
  }
}

output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    flush_size => 100
    workers => 2
    template => "index_template.json"
    template_overwrite => true
  }
}
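To see the literal string that "%{@timestamp}" expands to for each event, a temporary additional output section can be added for debugging (a sketch using the standard rubydebug codec; Logstash merges multiple output sections):

output {
  stdout { codec => rubydebug }
}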
How to reproduce:
- install elasticsearch 1.5.1 (e.g. "docker pull elasticsearch:1.5.1") and run it
- run logstash 2.1.1 with the above config and index_template
- create log entries:
  for i in $(seq 1000); do echo "line number $i"; done >>/tmp/logfile
- check the type of the field "receive_time", e.g. with the curl command shown above; it is "string", as expected
- stop elasticsearch and delete all data (e.g. remove the container and create a new one; see the docker commands after this list)
- create log entries again and check the type of "receive_time" once more: now the field is of type "date"
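For the "delete all data" step, recycling the container is enough; a sketch, assuming the container was started with --name es and the stock image:

docker rm -f es
docker run -d --name es -p 9200:9200 elasticsearch:1.5.1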
If you perform the same steps with logstash 1.4.2, this never happens, maybe because "receive_time" always ends up as a string in that version.
Obviously I don't delete all my elasticsearch data daily; in my staging environment this happens whenever a new index gets created. Sometimes the field comes out as "string" and sometimes as "date". In the latter case I get errors in elasticsearch like the following (note that the rendered timestamp uses a space instead of the "T" separator that the default dateOptionalTime format expects, so once the field has been mapped as "date", every such document is rejected):
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [receive_time]
Caused by: org.elasticsearch.index.mapper.MapperParsingException: failed to parse date field [2016-02-03 12:40:32 +0100], tried both date format [dateOptionalTime], and timestamp number with locale []
Caused by: java.lang.IllegalArgumentException: Invalid format: "2016-02-03 12:40:32 +0100" is malformed at " 12:40:32 +0100"