Logstash 2.1.0 stopped creating raw fields

Hello,

I've upgraded all my client servers to Logstash 2.1.0, and then I wiped out all the data in /var/lib/elasticsearch/ on the Elasticsearch server in order to start from scratch with the indices. After that I can't see the raw fields anymore in Kibana.
When I run curl -XGET 'http://esserver:port/logstash-myindex-2015.12.08/_mapping?pretty=true' the raw fields don't show up.

How can I get back the raw fields?

Thanks,
Danilo

Don't do that; you should always use the APIs to delete indices.
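For example, something along these lines (host, port, and the index name from your post are placeholders):

curl -XDELETE 'http://esserver:port/logstash-myindex-2015.12.08'

That removes the index cleanly without touching anything on disk behind Elasticsearch's back.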

Did you change the mapping to include the .raw fields? If not, then you need to!
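You can see what your cluster currently has with (host and port are placeholders):

curl -XGET 'http://esserver:port/_template/logstash?pretty'

If the string_fields dynamic template in there doesn't define a "raw" sub-field, new indices won't get the .raw fields.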

Hello Warkolm, how can I change the mapping to include the raw fields?

I have this "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/elasticsearch-template.json" on the server with Logstash installed, but I am not using the elasticsearch java output plugin, only the embedded elasticsearch plugin.

Thanks

Running curl -XGET elasticsearch-server:port/_template?pretty=true
I got these results:

{
  "logstash" : {
    "order" : 0,
    "template" : "logstash-*",
    "settings" : {
      "index" : {
        "refresh_interval" : "5s"
      }
    },
    "mappings" : {
      "_default_" : {
        "dynamic_templates" : [ {
          "message_field" : {
            "mapping" : {
              "fielddata" : {
                "format" : "disabled"
              },
              "index" : "analyzed",
              "omit_norms" : true,
              "type" : "string"
            },
            "match_mapping_type" : "string",
            "match" : "message"
          }
        }, {
          "string_fields" : {
            "mapping" : {
              "fielddata" : {
                "format" : "disabled"
              },
              "index" : "analyzed",
              "omit_norms" : true,
              "type" : "string",
              "fields" : {
                "raw" : {
                  "ignore_above" : 256,
                  "index" : "not_analyzed",
                  "type" : "string",
                  "doc_values" : true
                }
              }
            },
            "match_mapping_type" : "string",
            "match" : "*"
          }
        }, {
          "float_fields" : {
            "mapping" : {
              "type" : "float",
              "doc_values" : true
            },
            "match_mapping_type" : "float",
            "match" : "*"
          }
        }, {
          "double_fields" : {
            "mapping" : {
              "type" : "double",
              "doc_values" : true
            },
            "match_mapping_type" : "double",
            "match" : "*"
          }
        }, {
          "byte_fields" : {
            "mapping" : {
              "type" : "byte",
              "doc_values" : true
            },
            "match_mapping_type" : "byte",
            "match" : "*"
          }
        }, {
          "short_fields" : {
            "mapping" : {
              "type" : "short",
              "doc_values" : true
            },
            "match_mapping_type" : "short",
            "match" : "*"
          }
        }, {
          "integer_fields" : {
            "mapping" : {
              "type" : "integer",
              "doc_values" : true
            },
            "match_mapping_type" : "integer",
            "match" : "*"
          }
        }, {
          "long_fields" : {
            "mapping" : {
              "type" : "long",
              "doc_values" : true
            },
            "match_mapping_type" : "long",
            "match" : "*"
          }
        }, {
          "date_fields" : {
            "mapping" : {
              "type" : "date",
              "doc_values" : true
            },
            "match_mapping_type" : "date",
            "match" : "*"
          }
        }, {
          "geo_point_fields" : {
            "mapping" : {
              "type" : "geo_point",
              "doc_values" : true
            },
            "match_mapping_type" : "geo_point",
            "match" : "*"
          }
        } ],
        "_all" : {
          "omit_norms" : true,
          "enabled" : true
        },
        "properties" : {
          "@timestamp" : {
            "type" : "date",
            "doc_values" : true
          },
          "geoip" : {
            "dynamic" : true,
            "type" : "object",
            "properties" : {
              "ip" : {
                "type" : "ip",
                "doc_values" : true
              },
              "latitude" : {
                "type" : "float",
                "doc_values" : true
              },
              "location" : {
                "type" : "geo_point",
                "doc_values" : true
              },
              "longitude" : {
                "type" : "float",
                "doc_values" : true
              }
            }
          },
          "@version" : {
            "index" : "not_analyzed",
            "type" : "string",
            "doc_values" : true
          }
        }
      }
    },
    "aliases" : { }
  }
}

In my template the raw fields should, theoretically, be created, right?

I am a complete noob.

In my Logstash conf I have some servers with:
output {
  elasticsearch {
    hosts => "es-server:port"
    index => "logstash-pool-ldapspo-cons-%{+YYYY.MM.dd}"
  }
}

and others with:
output {
  if "127.0.0.1" not in [message] {
    elasticsearch {
      hosts => "es-server:port"
      index => "logstash-haproxy-spo-prov-%{+YYYY.MM.dd}"
    }
  }
}
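From the plugin docs it looks like the output can also be pointed at that template file explicitly; a sketch I haven't actually tried (the path is the one from my earlier post, and manage_template/template_overwrite are plugin options as I understand them):

output {
  elasticsearch {
    hosts => "es-server:port"
    index => "logstash-pool-ldapspo-cons-%{+YYYY.MM.dd}"
    manage_template => true
    template => "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.1.4-java/lib/logstash/outputs/elasticsearch/elasticsearch-template.json"
    template_overwrite => true
  }
}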

OK, so with these configurations I don't get the raw fields in Kibana, but if I do this:

curl -XPUT http://es-server:port/_template/logstash -d '
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "5s"
  },
  "mappings" : {
    "_default_" : {
      "_all" : {"enabled" : true, "omit_norms" : true},
      "dynamic_templates" : [ {
        "message_field" : {
          "match" : "message",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256}
            }
          }
        }
      }, {
        "string_fields" : {
          "match" : "*",
          "match_mapping_type" : "string",
          "mapping" : {
            "type" : "string", "index" : "analyzed", "omit_norms" : true,
            "fields" : {
              "raw" : {"type": "string", "index" : "not_analyzed", "ignore_above" : 256}
            }
          }
        }
      } ],
      "properties" : {
        "@version": { "type": "string", "index": "not_analyzed" },
        "geoip" : {
          "type" : "object",
          "dynamic": true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  }
}
'
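Since a template only applies to indices created after it is installed, I also verified that the template was actually stored (same host and port placeholders):

curl -XGET 'http://es-server:port/_template/logstash?pretty'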

And if I go to Kibana and create a new index pattern called "logstash-*", then I have the raw fields. But if I create an index pattern called "logstash-haproxy-spo-prov-*", I don't get them.

What am I missing here?

I don't know what happened in this environment that stopped the raw fields from being created, so I did a fresh install on all the Logstash servers and the Elasticsearch server, and now the raw fields are being created normally.

Danilo

That's probably because the index template that ships by default with Logstash (matching the index pattern "logstash-*") was being overruled by another template with a higher order whose pattern matched your "logstash-haproxy-spo-prov-*" indices. In your old environment, did you create any new template?
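If it happens again, a quick way to check whether a given index actually picked up the .raw sub-fields is the field mapping API (host, port, and the dated index name are placeholders):

curl -XGET 'http://es-server:port/logstash-haproxy-spo-prov-2015.12.08/_mapping/field/*.raw?pretty'

An empty result means the string_fields dynamic template never applied to that index.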

No, I didn't. I just reinstalled everything.