How to "reload" mappings in Elasticsearch 1.7.4

I have a bunch of old indices which I'm trying to migrate to 2.x. The migration tool complained about dots in the middle of field names. I fixed my Logstash filters and have already reindexed the documents, so they no longer have dots in their field names.

The problem now is that the Elasticsearch index still has the old mapping, although no document contains these problematic fields anymore.

How do I make Elasticsearch "reload" the mappings from the indexed documents?

Hi Igor,

I am not sure whether I understand your scenario correctly. You can't update mappings without reindexing.

The process is basically as follows. Suppose you have an index called logging.

  1. Create a new index called logging_v2
  2. Put the new mapping there
  3. Reindex all data from logging into logging_v2
  4. Point all your clients to logging_v2
  5. After you've verified everything is fine, you can remove or backup the old index

When step 3 is done, you should not need to touch the mapping because you've already created it before adding any data to it.
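For example, steps 1 and 2 might look like this in 1.x (the type name logs and the message field are just placeholders; use your actual mapping):

PUT /logging_v2
{
   "mappings": {
      "logs": {
         "properties": {
            "message": { "type": "string" }
         }
      }
   }
}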

You can find detailed instructions in the definitive guide on how to migrate the mappings without downtime using aliases.


Hi Daniel,

I thought that was the case for 2.x, not for 1.7.x.

The step "3. Reindex all data from logging into logging_v2" is the one that puzzles me. Does ES provide a command for that? Or do I need to write a program to read data from ES and push it back into the new index?

PS: this seems like such a common scenario. Why does ES not provide a reindex command?

Hi Igor,

I know that reindexing can be a painful process and there is no native support in Elasticsearch yet. You can follow the guidelines in the definitive guide but it boils down to querying the data from the old index and bulk indexing it into the new one.
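As a rough sketch of that loop in 1.x (batch size, type and IDs below are placeholders): start a scan/scroll search on the old index, then bulk index each returned batch into the new one:

POST /logging/_search?search_type=scan&scroll=1m
{
   "size": 100
}

POST /_search/scroll?scroll=1m
<scroll_id from the previous response>

POST /logging_v2/_bulk
{ "index": { "_type": "logs", "_id": "1" } }
{ "message": "a document from the old index" }

You repeat the scroll and bulk requests until the scroll returns no more hits. Some client libraries (for example the helpers in elasticsearch-py) wrap this scan-and-bulk loop for you, so you don't have to write it by hand.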

Fortunately, a basic reindex API will probably be included in the next major version (see the related GitHub tickets for the basic support and the reindexing API).


Got it. One last quick question: can I rename indexes? I want to rename something like logstash-2015-11-01 to logstash-2015-11-01.old and create the new one named logstash-2015-11-01.

Hi Igor,

Yes and no. You can use aliases for that. I'll walk you through a small example:

Create a new index (we don't add any mapping or the like here, to avoid distracting from the example):

PUT /logstash_sample_idx_v1

Add an alias:

PUT /logstash_sample_idx_v1/_alias/logstash_sample

The key idea is that clients (applications) always use the name logstash_sample but never the name logstash_sample_idx_v1, i.e. they would refer to the alias but not to the index directly. This allows you to change the underlying index without changing the clients.
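For example, a search request from a client would always go through the alias (the match_all query here is just for illustration):

GET /logstash_sample/_search
{
   "query": {
      "match_all": {}
   }
}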

You can see which aliases are defined on an index:

GET /logstash_sample_idx_v1/_alias/*

Next we create a new index:

PUT /logstash_sample_idx_v2

and change the alias from v1 to v2:

POST /_aliases
{
   "actions": [
      {
         "remove": {
            "index": "logstash_sample_idx_v1",
            "alias": "logstash_sample"
         }
      },
      {
         "add": {
            "index": "logstash_sample_idx_v2",
            "alias": "logstash_sample"
         }
      }
   ]
}

You can see here that the alias refers now to the v2 index:

GET /logstash_sample_idx_v2/_alias/*

I hope this gets you started. You can find more examples in the reference documentation on aliases.