Unable to reindex data

Hello Loggers,

I am unable to reindex in my Elasticsearch setup. I have two indices, and I want to reindex from one to the other.

Index 1:
Field: "timestamp"
Type: "string"

"timestamp": {
"type": "string"
}

Index 2:
Field: "timestamp"
Type: "date"
Format: "yyyy-MM-dd HH:mm:ss,SSSZ"

"timestamp": {
"format": "yyyy-MM-dd HH:mm:ss,SSSZ",
"type": "date"
}

Timestamp sample: "2016-09-01 02:45:22,533-0700"

I have tried reindexing my data, but my timestamp field still gives a "Mapping conflict!" issue. Any ideas on how I can solve this? Both of my indices are under one alias.
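
For reference, a rough sketch of that flow with the official Python client; the index names index-1 and index-2, the doc type logs, and the localhost address are all assumptions, not taken from the thread:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Create the destination index up front with the desired date mapping,
# instead of letting dynamic mapping guess "string" again.
es.indices.create(index="index-2", body={
    "mappings": {
        "logs": {  # hypothetical doc type
            "properties": {
                "timestamp": {
                    "type": "date",
                    "format": "yyyy-MM-dd HH:mm:ss,SSSZ"
                }
            }
        }
    }
})

# Copy the documents across with the _reindex API.
es.reindex(body={
    "source": {"index": "index-1"},
    "dest": {"index": "index-2"}
})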

Probably some malformed string in the first index's timestamp field.

Why don't you create the mapping first, then read records from index 1 and write them to the new index? Catch any exceptions to see what the actual value is that's causing the issue.
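
Something along those lines, staying with the Python client, might look like the sketch below (same assumed names as above); it copies documents one at a time so that a failure points at the exact record:

from elasticsearch import Elasticsearch, TransportError, helpers

es = Elasticsearch(["http://localhost:9200"])

# Walk every document in the source index and index it individually;
# a bulk call would only report an opaque failure.
for hit in helpers.scan(es, index="index-1"):
    try:
        es.index(index="index-2",
                 doc_type=hit["_type"],  # pre-6.x clients still take doc_type
                 id=hit["_id"],
                 body=hit["_source"])
    except TransportError as err:
        print("Failed on timestamp %r: %s" % (hit["_source"].get("timestamp"), err))

Document-by-document indexing is slow, but for hunting down a single malformed value it is far easier to debug than a bulk request.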

That's a good idea! I ended up solving my issue by reindexing using the force option, then deleting my old index. I believe reindexing only copies the records from one index to another rather than migrating them, correct?

Right!
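
Since reindex only copies, the source index has to be removed explicitly once you trust the copy. A minimal sketch, with the same assumed names, that sanity-checks the document counts before dropping the old index:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# The source index is left untouched by _reindex;
# compare counts before deleting it.
if es.count(index="index-1")["count"] == es.count(index="index-2")["count"]:
    es.indices.delete(index="index-1")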

We've kicked around the idea of not starting a reindex if the destination doesn't exist, but it is way more complicated than it ought to be to implement, so I'm kind of against doing it just for that reason. But, yeah, best practice is to create the new index first, mapping and all, before doing the reindex.

Thanks Nik!

@nik9000 So must we also delete the mappings of the previous index?

If you are done with the old index then you should remove it entirely.

@nik9000 Ok, thanks Nik. So I have deleted the old index completely after reindexing my data to a new index. The only changes made to my new index are field type changes from "long" to "integer". A new issue I am now facing is that I am unable to send new data to the new index. I can see that the data has reindexed successfully, but when I send new data, Elasticsearch does not seem to pick it up.

I have checked the logs of the Logstash "indexer" that I use to index data from Redis to ES, and there are no mapper exceptions or anything of that nature. The new index and the old one contain(ed) the same alias, so the new data should appear under the alias, but it does not. I have also tried creating the index pattern in Kibana (even though the alias should contain the index), but as expected, still no data. Any ideas? TIA
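
One quick diagnostic, sketched with the same assumed names plus a hypothetical alias called logs-alias: list which indices the alias resolves to, and compare a count through the alias with a direct count, since a write through an alias that points at more than one index is rejected:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Every index the alias currently points at.
print(es.indices.get_alias(name="logs-alias"))

# Direct count vs. count through the alias; a mismatch means the alias
# is not resolving to the index you expect.
print(es.count(index="index-2")["count"])
print(es.count(index="logs-alias")["count"])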

UPDATE: Even weirder, I see that the index I deleted reappears after deletion. I have deleted it about 3 times already; it seems to stay "deleted" for about 20-30 minutes, then reappears in Elasticsearch. I should also mention that I am deleting the index from my master node. Any ideas? TIA

ANOTHER UPDATE: Furthermore, it seems that the old index is recreated when I trigger the logs that are sent to that particular index. However, in the template for my new index, I have obviously set the new index name as the value of the "template" field.
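
A way to check whether an incoming write is auto-creating the old name, again as a hedged sketch: dump the installed templates to see which patterns would match the old index, and optionally turn off automatic index creation so the stray writer fails loudly instead of recreating the index.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# List installed templates and the index pattern each one matches
# ("template" on 2.x/5.x; newer versions use "index_patterns").
for name, tmpl in es.indices.get_template().items():
    print(name, tmpl.get("template"))

# Optionally make writes to nonexistent indices fail instead of
# auto-creating them. Depending on the version this is a dynamic
# cluster setting or must go in elasticsearch.yml.
es.cluster.put_settings(body={
    "persistent": {"action.auto_create_index": "false"}
})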

I think two things are happening:

  1. Some process is adding data to the old index name directly, recreating it.
  2. The old index is getting the alias that you are trying to use to write to the new index. Elasticsearch will reject index operations to an alias that belongs to more than one index.

It is kind of hard to tell from here, but that is the direction I'd investigate first.
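
If the second point is the cause, one fix, sketched with the same assumed names, is to move the alias in a single atomic call so it only ever resolves to one index:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Remove the alias from the recreated old index and keep it on the new
# one in one atomic operation.
es.indices.update_aliases(body={
    "actions": [
        {"remove": {"index": "index-1", "alias": "logs-alias"}},
        {"add":    {"index": "index-2", "alias": "logs-alias"}}
    ]
})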

Alright, thanks Nik, appreciate it!