Index Migration Strategy

Hi, I'm wondering if there's a way to migrate to a new ES index without shutting down logstash on our web servers.

I'm using Logstash (ES-to-ES configuration) to migrate data to a new index. I normally use the reindex API, but this time I needed the useragent filter. Anyway, in my experience the only way to migrate to a new index without losing any docs is to shut down Logstash on each of our nginx servers, so the source/old index isn't in a state of flux during the migration.
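For reference, the ES-to-ES pipeline looks roughly like this (index names, hosts, and the user-agent field name are placeholders; the exact `@metadata` fields depend on the plugin version):

```
input {
  elasticsearch {
    hosts   => ["localhost:9200"]
    index   => "weblogs-old"        # source index (placeholder name)
    docinfo => true                 # keep the original _id so reruns don't duplicate docs
  }
}

filter {
  useragent {
    source => "agent"               # raw user-agent string field (name may differ)
    target => "ua"
  }
}

output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "weblogs-new"    # destination index (placeholder name)
    document_id => "%{[@metadata][_id]}"
  }
}
```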

Here's what I do:
a) shut down logstash (logfile to ES config) on all our web servers
b) start logstash (ES to ES config) on my ES cluster which migrates to the new index
c) when migration is complete, shut down logstash (ES to ES config)
d) point my alias to the new index
e) start logstash (points to the alias) on all our web servers
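For step (d), the alias can be moved in a single atomic `_aliases` call, so there is never a moment when it points at neither index (index and alias names are placeholders):

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "weblogs-old", "alias": "weblogs" } },
    { "add":    { "index": "weblogs-new", "alias": "weblogs" } }
  ]
}
```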

This works well, but the problem with this strategy is that Logstash is down for a few hours. Is there any way around this?

If you were using a broker (Kafka, Redis, etc.) you could leverage that, but in this case you can't work around the need to shut things down :frowning:
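With a broker, the web servers would write events to a Kafka topic instead of directly to ES, and a separate indexer pipeline would consume from that topic; during a migration you stop only the indexer while events queue up safely in Kafka. A rough sketch (broker address and topic name are placeholders):

```
# On each web server: ship parsed log events to Kafka instead of ES
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id          => "weblogs"
    codec             => json
  }
}

# On the indexer host: consume from Kafka and write to the alias
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics            => ["weblogs"]
    codec             => json
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "weblogs"              # the alias
  }
}
```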

Can you explain a little more about the index structure you are using, specifically how the alias is set up?

Thanks Mark!

It's simple: one alias points to one index, and Logstash writes to the alias, so I never need to update the Logstash config itself.

In the short term I've been using Ansible to shut down/startup logstash on all nginx servers before/after a migration.

I'll look into using a broker as you mentioned, that sounds like a good long term plan.

Are you using time based indices under the hood?

Nope, no time based indices.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.