How do I re-index after changing the number of shards?


I am a newbie to ELK. I have set up the ELK stack on two Windows boxes; one runs ES and Kibana, the other Logstash. I configured ES with the default config, so 5 shards were created even though I am using only one node. Now, if I change the shard count to 1, I will need to re-index the data, right? If so, how can I do that?

If you want the change to apply to existing indices, yes.

Check out
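One common way to re-index with Logstash itself is a small pipeline that reads every document from an old index and writes it into a new one. The following is only a sketch; the host, the index names, and the exact plugin option names (`host` vs `hosts` differs between Logstash versions) are assumptions you will need to adapt to your setup:

```
# Hypothetical re-index pipeline: pull all documents from an existing
# 5-shard index and write them to a new target index, which will pick
# up its shard count from whatever template matches it at creation time.
input {
  elasticsearch {
    host => "localhost"              # assumed: your ES host
    index => "logstash-2015.06.01"   # placeholder: an existing old index
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    index => "logstash-2015.06.01-v2" # placeholder: the new target index
  }
}
```

Once the new index is verified, the old one can be deleted and the new one aliased or renamed as needed.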

The following is our logstash-indexer.cfg. How can I edit it for re-indexing? As I mentioned, I know very little about ELK, so please excuse my basic questions.

input {

  tcp {
    codec => "json"
    port => 5544
    tags => ["windows","nxlog"]
    type => "nxlog-json"
  }

  # UDP Exchange syslog stream via 5544
  udp {
    type => "Exchange"
    port => 5544
  }
} # end input

I have filters for nxlog-json and Microsoft Exchange. I am not including them in full here, only the relevant parts.

filter {

  if [type] == "nxlog-json" {
    date {
      match => ["[EventTime]", "YYYY-MM-dd HH:mm:ss"]
      # timezone => "Europe/London"
    }
  }

  if [type] == "Exchange" {
    csv {
      add_tag => [ 'exh_msg_trk' ]
      columns => ['logdate', 'client_ip', 'client_hostname', 'server_ip', 'server_hostname', 'source_context', 'connector_id', 'source', 'event_id', 'internal_message_id', 'message_id', 'network_message_id', 'recipient_address', 'recipient_status', 'total_bytes', 'recipient_count', 'related_recipient_address', 'reference', 'message_subject', 'sender_address', 'return_path', 'message_info', 'directionality', 'tenant_id', 'original_client_ip', 'original_server_ip', 'custom_data']
      remove_field => [ "logdate" ]
    }
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}" ]
    }
  }
} # end filter
output {
  elasticsearch {
    host => ""
    protocol => "http"
  }
  # stdout { codec => rubydebug }
} # end output

As soon as I changed the shards, Kibana started asking for a new index pattern.

Hi @warkolm, I edited elasticsearch-template.json to set "number_of_shards": 1 and restarted ES. I assume that only newly created daily indices will have 1 shard? How can I now use your Logstash script to re-index into the existing daily indices? Or is this not possible, since the existing indices were already created with the default 5 shards each?
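For reference, the shard setting in elasticsearch-template.json would look roughly like this (a sketch only; the "template" pattern and the other keys in your file may differ):

```
{
  "template": "logstash-*",
  "settings": {
    "index.number_of_shards": 1
  }
}
```

Note that a template is applied only at index creation time, so the existing daily indices keep their 5 shards; only indices created after the change will get 1 shard, and the old ones have to be re-indexed into new indices if you want them on 1 shard as well.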

Many thanks.

Please start a new thread for your question :slight_smile:

@warkolm Changing shard number per index due to EsRejectedExecutionException :slight_smile: