Index unusually big, with merging still running on 2 shards


One index in particular, for 1-1-2017, grew to 4x the normal size for that index.
Out of its 5 shards, 2 still show current merges > 0 as I am writing this on 2nd Jan.

merges: {
  current: 2,
  current_docs: 6566907,
  current_size_in_bytes: 943816800,
  ...

This has made queries on that index unresponsive. The load average on all 4 Elasticsearch instances in the cluster has also shot up and has stayed high since 5:30 am on 1-1-2017. The Elasticsearch Java process is using 100% CPU on all boxes.

The translog for this index is also growing. As we understand it, it shouldn't be, since a new index for 2nd Jan has already been created.

Should we flush the translog? Should we stop merging?
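For context, a manual flush (which commits in-memory operations and allows committed translog entries to be cleared) can be issued through the flush API. A minimal sketch, assuming the index is named `logs-2017-01-01` and the cluster listens on localhost:9200 (both are assumptions, substitute your own); whether it is safe or useful to do this while merges are running is exactly what we are asking:

```shell
# Hypothetical index name; replace with the actual 1st Jan index.
# Forces a flush even if one is not strictly needed, and waits if
# another flush is already in progress on any shard.
curl -s -XPOST 'localhost:9200/logs-2017-01-01/_flush?force=true&wait_if_ongoing=true&pretty'
```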

Any direction would be appreciated.

This turned out to be caused by the leap second issue.
Duplicate records were being added, each around 4.5k times.
Once the agent sending the data was stopped, the merging on the 1st Jan index stopped as well.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.