The translog has become corrupted. Why does this happen? Can disabling the system file cache fix the problem completely?
"details": "failed shard on node [aTbLQpDwTL204ASeorrvsA]: shard failure, reason [failed to recover from translog], failure EngineException[failed to recover from translog]; nested: EOFException[read past EOF. pos  length:  end: ]; ",
The problem occurs almost every time the system is powered off while data is being inserted into Elasticsearch.
But after I disable the system file cache, the problem seems to disappear. Does that really work? Can anyone explain the underlying principle?
The usual explanation is that your storage is not working correctly and is acknowledging writes before they have completed. This is a trick that lower-grade storage sometimes uses to improve its performance numbers at the expense of your data.
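To illustrate the point above: an application only gets a durability guarantee when it calls fsync and the device honestly persists the flushed data. Elasticsearch fsyncs the translog before acknowledging writes, so if data still vanishes on power loss, the device is acknowledging flushes it has not completed. A minimal sketch of that write-then-fsync pattern (file name and record contents are illustrative, not Elasticsearch's actual format):

```python
import os
import tempfile

# Sketch of the durability pattern the translog relies on:
# data is only safe after fsync() returns AND the device honors the flush.
path = os.path.join(tempfile.mkdtemp(), "translog.log")
with open(path, "wb") as f:
    f.write(b"operation-1\n")   # data may still sit in a user-space buffer
    f.flush()                   # push it into the kernel page cache
    os.fsync(f.fileno())        # ask the kernel to push it to stable storage

# If the disk's volatile write cache "lies" about the flush, a power loss
# at this point can still leave the file truncated, which is what the
# "read past EOF" error during translog recovery indicates.
with open(path, "rb") as f:
    print(f.read().decode(), end="")
```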
What do you mean "close system file cache"?
Yes, I "close the system file cache" by disabling the drive's write cache with the Linux command "hdparm -W 0".
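For reference, a sketch of the hdparm invocations involved (these need root, and /dev/sda is an example device path, not one taken from this thread):

```shell
# Query the drive's current write-cache setting
hdparm -W /dev/sda

# Disable the volatile write cache so acknowledged writes survive power loss
hdparm -W 0 /dev/sda

# Re-enable the write cache (the default on most drives)
hdparm -W 1 /dev/sda
```

Note that this setting may not persist across reboots, so it is typically applied from a boot-time script or udev rule.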
Do you mean it's a problem with the storage, or is something wrong in how the translog is written? Disabling the system file cache does seem to solve the problem.
What type of storage are you using? Is it some kind of network attached filesystem?
What parameter do you mean?
What type of hardware is the cluster deployed on?
We have tried both HDD and SSD. Same problem.
This disables the write cache and does indeed indicate that your disk is lying to Elasticsearch and acknowledging writes before they have completed.
Does that mean hdparm can solve the problem? Is there anything else that can cause a corrupted translog?
And where can I find the code path that writes the translog?
No, this does not indicate a problem in Elasticsearch or elsewhere. It indicates that your disks have a volatile write cache that loses data on a power loss.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.