Best Translog configuration for ES 2.2.0

As described in the documentation, in version 2.2.0 the translog is fsynced and committed after every request by default.
Is it better to set index.translog.durability to async with a sync_interval of 5s? Or maybe 10s or 30s?
That way the buffered index.translog.fs.type would be used.
Why do the default settings seem to be less efficient?

In my cluster a new index is created every day with approximately 20M documents. I want to understand the trade-offs of the translog settings and make the right choice for my case.
In stress tests I can see that CPU usage and IOPS are high...
So what would be the best configuration for it?
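
For concreteness, this is roughly the change I have in mind (a sketch only; the index name is a placeholder and 5s is just one candidate value):

# Sketch: create the daily index with async translog durability.
# "logs-2016.03.01" is a placeholder name, not my real index.
curl -XPUT 'localhost:9200/logs-2016.03.01' -d '{
  "settings": {
    "index.translog.durability": "async",
    "index.translog.sync_interval": "5s"
  }
}'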

The default setting was changed in order to improve resiliency and data durability. If you are indexing individual documents you will notice the overhead, but the per-event overhead drops when using bulk inserts, which is common in most scenarios with very high indexing rates.
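
For example, a single bulk request like the sketch below is synced once per shard-level bulk rather than once per document (index, type, and field names here are placeholders):

# Sketch: two documents in one bulk request; "logs" and "event" are placeholder names.
curl -XPOST 'localhost:9200/logs/event/_bulk' -d '
{ "index": {} }
{ "message": "event 1" }
{ "index": {} }
{ "message": "event 2" }
'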

Indexing tends to be CPU and IO intensive as there generally is a lot of merging involved. You can reduce the amount of merging by changing the refresh interval, although this will delay your data being searchable.
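
For example (a sketch; 30s is only an illustrative value), the refresh interval can be changed dynamically on an existing index:

# Sketch: raise the refresh interval; "logs" is a placeholder index name.
curl -XPUT 'localhost:9200/logs/_settings' -d '{
  "index": {
    "refresh_interval": "30s"
  }
}'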

I do use bulk insert requests.
Still, why not use the sync_interval?

You can change the durability mode and sync_interval, but you will lose durability. I would recommend going through the link I provided first and optimising merging and other parameters before addressing the transaction log. The amount of merging that needs to be done, which causes a lot of IO activity, is not affected by the translog settings.
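
As one example of a merge-related parameter (a sketch, not a recommendation for your specific setup): on spinning disks the merge scheduler thread count is often lowered to 1. Verify against the docs for your version whether this can be applied dynamically or needs to be set at index creation:

# Sketch: limit concurrent merges to a single thread, which often helps on spinning disks.
# "logs" is a placeholder index name.
curl -XPUT 'localhost:9200/logs/_settings' -d '{
  "index": {
    "merge": {
      "scheduler": {
        "max_thread_count": 1
      }
    }
  }
}'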

Hi, Christian! If we use the bulk API with a bulk size of (for example) 20,000 documents and an index with 20 shards, the initial bulk will be split into 20 shard-level bulk requests of ~1,000 documents each. So each shard receives 1,000 documents and writes them to its translog, and in that case the fsync will hit the IOPS anyway. What is the recommended way to avoid this situation: revert back to async behaviour like in ES 1.x, or increase the bulk size, say by 10x?

It syncs the translog per bulk operation, so once for ~1000 records in your example. Is this causing a problem? Do you see a marked difference in IOPS when changing the translog behaviour? Have you tuned all other parameters related to indexing in order to optimise merging?

We have a very index-intensive workload (non-stop bulk indexing into a daily index: 20-40 threads, bulk size 20k, 48 shards with a replica for durability), so disk utilization spikes quite often. We are using spinning disks (right now an SSD hot/warm setup is not an option, sadly). In 1.7.x the situation was better, so I'm looking for a reasonable way to reduce disk utilization.

Why so many shards? What is the size of your cluster? Have you followed the guidelines described here? What is your refresh interval set to?

20K bulk size also seems quite large - have you experimented with different bulk sizes to find the optimum size?

We have to index and search ~15-17B small documents a day. The cluster consists of 24 data nodes (12 servers with 2 ES instances each). With 48 shards each shard is around 30GB on disk; with 24 shards it is around 55GB. A smaller shard count is worse in terms of rebalance time and awful for recovery time.

As we have a problem with the bulk queue (link), one assumption was that the index writer is single-threaded per shard, so the shard count should be closer to the number of concurrent bulk indexing threads. Another assumption was that we hit some Lucene limit on document count per shard, so indexing latency increases.

As for refresh interval and other index settings:

"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "30s",
"bloom": { "load": "false" },
"number_of_shards": "48",
"number_of_replicas": "1",
"store": {
"throttle": { "type": "none" }
},
"merge": {
"policy": {
"reclaim_deletes_weight": "0.0"
}
},
"mapper": {
"dynamic": "false"
},
"ttl": {
"disable_purge": "true"
}
}
}

We use 2.1.2 (this guide is a little outdated) and try different approaches and settings, but we are always open to advice.

As for the 20K bulk size, we are still trying to figure out the best setting; it was set on 1.7.x and worked well. Now we will try changing it and see if there is any difference.
