Incrementing the counter requires reindexing the whole document. In a
system like PostgreSQL that just involves copying the row, probably to the
same block. Usually you don't even need to touch the indexes. Elasticsearch
doesn't work like that. It has to reanalyze all the fields and eventually
build a new segment, inverted index, doc values, and all.
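To make that concrete, here's a toy model of the difference, not real Elasticsearch or Lucene internals, just a sketch: a row store can bump one field in place, while an inverted index built over analyzed fields has to be rebuilt for the whole document when any field changes.

```python
def build_inverted_index(docs):
    """Analyze every field of every doc into a (field, term) -> doc ids map.

    Stands in for the work Elasticsearch does when it builds a segment:
    tokenize and index every field, not just the one that changed.
    """
    index = {}
    for doc_id, doc in docs.items():
        for field, value in doc.items():
            for term in str(value).lower().split():
                index.setdefault((field, term), set()).add(doc_id)
    return index

docs = {
    1: {"title": "hello world", "views": 10},
    2: {"title": "goodbye world", "views": 3},
}
index = build_inverted_index(docs)

# Row-store style: bump the counter in place. No index over "title"
# needs to change, because "title" didn't change.
docs[1]["views"] += 1

# Segment style: the old doc is effectively deleted and the whole
# document is reanalyzed, so entries for *every* field of doc 1 are
# rebuilt even though only "views" moved.
index = build_inverted_index(docs)
```

The second `build_inverted_index` call is the cost the original post is talking about: it scales with the size of the whole document, not the size of the change.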
Relational databases make tradeoffs that favor fast updates. Elasticsearch
makes tradeoffs that favor fast aggregations and full-text search. Those
tradeoffs are baked in at a fairly deep level.
That said, 160ms is quite a bit. Do you have lots of fields in that
document? Is the network between the shards slowish? I dunno, hard to say.