So in short: the data is written into memory and structured as segments, then flushed to disk. Once that happens, the translog, which is also persisted on disk, can be cleared, and the process starts over again.
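If you want to watch that cycle happen, here's a minimal sketch using the REST API, assuming a local cluster on `localhost:9200` and an index named `my-index` (both placeholders for your own setup). It reads the translog stats, forces a flush, and reads them again; the translog operation count should drop back toward zero once the flush has persisted the segments.

```python
# Rough sketch: observe the translog being cleared by a flush.
# Assumes Elasticsearch at http://localhost:9200 and an existing
# index called "my-index" -- adjust both for your environment.
import requests

BASE = "http://localhost:9200"
INDEX = "my-index"

def translog_stats():
    # GET <index>/_stats/translog returns per-index translog counters.
    r = requests.get(f"{BASE}/{INDEX}/_stats/translog")
    r.raise_for_status()
    translog = r.json()["indices"][INDEX]["primaries"]["translog"]
    return translog["operations"], translog["size_in_bytes"]

ops, size = translog_stats()
print(f"before flush: {ops} ops, {size} bytes in the translog")

# POST <index>/_flush persists the in-memory segments to disk,
# after which the translog entries they covered can be cleared.
requests.post(f"{BASE}/{INDEX}/_flush").raise_for_status()

ops, size = translog_stats()
print(f"after flush:  {ops} ops, {size} bytes in the translog")
```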
Is there something in particular that's concerning you?