Welcome to the community @Ibicf and thank you for posting your query.
As I understand it, you are pushing your application logs directly to Elasticsearch, which is running in a separate container. That is perfectly fine.
In a real-world scenario, you would typically write the logs to a log file from your application (also running as a container) and configure a log shipper like Filebeat to read all the log files from that server (or node, in Elastic terms). Filebeat essentially reads the log files of all the applications and pushes them to Elasticsearch.
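As an illustration, here is a minimal filebeat.yml sketch; the log path, the input id, and the host are assumptions, so adjust them to your setup:

```yaml
# filebeat.yml - minimal sketch: tail the application's log files
filebeat.inputs:
  - type: filestream            # recommended input type in recent Filebeat versions
    id: app-logs                # hypothetical id; any unique string works
    paths:
      - /var/log/myapp/*.log    # hypothetical path to your application's log files

# Ship events straight to Elasticsearch
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # assumes the Elasticsearch container is reachable under this name
```

If both containers run on the same Docker network, the service name of the Elasticsearch container usually works as the hostname.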
About your queries:
- Logs stored in Elasticsearch are saved into multiple Lucene segment files per shard; the number of segments varies as documents are indexed and segments get merged in the background. You cannot read or write those files directly unless you work with the native Apache Lucene library, which Elasticsearch uses under the hood. The path to the data directory (where the segment files are stored) is set via path.data in your elasticsearch.yml configuration file (see the first sketch after this list).
- For shipping the logs to Elasticsearch, you have a few options, depending on whether your logs are time-series data, i.e. still being written to the log files continuously: for that case, use Filebeat. If you just want to ingest a static log dump that is no longer being written to, you can use either Filebeat or Logstash. In both cases you first need to untar/unzip your log files into a directory and configure Filebeat/Logstash to read all the log files from that directory (see the second sketch after this list).
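Regarding the data directory, a minimal elasticsearch.yml sketch; the path shown is an assumption, as the default differs per install method:

```yaml
# elasticsearch.yml - location of the on-disk index data (Lucene segment files)
path.data: /var/lib/elasticsearch   # assumed path; check your own installation
```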
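For the static-dump case, a minimal sketch using Filebeat (Logstash would work similarly with its file input); the directory is a hypothetical location for the extracted archive:

```yaml
# filebeat.yml - one-off import of an extracted log dump
filebeat.inputs:
  - type: filestream
    id: archived-logs
    paths:
      - /data/extracted-logs/*.log   # hypothetical directory where you untarred the dump

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # same assumption as above
```

Filebeat also has a --once flag that makes it exit after the configured inputs have been read to the end, which can be handy for one-off imports.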
For more information, please read the Elastic documentation carefully and follow the steps to configure your log agents.