When you store logs in Elasticsearch you generally do not store the whole file as a unit (unless the files are very small); instead, each line is stored as a separate document. You can, however, use a multiline processor to group related lines, e.g. a stack trace, into a single document.
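A rough sketch of what such multiline grouping does, assuming a made-up log format where every new event starts with an ISO date and any other line is a continuation (similar in spirit to Filebeat's multiline settings, but not their actual implementation):

```python
import re

# Lines that do NOT start with a timestamp are treated as continuations
# of the previous event, so a stack trace stays with its log line.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} ")

def group_multiline(lines):
    event = []
    for line in lines:
        if TIMESTAMP.match(line) and event:
            yield "\n".join(event)   # previous event is complete
            event = []
        event.append(line)
    if event:
        yield "\n".join(event)       # flush the last event

lines = [
    "2024-01-01 12:00:00 ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Main.run(Main.java:42)",
    "2024-01-01 12:00:01 INFO recovered",
]
events = list(group_multiline(lines))
# events[0] is the error line plus both stack-trace lines as one document;
# events[1] is the single INFO line.
```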
Elasticsearch accepts JSON documents, so you need something, e.g. Filebeat or Logstash, that converts log lines into documents that can be indexed.
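To make the "log line to JSON document" step concrete, here is a minimal sketch; the log format and field names are made up for illustration, and in practice Filebeat or Logstash does this for you:

```python
import json
import re

# Hypothetical line format: "<date> <time> <LEVEL> <message>"
LINE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>\w+) (?P<message>.*)$")

def to_document(line):
    m = LINE.match(line)
    if m is None:
        return {"message": line}   # fall back to an unparsed document
    return m.groupdict()

doc = to_document("2024-01-01 12:00:00 ERROR disk full")
body = json.dumps(doc)             # this JSON is what gets indexed
```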
I do not understand what you mean. Please explain in more detail and provide an example, or show the flow you are expecting: where are the logs coming from, where are they moved to, and how are they processed?
If the client sends the raw file to the server, set up Filebeat and/or Logstash on the server and index the data into Elasticsearch as I described earlier. For a practical guide on doing this with Logstash, have a look at this blog post. It is old, but I think it is still generally valid.
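As a sketch only, a Logstash pipeline for that setup might look roughly like this; the path, grok pattern, hosts and index name are all placeholders you would adapt to your own logs:

```conf
# Tail the uploaded files, parse each line, index one document per line.
input {
  file {
    path => "/var/log/uploads/*.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```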
You will not store whole log files in Elasticsearch, so if you need to keep them I suspect you need to copy the files into appropriate storage, e.g. using a custom script. That is not something Logstash or Filebeat does as far as I know.
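The "custom script" part could be as simple as the following sketch, which keeps the original files in plain file storage while only the parsed lines go into Elasticsearch; the paths are whatever fits your setup:

```python
import shutil
from pathlib import Path

def archive_logs(src, dest):
    """Copy every *.log file from src into dest, preserving timestamps."""
    src, dest = Path(src), Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(src.glob("*.log")):
        shutil.copy2(f, dest / f.name)   # copy2 keeps mtime/atime
        copied.append(f.name)
    return copied
```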
If you are looking for Elasticsearch to store complete log files, I suspect you are looking at the wrong tool.
Write a script that transforms the log into one big string and index that into a single field. Possible, but pretty useless: searching through it or performing any aggregation becomes really slow.
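For clarity, this is the whole-file-in-one-field approach being discouraged here, as a minimal sketch with made-up field names:

```python
from pathlib import Path

def file_to_document(path):
    """Stuff the entire file into a single field of a single document."""
    path = Path(path)
    return {"filename": path.name, "content": path.read_text()}
```

One document per file means full-text search has to scan huge strings and there are no discrete fields to aggregate or visualize on.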
It is probably better, as @Christian_Dahlqvist pointed out, to structure that log and split it into specific fields: one log line per document, each document with fields like "LogCreationTime; LogErrorText; user; ip; errorcode" etc. What's the use of having all the logs in there without being able to use them?
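A sketch of that per-line, per-field structure, using the field names suggested above on a hypothetical log format:

```python
import re

# Hypothetical line format: "<timestamp> <user> <ip> <errorcode> <error text>"
LINE = re.compile(
    r"^(?P<LogCreationTime>\S+) (?P<user>\w+) (?P<ip>[\d.]+) "
    r"(?P<errorcode>\d+) (?P<LogErrorText>.*)$"
)

def parse(line):
    """One log line becomes one document with discrete, queryable fields."""
    m = LINE.match(line)
    return m.groupdict() if m else None

doc = parse("2024-01-01T12:00:00 alice 10.0.0.5 500 disk full")
# Each field can now be searched, filtered and aggregated on its own.
```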
Yeah sure, logs can get pretty big. But I am on "your side" anyway: it's stupid to store logs in one huge chunk that is slow to search and impossible to visualize. Data is useless if it is not stored nice and tidy.