If you've got Logstash running successfully, you'll have seen that the actual indexing is done by Logstash, not by you running curl to upload a log file.

As Mark mentioned before, Elasticsearch only accepts data as JSON, so people usually use scripts or a tool like Logstash or Fluentd to read data in its original format (logs, in your case) and transform it into a shape that reflects what you want to do with it in Elasticsearch.

With Logstash, the simplest config just reads the log file, parses each line (assuming one log record per line), and wraps it in a JSON document with a timestamp and a message field. You mention you're using log4j, so it won't always be one line per record: when errors occur, stack traces span multiple lines. The Logstash file input can be configured with the multiline codec to split log records based on a regex pattern.
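To make that concrete, a minimal single-line config might look something like this. It's a sketch only, not tested against your logs; the path, grok pattern, and index name are placeholders you'd adjust for your setup:

```
input {
  file {
    path => "/var/log/myapp/*.log"    # hypothetical log location
    start_position => "beginning"     # also read content already in the file
  }
}

filter {
  # Pull the timestamp and level out of each line; this pattern assumes a
  # log4j layout like "2016-01-01 12:00:00,123 INFO some message" and will
  # need tuning to match your actual ConversionPattern.
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  # Use the parsed timestamp as the event's @timestamp.
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]       # assumes a local Elasticsearch node
    index => "myapp-logs"             # hypothetical index name
  }
}
```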
To see how the multiline codec can be used, see https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html. The example there uses the stdin input, but it should work with the file input as well.
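Adapted to the file input, it might look roughly like this. The regex assumes every new log record starts with a timestamp, so any line that doesn't (a stack trace line, say) gets folded into the previous event:

```
input {
  file {
    path => "/var/log/myapp/*.log"    # same hypothetical path as above
    codec => multiline {
      # Lines that do NOT start with a timestamp belong to the previous
      # record -- this is what keeps stack traces together as one event.
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
```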
Once you've tested a Logstash config, all you really need is Logstash running and monitoring a directory of log files; it will take care of turning each record into a JSON document and indexing it into Elasticsearch.
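Running it against a saved config is then a one-liner from the Logstash install directory (the config file name here is just a placeholder):

```
bin/logstash -f myapp.conf
```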
You might be interested in the getting started with Logstash page in the docs: https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
Longer term, you're welcome to sign up for the training, although a lot of the information is already on the site: https://www.elastic.co/guide/index.html
Hope this helps.