I am trying to parse Amazon S3 access logs that are split into multiple chunks. By multiple chunks I mean there are thousands of separate files, each containing data. The data in these files is not JSON. I am having a tough time figuring out how to parse these files with Logstash. Below is the Logstash config I am using, but apparently I do not see any data being logged to Elasticsearch.
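(The original config is not shown in this thread. For context, a minimal sketch of the kind of pipeline involved, using the stock `S3_ACCESSLOG` grok pattern and a hypothetical bucket name, might look like:)

```
input {
  s3 {
    bucket => "my-log-bucket"   # hypothetical bucket name
    prefix => "logs/"           # hypothetical key prefix
    region => "us-east-1"
  }
}

filter {
  grok {
    # S3_ACCESSLOG ships with the standard grok pattern set
    match => { "message" => "%{S3_ACCESSLOG}" }
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
}
```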
This is what I get now after adding a port to the Logstash config. But what I don't understand is that even though it reports adding something to Elasticsearch, I do not see any index being created in Elasticsearch.
The reason I am on the old protocol is that our current Elasticsearch is on ES 1.5 and I did not want to upgrade it for now, so I started using a compatible version of ES.
When you say switch to the http protocol, does this mean I should specify protocol => "http" in the output section of the Logstash config?
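(Yes, on Logstash 1.x the `elasticsearch` output accepts a `protocol` setting; with a hypothetical localhost ES, the output section would look something like:)

```
output {
  elasticsearch {
    host => "localhost"     # hypothetical ES host
    port => 9200            # HTTP port, rather than the 9300 transport port
    protocol => "http"      # use the HTTP protocol instead of node/transport
  }
}
```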
After upgrading to LS 1.5.3 and adding protocol => "http", the S3 logs were ingested successfully. Somehow the LS 1.5.0 I had before was crashing after a few successful runs.