Filebeat: how to archive data or reduce primary shards

I'm brand new to this, so sorry if I use a term wrong. I have set up Filebeat on one client to send IIS logs directly to Elasticsearch. The problem is I'm getting hammered with data, about 500 MB in 30 minutes. My IIS logs do get pretty large, anywhere from 300 MB to 1 GB in a day. Is there a way I can archive this data in Elasticsearch, or are there other best practices I should follow? In the first 30 minutes I already have a doc.count of 624,956. I also noticed there are 5 primary shards, each with about the same amount of data. Are these copies of each other? Can I get that down to just 1 primary? Also, the configs for both Elasticsearch and Filebeat are all at their defaults. Thank you for your help.

@timmorris83 This question sounds more related to Elasticsearch, so I've moved the discussion to the Elasticsearch section.

@timmorris have you considered filtering out the documents being indexed by Filebeat? See the exclude_lines/include_lines options in the docs.
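Something like this in filebeat.yml, for example. This is a minimal sketch, assuming a recent Filebeat (`filebeat.inputs`; older releases use `filebeat.prospectors`) and example regex patterns and paths you would replace with your own:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - 'C:\inetpub\logs\LogFiles\*\*.log'   # adjust to your IIS log location
    # exclude_lines drops any line matching these regexes before it is shipped:
    # here, IIS header lines (they start with #) and static-asset requests (example patterns)
    exclude_lines: ['^#', '\.css ', '\.js ', '\.png ', '\.gif ']
    # Alternatively, include_lines keeps ONLY matching lines, e.g. server errors and POSTs:
    # include_lines: [' 5[0-9]{2} ', ' POST ']
```

Filtering on the Filebeat side means the dropped lines never reach Elasticsearch at all, which directly cuts your index size and document count.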

Check out Elasticsearch Curator: Curator Reference [8.0] | Elastic
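For cleanup or archiving, Curator runs against an actions file. Here's a rough sketch, assuming the default daily Filebeat indices named filebeat-YYYY.MM.DD and a 30-day retention that you would adjust to your own needs:

```yaml
# delete_old_filebeat.yml, run with: curator --config config.yml delete_old_filebeat.yml
actions:
  1:
    action: delete_indices
    description: Delete filebeat-* indices older than 30 days, based on the index name.
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: filebeat-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```

Curator can also close or snapshot indices instead of deleting them, if you'd rather keep the older data around in an archived form.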

No, they are not copies of each other; that is the entire data set split across the 5 primary shards (replicas are the copies, primaries are partitions of the data).
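As for getting down to 1 primary: the shard count is an index setting, and you can override it for future Filebeat indices with an index template. A rough sketch below, with a hypothetical template name; depending on your Elasticsearch version the pattern key is `template` (5.x and earlier) or `index_patterns` (6.x+), and it only applies to indices created after the template exists:

```
PUT _template/filebeat-shards
{
  "index_patterns": ["filebeat-*"],
  "order": 1,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

Existing indices keep their current shard count; only the next day's filebeat-* index (and later ones) will be created with a single primary.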

Thanks for this tip, it has definitely helped. I just need to tweak it some more. Initially I was going to use the IIS logs to understand the response times my customers are seeing, but this doesn't seem like the right approach. Do you know of a Beat that is made for web traffic analysis, such as response times?