Dealing with gzipped multiline log files (memory problems)

I'm using an exec input whose command zcats the topmost file in a directory every x seconds (deleting the file once it's been processed), with a multiline codec to combine the lines into events, which I then parse into JSON. I should mention these .gz files are ~4 MB each.
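For reference, here's a sketch of what the pipeline looks like (the paths, interval, and multiline pattern below are illustrative placeholders, not my exact config):

```
input {
  exec {
    # zcat the topmost .gz file in the directory, then delete it
    command => "f=$(ls /var/log/myapp/*.gz | head -n 1); zcat \"$f\" && rm \"$f\""
    interval => 30
    codec => multiline {
      # assumption: each new log entry starts with an ISO8601 timestamp
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}

filter {
  json {
    source => "message"
  }
}
```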

While parsing the first file, Logstash sits at around 270 MB of RAM and steadily climbs by ~10 KB per second. Once it hits the second file, usage spikes to over 400 MB, and I assume it keeps growing with each subsequent file.

Why is this happening, and how do I stop it?