With 3 TB disks on each node, by default you would be able to use about 85% of that capacity before Elasticsearch stops allocating shards to the node. That gives you roughly 2.5 TB per disk, or about 10 TB of usable space in total.
So, the first thing you should do is change the watermark levels used by your cluster to increase the amount of usable disk space.
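You can do that with a cluster settings request. The percentages below are just an example, pick values that make sense for your nodes:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
```

Keep in mind that running nodes this close to full leaves little headroom for segment merges and shard recoveries.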
To make things easier, let's assume that you can use all 3 TB of each disk.
Even using the entire 12 TB, you still wouldn't be able to store 90 days of data with replicas: at 100 GB per day, 90 days is 9 TB of primary data, which doubles to 18 TB with one replica.
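Just to make the math explicit, a quick sketch:

```python
# Capacity math: retention x daily volume, doubled by one replica.
days = 90
daily_gb = 100
replica_factor = 2  # one replica doubles the stored data

primary_tb = days * daily_gb / 1000     # 9.0 TB of primary data
total_tb = primary_tb * replica_factor  # 18.0 TB needed in total
print(f"Required capacity: {total_tb} TB")  # Required capacity: 18.0 TB
```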
You would then need to reduce the size of your indices, and there are a couple of ways to do that. The first one, which I would consider mandatory, is to check your mappings.
If you are using dynamic mappings in your indices, you are probably wasting space because string fields are mapped twice, as both text and keyword. Check your data and map each field according to how it is actually used; this can help reduce the index size.
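For example, a field that is only used for exact filtering and aggregations can be mapped as keyword only, and a field that is only searched as full text can be text only. The index and field names here are just placeholders:

```
PUT my-index
{
  "mappings": {
    "properties": {
      "status":  { "type": "keyword" },
      "message": { "type": "text" }
    }
  }
}
```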
Also, you didn't provide any information on how you are indexing your data, but another thing that helps is checking whether you keep the original message after parsing it. If you do, I would suggest removing it after parsing; this can also reduce the size of your index, by a huge percentage in most cases.
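If you happen to be using an ingest pipeline, for example, a remove processor can drop the raw field after parsing. The pipeline name, field name, and grok pattern below are just an illustration, not your actual setup:

```
PUT _ingest/pipeline/parse-and-drop
{
  "processors": [
    { "grok":   { "field": "message", "patterns": ["%{COMBINEDAPACHELOG}"] } },
    { "remove": { "field": "message" } }
  ]
}
```

The same idea applies in Logstash with a mutate/remove_field filter after your parsing filters.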
Another option would be to change the compression of the index after some time, for example using ILM to switch to best_compression after 30 days.
I'm not sure it would help much, as this depends a lot on your data and the default compression is already pretty good, but you will need to try it. Just keep in mind that it will also have some impact on performance.
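The ILM forcemerge action can apply best_compression when the index moves to the warm phase; the policy name and timing below are just an example:

```
PUT _ilm/policy/logs-compress
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "30d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1,
            "index_codec": "best_compression"
          }
        }
      }
    }
  }
}
```

Note that the force merge itself is I/O heavy, so it is best done on indices that are no longer being written to.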
There is no magic here: with your requirements you need at least 18 TB of disk, which you do not have. You will need to test these things to see if they reduce the size of your indices, and even that assumes your daily ingestion will not change.