These numbers really depend on your usage (ingestion, but also search/queries/aggregations), your data, and your hardware; the only way to answer is by testing...
For JVM heap:
With 30GB RAM on a one-node cluster, you can go up to 50%, so up to 15GB... You could start with 4GB or 8GB and then see if this is enough (i.e. whether you hit any issues related to lack of memory). Memory not used by the Elasticsearch JVM will be used for the filesystem cache, so there is no wasted RAM.
Also note that a larger heap means longer garbage collection pauses (depending on the number of CPUs you have); the node won't respond during a GC pause, so you want to factor this in.
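As a concrete sketch of the 8GB starting point above, the heap is set in `config/jvm.options` (adjust the value after testing on your own workload):

```
## Set min and max heap to the same value so the heap is
## fully allocated at startup and never resized at runtime.
-Xms8g
-Xmx8g
```

Keeping `-Xms` and `-Xmx` identical avoids resize pauses, and staying well under 32GB keeps compressed object pointers enabled.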
For sizing, there is no quick answer and the best approach is to use the capacity planning document. A good size of primary shard for performance is often 30GB to 50GB, but it could be lower or higher depending on your ingestion/search SLA and usage: https://www.elastic.co/guide/en/elasticsearch/guide/current/capacity-planning.html
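To check whether your shards actually land in that range, the `_cat/shards` API lists per-shard store size (this example assumes a node reachable on `localhost:9200`):

```
curl "localhost:9200/_cat/shards?v&h=index,shard,prirep,store&s=store:desc"
```

The `h` parameter picks the columns and `s=store:desc` sorts so your biggest shards appear first.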
If one day of data is very small and you are short on resources (CPU and disk I/O, especially if your disk is a spinning disk as opposed to an SSD; this also applies to having a one-node cluster), you might go for weekly/monthly indices so you have fewer shards on your node, or add more nodes to your cluster.
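If you ingest via Logstash, switching from daily to monthly indices is just a change to the date pattern in the index name (the `logs-` prefix and host below are placeholders for your own setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # "%{+YYYY.MM.dd}" would give daily indices;
    # "%{+YYYY.MM}" gives one index per month, so far fewer shards.
    index => "logs-%{+YYYY.MM}"
  }
}
```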
Note that the number of primary shards cannot be changed without reindexing, but with daily indices you will use an index template, so you can increase/decrease the value for new indices. That makes it easy to revisit the shard count later if one primary shard turns out to be too few...
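A minimal template along those lines might look like this (the `logs` name and `logs-*` pattern are placeholders; note that on older versions the pattern key is `template` rather than `index_patterns`):

```
PUT _template/logs
{
  "index_patterns": ["logs-*"],
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}
```

Changing `index.number_of_shards` here only affects indices created after the change; existing indices keep their shard count, which is why daily/monthly indices make this adjustment painless.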