Feasible snapshot solution for 30 GB of data that grows every day

Thanks Armin for the reply. Could you please help us understand your statement:

"If you just take snapshots to the same repository, each snapshot will try to reuse as much data as possible from prior snapshots automatically."

Our understanding is as follows. Suppose there is one index that initially has three records, as shown below.

    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "2",
      "_score": 1,
      "_source": {
        "Empid": "2",
        "Name": "BCD"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "1",
      "_score": 1,
      "_source": {
        "Empid": "1",
        "Name": "ABC"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "3",
      "_score": 1,
      "_source": {
        "Empid": "3",
        "Name": "EFG"
      }
    }

Now we take a snapshot of the above index into snapshot-1.
The next day, a few new records are inserted (records with EmpId 4 and 5) and one record is updated (the record with EmpId 2), so the final state of the index is as below.

    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "2",
      "_score": 1,
      "_source": {
        "Empid": "2",
        "Name": "NEW_BCD"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "1",
      "_score": 1,
      "_source": {
        "Empid": "1",
        "Name": "ABC"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "3",
      "_score": 1,
      "_source": {
        "Empid": "3",
        "Name": "EFG"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "4",
      "_score": 1,
      "_source": {
        "Empid": "4",
        "Name": "LMN"
      }
    },
    {
      "_index": "test_inx",
      "_type": "doc",
      "_id": "5",
      "_score": 1,
      "_source": {
        "Empid": "5",
        "Name": "XYZ"
      }
    }
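
For concreteness, the day-2 changes above could be applied with requests roughly like these (just a sketch; the document-type segment in the URL may differ depending on the Elasticsearch version):

    # update the existing document with EmpId 2
    PUT test_inx/doc/2
    {
      "Empid": "2",
      "Name": "NEW_BCD"
    }

    # insert the two new documents with EmpId 4 and 5
    PUT test_inx/doc/4
    {
      "Empid": "4",
      "Name": "LMN"
    }

    PUT test_inx/doc/5
    {
      "Empid": "5",
      "Name": "XYZ"
    }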

Next, we take a new snapshot, snapshot-2, into the same repository.
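
For reference, the two snapshots we are describing would be created roughly like this (my_backup_repo is only a placeholder for the name of our registered repository):

    # day 1: snapshot of the initial three documents
    PUT _snapshot/my_backup_repo/snapshot-1?wait_for_completion=true
    {
      "indices": "test_inx"
    }

    # day 2: snapshot of the same index after the inserts and the update
    PUT _snapshot/my_backup_repo/snapshot-2?wait_for_completion=true
    {
      "indices": "test_inx"
    }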

Now, my concern is what happens in the background. The records with EmpId 1 and 3 are unchanged between the two snapshots, so are they stored again in snapshot-2? Or, as per your statement that it reuses as much data as possible from the prior snapshot, will these two records be taken from snapshot-1 when we try to restore snapshot-2?
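
For clarity, by "restore snapshot-2" we mean a request along these lines (my_backup_repo again being a placeholder repository name):

    # restore the index from the second snapshot
    POST _snapshot/my_backup_repo/snapshot-2/_restore
    {
      "indices": "test_inx"
    }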

If all the records are stored again in snapshot-2, is it then possible to delete snapshot-1? Otherwise, the repository size will keep increasing over time if it stores the same records multiple times.
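
In other words, once snapshot-2 completes, would it be safe to run the following without losing the data that the two snapshots have in common?

    # delete the older snapshot; my_backup_repo is a placeholder repository name
    DELETE _snapshot/my_backup_repo/snapshot-1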