How to feed an Elasticsearch index from Python?

Hello! Noob here trying to do something.
Hope you are having a nice day!

I am trying to import data from a Python script into an Elasticsearch index in two parts. The first part must read all the existing data, process it (subtractions, multiplications and divisions), and finish by posting the resulting dataframe to Elastic Cloud. The second part must take the current data every so often (every 10 min), run the same process, and accumulate the new data each time the script finishes, without losing the earlier data, so I can graph everything in Kibana. The data goes back to February.
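To make the intended flow concrete, this is roughly the shape of the script I have in mind (the function names here are just placeholders for illustration, not my real code):

```python
def run_first_part():
    # Backfill: read all existing data (back to February), run the math,
    # and index the whole resulting dataframe.
    return "backfill"

def run_second_part():
    # Incremental: take the latest data, run the same math, and append
    # the new result without losing what is already indexed.
    return "append"

first = run_first_part()
# In the real script this loop runs forever with time.sleep(600) between
# passes; it is bounded here just to illustrate the order of operations.
results = [run_second_part() for _ in range(2)]
print(first, results)
```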

I'm using eland to get data from Elasticsearch, process it with pandas, and then pandas_to_eland (eland documentation here) to send the result to Elastic Cloud. Of course, the index has an index pattern and a pipeline. The problems are:

  1. In the first part, although the index is taking all the data (33k rows), it is not picking up the timestamp field. The Discover section shows this:


    The time bar chart is not displayed and I can't graph the latest data in Kibana.

  2. The index is losing the new incoming data: when I run the second part to send the latest row of data, it is not added as a new row.
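One thing I'm not sure about for problem 1: whether "@timestamp" needs to be a real datetime dtype on the pandas side before calling pandas_to_eland, rather than relying only on the es_type_overrides entry. A minimal sketch of the conversion I mean (the sample values are made up):

```python
import pandas as pd

# Toy stand-in for my processed frame; the real one comes out of
# eland_to_pandas plus the KPI math.
datos = pd.DataFrame({
    "@timestamp": ["2023-02-01T00:00:00", "2023-02-01T00:10:00"],
    "system_score": [0.91, 0.88],
})

# Convert the column to a real datetime64 dtype before indexing, so the
# values are sent as proper dates and not plain strings.
datos["@timestamp"] = pd.to_datetime(datos["@timestamp"])

print(datos["@timestamp"].dtype)  # datetime64[ns]
```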

Here is what I'm doing for the first part:

import time

import eland as ed

# Get data from Elasticsearch (es is my existing Elasticsearch client)
eland_data = ed.DataFrame(es, "index_name")  # client, source index

# To pandas
pandas_data = ed.eland_to_pandas(eland_data)

#=====================================================================
# Some processing with pandas_data that produces datos
#=====================================================================

df = ed.pandas_to_eland(
    pd_df=datos,
    es_client=es,

    # Where the data will live in Elasticsearch
    es_dest_index="nps_prueba_4",

    # Type overrides so these columns are mapped with the intended
    # Elasticsearch types instead of whatever is auto-detected.
    es_type_overrides={
        "@timestamp": "date",
        "kpi_latency_peso": "float",
        "kpi_local_latency_peso": "double",
        "kpi_local_jitter_peso": "double",
        "kpi_packet_loss_peso": "double",
        "bw_capacity_down_peso": "double",
        "bw_capacity_up_peso": "double",
        "dispo": "double",
        "system_score": "double"
    },

    # If the index already exists what should we do?
    es_if_exists="replace",

    # Wait for data to be indexed before returning
    es_refresh=True,
)

time.sleep(600)  # 10 min until the next data pull

The second part is similar, but I take just the last 2 rows of the eland dataframe, average them, run the process, and send a single row of data to the same index; this part runs forever. I'm using (again) the same code as before, only changing the "es_if_exists" field.
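To be explicit about what I mean by averaging the last 2 rows (with made-up numbers; the real columns are the KPI fields in es_type_overrides below):

```python
import pandas as pd

# Toy stand-in for the frame pulled back from the index:
pandas_data = pd.DataFrame({
    "system_score": [0.90, 0.92, 0.88],
    "dispo":        [0.99, 0.98, 1.00],
})

# Average the last 2 rows into a single new row to send to the index:
datos = pandas_data.tail(2).mean().to_frame().T
print(len(datos))  # 1
```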

df = ed.pandas_to_eland(
    pd_df=datos,
    es_client=es,

    # Where the data will live in Elasticsearch
    es_dest_index="nps_prueba_4",

    # Type overrides so these columns are mapped with the intended
    # Elasticsearch types instead of whatever is auto-detected.
    es_type_overrides={
        "@timestamp": "date",
        "kpi_latency_peso": "float",
        "kpi_local_latency_peso": "double",
        "kpi_local_jitter_peso": "double",
        "kpi_packet_loss_peso": "double",
        "bw_capacity_down_peso": "double",
        "bw_capacity_up_peso": "double",
        "dispo": "double",
        "system_score": "double"
    },

    # If the index already exists what should we do?
    es_if_exists="append",

    # Wait for data to be indexed before returning
    es_refresh=True,
)
time.sleep(600)  # 10 min until the next data pull
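For problem 2, one thing I suspect (just a guess on my part): pandas_to_eland defaults to use_pandas_index_for_es_ids=True, so the document _id comes from the pandas index. My second part builds a fresh one-row frame on each run, so its index is always 0, and every "append" would overwrite the same document instead of adding a new one. A sketch of what I mean:

```python
import pandas as pd

# Two consecutive runs of the second part each build a fresh one-row frame:
run_1 = pd.DataFrame({"system_score": [0.91]})
run_2 = pd.DataFrame({"system_score": [0.88]})

# Both frames have pandas index 0, so with the default
# use_pandas_index_for_es_ids=True both documents would get _id "0"
# and the second write would replace the first.
print(list(run_1.index), list(run_2.index))  # [0] [0]

# A possible fix (my assumption): give each run a unique index before
# pandas_to_eland, e.g. use the row's timestamp as the index.
run_2.index = pd.to_datetime(["2023-02-01T00:10:00"])
```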

I don't know what I'm doing wrong.