Elasticsearch losing data

My architecture is Kafka -> Logstash -> Elasticsearch. I wrote some Ruby code in Logstash to calculate a KPI. Everything was working fine, but a few days later that KPI (and only that KPI) was reset to 0. I had also created some scripted fields in Kibana; I somehow messed them up and immediately saw data loss in my saved searches, not all of them but some. So I deleted those scripted fields and created proper ones, which worked, but after some time the same issue came back.

This is my Ruby code at the Logstash level:

    mutate { add_field => { "[@metadata][task]" => "constant" } }
    aggregate {
        task_id => "%{[@metadata][task]}"
        code => '
            # running total kept in the in-memory aggregate map
            map["total"] ||= 0
            t = event.get("[payload][type]")
            s = event.get("[payload][status]")
            if t == "topup" && s == "success"
                map["total"] += event.get("[payload][amount]")
            elsif t == "cashout" && s == "success"
                map["total"] -= event.get("[payload][amount]")
            end
            event.set("total", map["total"])
        '
    }
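That map only lives in Logstash's memory, so anything that clears it (a Logstash restart, or an aggregate task timeout if one applies) makes the total start over at 0. A minimal standalone sketch of the same running-total logic, with hypothetical events for illustration:

```ruby
# Sketch of the running total the aggregate filter maintains.
# The events below are made up; field names mirror the pipeline above.
def apply(map, event)
  map["total"] ||= 0
  if event["type"] == "topup" && event["status"] == "success"
    map["total"] += event["amount"]
  elsif event["type"] == "cashout" && event["status"] == "success"
    map["total"] -= event["amount"]
  end
  map["total"]
end

map = {}
events = [
  { "type" => "topup",   "status" => "success", "amount" => 100 },
  { "type" => "cashout", "status" => "success", "amount" => 30 },
  { "type" => "topup",   "status" => "failed",  "amount" => 50 },
]
totals = events.map { |e| apply(map, e) }
# totals => [100, 70, 70]; with a fresh (empty) map the total restarts at 0
```

If the map is ever recreated empty, every event indexed afterwards carries a total computed from 0, which matches the symptom described.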

The scripted fields were just plain doc["fieldname"] expressions; I only used them to rename some fields.

What problem are you having with that aggregate filter?

@Badger I don't know yet, but after a while the total field gets reset to 0. I have to delete the index and reindex to get the correct value back.
PS: I'm using --pipeline.workers 1

I would only expect that to happen when Logstash restarts.

@Badger I never restarted the server. I have an EC2 instance; do you think it gets restarted when Amazon automatically backs it up? That seems unlikely, because if it had been restarted, all the other services would have gone down and would not have restarted automatically. And what about the data loss I'm facing? Is there anything I can provide so you can reproduce my problem?

Is there another way I can calculate that KPI, perhaps in Elasticsearch itself?
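One common alternative is to compute the KPI at query time with Elasticsearch aggregations instead of maintaining a running total at ingest, so nothing in Logstash memory can reset it. A hedged sketch of such a request body, built as a Ruby hash; the field names come from the pipeline above, and everything else (how you send it, the index name) is assumed:

```ruby
require "json"

# Sketch of a query-time KPI: sum topup amounts and cashout amounts
# over successful events, then subtract. Field names (payload.type,
# payload.status, payload.amount) are taken from the pipeline above.
query = {
  size: 0,
  query: { term: { "payload.status" => "success" } },
  aggs: {
    topups: {
      filter: { term: { "payload.type" => "topup" } },
      aggs: { amount: { sum: { field: "payload.amount" } } }
    },
    cashouts: {
      filter: { term: { "payload.type" => "cashout" } },
      aggs: { amount: { sum: { field: "payload.amount" } } }
    }
  }
}
# KPI = topups.amount - cashouts.amount, read from the response buckets.
puts JSON.pretty_generate(query)
```

Because the sums are recomputed from the indexed documents on every request, the value cannot drift or reset the way an in-memory running total can.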