Preprocessing of alerts to be sent using httppost

I am using a tool called Kapacitor to send alerts to Elasticsearch using its httppost method. However, in one list I am getting both a time column and a value column, with datatypes date and float respectively.

This causes an exception: illegal argument exception; cannot change from type long to date. I then tried converting both time and value to strings, but that gives a different error: failed to parse field of type float.

This is the alert -

{"time":"2022-08-24T06:00:00Z","duration":0,"level":"OK","data":{"series":[{"name":"kpi","tags":{"BID":"1017","_field":"key-field","cid":"labec17-1017","version":"6.0"},"columns":["time","_value"],"values":[["2022-08-24T06:00:00Z","100"]]}]},"previousLevel":"CRITICAL","recoverable":true}

Which approach can I use? I am new to Elasticsearch; so far I have come across Logstash and ingest pipelines. Using Kapacitor I could write to a file instead, but then I would have to manage the file size. An ingest pipeline will take me some time to learn. Can an ingest pipeline be used here, or is Logstash the only way?

I tried to format the alert using Kapacitor, but the error still shows up.

Thanks

Hi @Pratik1 Welcome to the community ...

Well you picked a hard problem to start with :slight_smile:

The problem comes from this section of the message

"columns":["time","_value"],"values":[["2022-08-24T06:00:00Z","100"]]

Arrays can be tricky in Elasticsearch... this could be very difficult to solve, or, if we understand the data, perhaps there is a simple workaround.

The issue is that all elements of a field's array have to be of the same type...

When Elasticsearch sees this (with the default settings)
["2022-08-24T06:00:00Z","100"]

It recognizes the first element "2022-08-24T06:00:00Z" as a date, sets the field type to date, and then fails on the 2nd element "100", which is not a date.

That is what is happening; it is a result of the default mapping that Elasticsearch tries to provide for you... in this case it cannot handle the 2 conflicting types.
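If you want to confirm what the dynamic mapping has already decided, you can inspect the index mapping (kapacitor-test below is just a placeholder for whatever index Kapacitor writes to):

GET kapacitor-test/_mapping

Whatever type was inferred from the first value that arrived for the field is the type every later value is checked against, which is why the conflicting element gets rejected.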

I am not sure what control you have over the data format coming in... perhaps that could be a fix.

In general it is best practice to create a mapping before you ingest data, exactly to avoid this type of problem... you should probably read about mappings here

Probably the simplest "hack / workaround" is to set the field mapping for the values to type keyword... then the ingest will work, but that date value will not be an actual date... we might be able to turn it into one with a runtime field... I see you also have a top-level time field that will be mapped as a date automatically.

Something like this, where kapacitor-test stands in for whatever index the data is being written to...

PUT kapacitor-test
{
  "mappings": {
    "properties": {
      "data": {
        "properties": {
          "series": {
            "properties": {
              "columns": {
                "type": "keyword"
              },
              "values": {
                "type": "keyword"
              }
            }
          }
        }
      }
    }
  }
}
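And if you later need the numeric part of values back as a real number, a search-time runtime field could parse it out of the keyword array. A rough sketch, assuming Elasticsearch 7.11+ and the keyword mapping above; series_value is just a name I made up and I have not tested this against your exact documents:

GET kapacitor-test/_search
{
  "runtime_mappings": {
    "series_value": {
      "type": "double",
      "script": "for (def v : doc['data.series.values']) { try { emit(Double.parseDouble(v)); } catch (NumberFormatException e) { continue; } }"
    }
  },
  "fields": ["series_value"]
}

The timestamp element fails to parse and is skipped, while the numeric element is emitted as a double; series_value then comes back in the fields section of each hit and can be used in aggregations.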

There are probably other solutions, but we would need to understand the data better. Are those columns and values always the same, or at least always the same number of them? Can you change the format on the sending side? etc.
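For example, if you could get the sender to emit each row as an object instead of a positional array, dynamic mapping would give each field its own type and the conflict would disappear. A hypothetical reshaped payload, just to illustrate (not something your current Kapacitor config produces):

"values": [
  { "time": "2022-08-24T06:00:00Z", "value": 100 }
]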

Thanks a lot. It worked.
