Unexpected JSON rejection

Hello! Please help me figure out whether this is a bug or something I can fix in my configuration.
I have an ELK 7.9.2 installation.
I'm sending logs from a Python app to Logstash, storing them in Elasticsearch, and visualizing them with Kibana.
My Logstash config is:

input {
    tcp {
        port => 5000
        codec => "json"
        type => "service"
    }
}
output {
    if [type] == "service" {
        elasticsearch {
            hosts => ["myelastic:9200"]
            index => "XYZ-%{+YYYY.MM.dd}"
        }
    }
}

The JSON is:

{
  "@timestamp": "2021-03-02T17:05:44.645Z",
  "@version": "1",
  "host": "pc",
  "level": "INFO",
  "logsource": "pc",
  "message": "127.0.0.1:40798 200",
  "pid": 119456,
  "program": "__main__.py",
  "type": "python-logstash",
  "tags": [
    "tag"
  ],
  "extra": {
    "func_name": "send",
    "interpreter": "python",
    "interpreter_version": "3.8.3",
    "line": 456,
    "logger_name": "logger",
    "logstash_async_version": "2.2.0",
    "path": "_.py",
    "process_name": "SpawnProcess-5",
    "thread_name": "MainThread",
    "status_code": 200,
    "scope": {
      "type": "http",
      "asgi": {
        "version": "3.0",
        "spec_version": "2.1"
      },
      "http_version": "1.1",
      "server": [
        "127.0.0.1",
        8000
      ],
      "client": [
        "127.0.0.1",
        40798
      ]
    }
  }
}

And I get this error in Logstash each time I send a JSON document like the one above:

"error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [extra.scope.client] cannot be changed from type [long] to [text]"

The message then appears in the Elasticsearch index, but not in Kibana.
But when I change

          "client": [
            "127.0.0.1",
            40798
          ]

to

          "client": [
            127.0.0.1,
            40798
          ]

removing the quotes around 127.0.0.1, then
the JSON goes through Logstash, appears in the Elasticsearch index, and shows up in Kibana, even though the "server" part of the JSON stays unchanged.

By the way, if I POST this JSON directly to Elasticsearch, I get the same "change type" error.

Please give me an idea of what is wrong and what I can change/tune so that JSON with any port number in the "client" part of it goes through Logstash to the Elasticsearch index with no errors and then appears in Kibana.
Thanks

All values in an array must be of the same type.

In Elasticsearch, there is no dedicated array data type. Any field can contain zero or more values by default, however, all values in the array must be of the same data type.
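As a rough sketch of that rule (a hypothetical helper, not part of Elasticsearch), classifying values by their JSON type shows why `["127.0.0.1", 40798]` is rejected:

```python
def values_share_one_json_type(values):
    """Elasticsearch has no dedicated array type: a field holding several
    values is only mappable when every value has the same JSON type.
    This sketch checks that, keeping booleans distinct from numbers."""
    kinds = set()
    for v in values:
        if isinstance(v, bool):
            kinds.add("boolean")
        elif isinstance(v, (int, float)):
            kinds.add("number")
        elif isinstance(v, str):
            kinds.add("string")
        else:
            kinds.add(type(v).__name__)
    return len(kinds) <= 1

print(values_share_one_json_type(["127.0.0.1", 40798]))  # mixed string/number
print(values_share_one_json_type(["127.0.0.1", "40798"]))
```

The first call prints `False` (mixed string and number, the situation in the question); the second prints `True`, which is why converting the port to a string makes the document indexable.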

So you would need to either change both values to strings, or separate them in Logstash or in your Python script if you need them to be different types.

Changing it to this, and mapping the fields as ip and long, would be ideal:

{
    "client": {
        "ip": "127.0.0.1",
        "port": 40798
    }
}
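If you can change the Python side, that restructuring could be sketched there before the record is sent (hypothetical helper name; assumes the record layout shown in the question):

```python
def restructure_endpoints(record):
    """Replace the [ip, port] arrays under extra.scope with {ip, port}
    objects, so Elasticsearch can map the ip as `ip` and the port as `long`."""
    scope = record.get("extra", {}).get("scope", {})
    for field in ("client", "server"):
        value = scope.get(field)
        if isinstance(value, list) and len(value) == 2:
            scope[field] = {"ip": value[0], "port": value[1]}
    return record
```

Applied to the document above, `extra.scope.client` becomes `{"ip": "127.0.0.1", "port": 40798}`, and the mixed-type array disappears.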

OK. But how can I tell Elasticsearch that 40798 is a string?
I have tried sending

      "client": [
        "127.0.0.1",
        "40798"
      ]

with this config

input {
    tcp {
        port => 5000
        codec => "json"
        type => "service"
    }
}
output {
    if [type] == "service" {
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "XYZ-%{+YYYY.MM.dd}"
        }
    } else {
        elasticsearch {
            hosts => ["elasticsearch:9200"]
            index => "OTHER-%{+YYYY.MM.dd}"
        }
    }
}

And that causes the message to land in the OTHER index. So I assume that means the JSON was considered wrong, but according to a JSON formatter it isn't.
So:
How do I tell Elasticsearch that my long integer is text? (I'm used to indicating that with quotes in bash, Python, etc.) What is the easiest way to do that in Elasticsearch/Logstash, without adding extra stuff like an additional mapping? Is that possible?

The easiest way is to add this in the filter block:

mutate {
    convert => {
        "[extra][scope][server][1]" => "string"
        "[extra][scope][client][1]" => "string"
    }
}
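If you would rather fix this before the event ever reaches Logstash, the same conversion can be sketched in the Python app (hypothetical helper; assumes the record layout from the question):

```python
def stringify_ports(record, fields=("client", "server")):
    """Convert the second element (the port) of each [ip, port] array under
    extra.scope to a string, mirroring the Logstash mutate/convert filter."""
    scope = record.get("extra", {}).get("scope", {})
    for field in fields:
        value = scope.get(field)
        if isinstance(value, list) and len(value) == 2:
            value[1] = str(value[1])
    return record
```

After this, `extra.scope.client` is `["127.0.0.1", "40798"]` — both values are strings, so the array maps to a single type.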

Thank you! I will try!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.