Json::ParserError: Unrecognized token

//////////////////////////////////THIS PART I SOLVED MYSELF ///////////////////////////////////////
I am still getting a broken token in my Logstash output, and I do not know where this (possibly timestamp-related) token comes from. Could someone offer some help?

Here is my Logstash error message:

[2020-01-14T16:11:31,022][ERROR][logstash.codecs.json ][main] JSON parse error, original data now in message field {:error=>#<LogStash::Json::ParserError: Unrecognized token 'mestamp': was expecting ('true', 'false' or 'null')
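For context on the error text: 'mestamp' is the tail of '@timestamp', i.e. the JSON that Logstash tried to parse began mid-field, as if the payload had been split or truncated in transit. A minimal Python repro of the symptom, just to illustrate:

import json

# a payload cut mid-way through "@timestamp" produces exactly this
# kind of "unrecognized token" complaint
try:
    json.loads('mestamp": "2020-01-14T16:11:31,022"}')
except json.JSONDecodeError as err:
    print(err)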

Here is what I am sending to Logstash via a Python script:

import logging
import sys

import logstash

LOGGER = logging.getLogger('python-logstash-logger')
LOGGER.setLevel(logging.DEBUG)

LOGGER.addHandler(logstash.LogstashHandler('127.0.0.1', 5000, version=1))

print(dir(logstash))
LOGGER.addHandler(logstash.TCPLogstashHandler('127.0.0.1', 5000, version=1))
LOGGER.error('python-logstash: test logstash error message.')
LOGGER.info('python-logstash: test logstash info message.')
LOGGER.warning('python-logstash: test logstash warning message.')
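As an aside: if I remember python-logstash correctly, LogstashHandler is the UDP variant, while the Logstash input used below is tcp, so attaching both handlers to the same port is questionable. A minimal sketch using only the TCP handler (same host/port as above):

import logging
import logstash

LOGGER = logging.getLogger('python-logstash-logger')
LOGGER.setLevel(logging.DEBUG)
# one handler only, matching the tcp {} input on port 5000
LOGGER.addHandler(logstash.TCPLogstashHandler('127.0.0.1', 5000, version=1))
LOGGER.info('python-logstash: test logstash info message.')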

Then I add extra fields to the Logstash message:

extra = {
    'test_string': 'python version: ' + repr(sys.version_info),
    'test_boolean': True,
    'test_dict': {'a': 1, 'b': 'c'},
    'test_float': 1.23,
    'test_integer': 1238888888,
    'test_list': [1, 2, '3'],
}

LOGGER.info("python-logstash: test extra fields", extra=extra)

And here is what I see in Kibana (as you can see, all the fields land in the 'message' field due to the error with that token):

How can I fix that token?

I tried to filter it out with mutate, but the issue is still there:

filter {
  mutate {
    add_field => { "test_string" => "Python version 1" }
    remove_field => ["timestamp"]
  }
}

or

filter {
  mutate {
    add_field => { "test_string" => "Python version 1" }
    remove_field => ["mestamp"]
  }
}
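A side note, assuming the json codec's default behavior: mutate runs after the codec, so it cannot remove fields from an event that was never parsed; the failed payload lands whole in message, typically with a _jsonparsefailure tag. A sketch that drops such events instead:

filter {
  if "_jsonparsefailure" in [tags] {
    drop { }
  }
}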

//////////////////////////////////////////////"solution"///////////////////////////////////////////////////////////////////////OK I AM PAST THAT PROBLEM NOW, the reason was too long integer in one of the extra fields created in py file, namely: 'test_integer': 1238888888,

///////////////////////////////////////N E W  P R O B L E M////////////////////////////////////////
The new issue is with the mapping. When I reduced that 'test_integer' value to 12, everything worked great yesterday, but today I am getting this Logstash error:

[2020-01-15T13:25:57,356][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"mylogstash", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x5c5416ed], :response=>{"index"=>{"_index"=>"mylogstash", "_type"=>"_doc", "_id"=>"dycqqW8BjlS_m8VD4TZl", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [test_list] of different type, current_type [text], merged_type [long]"}}}}

I suppose the mapping already incorporated that too-long integer, or?
How do I solve the current issue?
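If the culprit is really the mixed-type list [1, 2, '3'], a hedged workaround on the Python side is to normalize the elements before logging, so Elasticsearch sees one consistent type (a sketch, reusing the field names from the script above):

# cast every element to str so test_list always maps as text,
# never as long
extra['test_list'] = [str(x) for x in [1, 2, '3']]
LOGGER.info("python-logstash: test extra fields", extra=extra)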
************ my logstash mapping is: *******************

{
  "mapping": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "@version": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "host": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "level": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "logger_name": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "message": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "path": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "port": {
        "type": "long"
      },
      "tags": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "test_boolean": {
        "type": "boolean"
      },
      "test_dict": {
        "properties": {
          "a": {
            "type": "long"
          },
          "b": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      },
      "test_float": {
        "type": "float"
      },
      "test_string": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "type": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}

Can you provide the full index mapping in markdown, to make this readable?

Also I think you just have to re-create a new index with test_list of type list.

OK, I did that markdown.

Would what you mean basically amount to adding this to the mapping?:

 "test_list": {
        "type": "list",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },

My thought is that, since the error is in my opinion connected to the "mylogstash" index from the Logstash config file, I should somehow change the mapping for that index, no?

Here is my current logstash error:

[2020-01-15T13:58:15,340][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"mylogstash", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x15996dd7], :response=>{"index"=>{"_index"=>"mylogstash", "_type"=>"_doc", "_id"=>"gdZIqW8BgEpJs2Q7c-Od", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [test_list] of different type, current_type [text], merged_type [long]"}}}}

And my current logstash file:

input {
  tcp {
    port => 5000
    codec => json
  }
}

filter {
  mutate {
    add_field => { "test_string" => "Python version 1" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "mylogstash"
  }
}

Is this the source of my problem: index => "mylogstash"?
Or at which step is that index assigned to _index with the wrong type?

All I can see is that you have a type error with the "test_list" field, which apparently does not exist in your current mapping.

Try recreating the index with that field set to type text.

Reading the docs, the list datatype does not exist :wink:

You are so enigmatic, Sir! :slight_smile:
Anyway, thank you.
I hope this is how I should do it:

"test_list": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "text",
            "ignore_above": 256
          }
        }
      },
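For what it's worth, since Logstash re-creates the index on the next event after you delete it, one way to make such a mapping stick is an index template (a sketch, assuming ES 7.x and its legacy _template API; the template name is my own):

PUT _template/mylogstash_template
{
  "index_patterns": ["mylogstash*"],
  "mappings": {
    "properties": {
      "test_list": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}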

One more thing, however: where exactly can I find the .json responsible for that mapping?
[I have a lot of mapping.json files in the Kibana plugin folder: .\kibana_elasticsearch\kibana-7.4.1-windows-x86_64\x-pack\legacy\plugins]

I do not have any console where I could get the mapping, as described here

The cmd windows where I start ES and Kibana are not writable, and the same goes for Logstash (for the current tcp input).
Where are that json file and that console you talked about in the docs?

Thanks,

You're trying to edit the mappings.json?

Index mappings cannot be edited/updated; you have to create a new index with a custom mapping.

In order to do this, you have to query from the Kibana "Dev Tools" console and use the API:

PUT new-example
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      }
      [... insert your mapping ...]
    }
  }
}

Then run the command using the button (top right). Don't forget you have to delete the old index first.
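For example (assuming the index name from the Logstash config above):

DELETE mylogstash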

To see the current mapping of an index from the console:

GET example/_mapping

Ahaaa, now I understand! Thank you!

Still, asking for the current mapping gives me this error:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [example]",
        "resource.type" : "index_or_alias",
        "resource.id" : "example",
        "index_uuid" : "_na_",
        "index" : "example"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [example]",
    "resource.type" : "index_or_alias",
    "resource.id" : "example",
    "index_uuid" : "_na_",
    "index" : "example"
  },
  "status" : 404
}
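If I read that 404 right, "example" in the reply was just a placeholder; with the index name from the Logstash config the query would presumably be:

GET mylogstash/_mapping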

Although I can see the (different) mapping under Kibana -> Index Patterns, maybe I will try to grasp it myself from now on :slight_smile:

Anyway, you have helped me a ton!
BTW, I love those Le gendarme se marie or Le Gendarme de St. Tropez, I did not know someone still does! :smiley:
