json_errors in Kibana 5.5


I have observed that a few logs are not updating properly, and a new field, json_error, is showing up in the document details.

This never happened before and I have no clue why. What is the reason behind it?

Please help me solve it.


This looks like it's happening upstream - can you share a little info about what's ingesting the logs? It looks like it's consistent with using the filebeat json parser on invalid json. The message field in the first event for example is missing a {"some_property": "C
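To illustrate the behavior being described, here is a minimal Python sketch (a hypothetical helper, not filebeat's actual code) of what a per-line JSON decoder with an error key does when a line is cut off mid-object:

```python
import json

def decode_log_line(raw_line):
    """Hypothetical helper mimicking a json-mode log shipper: parse the
    line as JSON; on failure, keep the raw text and attach an error
    field instead of dropping the event."""
    try:
        return json.loads(raw_line)
    except ValueError as exc:
        return {"message": raw_line, "json_error": str(exc)}

# A complete JSON log line decodes cleanly:
print(decode_log_line('{"log": "service started", "stream": "stdout"}'))

# A line cut off mid-object comes back with a json_error field,
# much like the truncated {"some_property": ... event above:
print(decode_log_line('{"log": "2019-08-13 05:44:13:581*[DEBUG]*CoapSer'))
```

The second call never sees a closing quote or brace, so the decode fails and the event is shipped with the raw text plus an error annotation, which is consistent with what `json.add_error_key: true` produces in filebeat.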

Thanks @jbudz for the reply,

We are using filebeat to read the logs from a file and send them to Logstash, where a grok filter is applied; the output is then sent to Elasticsearch.

Can you please help me understand where I went wrong and why this is happening?


Can someone please help me solve it?

I believe we're cutting off the log at some point in between the file and elasticsearch. Is the file you're reading from output in json? Can you share the respective filebeat and logstash configurations?

@jbudz sorry for the late reply.

Please look at my configurations below and help me correct them if I did something wrong.


filebeat.registry_file: /var/log/containers/filebeat_registry

filebeat.prospectors:
  - input_type: log
    paths:
      - "/var/log/containers/*.log"
    exclude_files: ['filebeat1.*log','monitoring-influxdb.*log','influxdb.*log','weave-net.*log','esmaster.*log','esdata.*log','esclient.log','kubeproxy.log','kube.log','grafana.log','weave-npc.*log','weave.*log','weave-net.*log','jmxtrans.*log','kibana.*log','logstash.*log','kubernetes-dashboard.*log','qa.*log']
    symlinks: true
    json.message_key: log
    json.keys_under_root: true
    json.add_error_key: true
    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    multiline.negate: true
    multiline.match: after
    document_type: kube-logs
    reload.enabled: true
    reload.period: 10s

output.logstash:
  hosts: {LOGSTASH_HOSTS}
  timeout: 800
  bulk_max_size: 100

logging.level: {LOG_LEVEL}


input {
  beats {
    port => 5000
  }
}

filter {
  if [type] == "kube-logs" {

    mutate { rename => ["log", "message"] }

    date {
      match => ["time", "ISO8601"]
      remove_field => ["time"]
    }

    grok {
      match => { "source" => "/var/log/containers/%{DATA:pod_name}_%{DATA:namespace}_%{GREEDYDATA:container_name}-%{DATA:container_id}.log" }
      remove_field => ["source"]
    }

    if [message] =~ /\d{15}/ {
      grok {
        match => ["message", "%{TIMESTAMP_ISO8601:date}\*\[%{LOGLEVEL:log-level}\]\*%{DATA:thread}\*%{DATA:class}\*%{DATA:method}\*%{DATA:imei}\*%{DATA:token}\*%{GREEDYDATA:messagedata}"]
      }
    } else {
      grok {
        match => ["message", "%{TIMESTAMP_ISO8601:date}\*\[%{LOGLEVEL:log-level}\]\*%{DATA:thread}\*%{DATA:class}\*%{DATA:method}\*%{GREEDYDATA:messagedata}"]
      }
    }
  }
}

output {
  elasticsearch { hosts => ['http://localhost:9200'] }
}


Okay - I would start with the filebeat configuration here. Using multiline regexes and json parsing sounds like a decent recipe for parse errors. Do you have an example log of one of the failing errors?
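To make the "multiline plus JSON is a recipe for parse errors" point concrete, here is a small illustration (an assumed failure mode, not the exact filebeat internals): each Docker json-file line is valid JSON on its own, but a buffer that glues raw lines together no longer parses as a single document:

```python
import json

# Two consecutive lines as a json-file log driver would write them:
# each one is a complete JSON object on its own line.
line1 = '{"log": "2019-08-13 05:44:13:581*[DEBUG]*first half of a long entry", "stream": "stdout"}'
line2 = '{"log": "second half of the same entry", "stream": "stdout"}'

# Individually, both lines parse fine:
json.loads(line1)
json.loads(line2)

# But if multiline handling concatenates raw lines before the JSON
# decode sees them, the merged buffer is no longer one valid JSON
# document, so the event would be tagged with a json_error:
merged = line1 + line2
try:
    json.loads(merged)
    merge_broke_json = False
except ValueError:
    merge_broke_json = True

print("merged buffer parses as JSON:", not merge_broke_json)
# → merged buffer parses as JSON: False
```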

Thanks for the reply @jbudz,

The requested failing logs are below.

These logs are written properly at the source, but they are written very rapidly.

Sometimes I get the log properly, and a few logs of this kind end up in Kibana with json_errors.

Please help me solve it.

2019-08-13 05:44:13:581*[DEBUG]*CoapServer#11*org.eclipse.leshan.core.node.codec.DefaultLwM2mNodeDecoder*decodeTimestampedDataDecoding value for path /2050/0 and format ContentFormat [name=TLV, code=11542]: [-120, 2, 28, 72, 0, 25, -123, 1, 5, 57, -8, -37, 27, 0, 0, 1, 108, -120, 110, 102, -6, -125, 0, 2, -65, 0, 67, 37, 3, 0, -1]
2019-08-13 05:40:46:720*[DEBUG]*CoapServer#23*org.eclipse.leshan.server.cluster.RedisSecurityStore*getByEndpoint*990009621258709**
2019-08-13 05:38:15:936*[DEBUG]*pool-3-thread-1*org.eclipse.leshan.server.cluster.CassandraRegistrationStore*run*990009624192038**client cleaning regId:kswXQvQvOG check isAlive:true
2019-08-13 05:38:15:936*[DEBUG]*CoapServer#9*org.eclipse.leshan.server.californium.impl.RegisterResource*handlePOSTPOST received : CON-POST   MID=47664, Token=30bae3e1, OptionSet={"Uri-Path":["rd","Xpjrs5wHN2"]}, no payload
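As a sanity check that the log format itself matches the first grok branch, the pattern can be approximated with a Python regex and run against one of the sample lines. The named groups below mirror the grok field names; DATA and GREEDYDATA are approximated with `[^*]*` and `.*`, so this is an approximation, not grok's exact matching:

```python
import re

# Rough Python equivalent of the first (IMEI) grok pattern above.
PATTERN = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}:\d{3})"
    r"\*\[(?P<log_level>\w+)\]"   # [DEBUG]
    r"\*(?P<thread>[^*]*)"        # CoapServer#23
    r"\*(?P<cls>[^*]*)"           # fully qualified class name
    r"\*(?P<method>[^*]*)"        # method name
    r"\*(?P<imei>[^*]*)"          # 15-digit IMEI
    r"\*(?P<token>[^*]*)"         # often empty in the samples
    r"\*(?P<messagedata>.*)$"     # remainder of the line
)

sample = ("2019-08-13 05:40:46:720*[DEBUG]*CoapServer#23"
          "*org.eclipse.leshan.server.cluster.RedisSecurityStore"
          "*getByEndpoint*990009621258709**")

m = PATTERN.match(sample)
print(m.group("imei"))    # → 990009621258709
print(m.group("method"))  # → getByEndpoint
```

A complete, well-formed line like this one matches cleanly, which supports the idea that the json_errors come from lines being cut off or merged in transit rather than from the log format itself.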