Having issues parsing JSON

I am seeing a _jsonparsefailure tag when using the json codec in Logstash.

   "message" => "%{\"node_id_str\":\"xxxxxxx\",\"subscription_id_str\":\"Sub1\",\"encoding_path\":\"Cisco-IOS-XR-fib-common-oper:fib/nodes/node/protocols/protocol/vrfs/vrf/summary\",\"collection_id\":\"111078\",\"collection_start_time\":\"1559252578037\",\"msg_timestamp\":\"1559252578821\",\"data_json\":[{\"timestamp\":\"1559252578820\",\"keys\":[{\"node-name\":\"0/6/CPU1\"},{\"protocol-name\":\"ipv4\"},{\"vrf-name\":\"default\"}],\"content\":{\"pr

logstash config :

input {
    tcp {
        port => 57500
        codec => json
    }
}

output {
    stdout {
        codec => rubydebug
    }
}

That message is not valid JSON. The % at the start of [message] needs to be removed, and the message appears to be truncated. You could use mutate+gsub to remove the %. It is hard to tell what would fix the truncation; possibly a multiline codec, but that would then require at least one more gsub to remove the newlines.
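For example, something along these lines (a sketch, not tested; it assumes the JSON lives in [message] and that only the leading % is in the way):

filter {
    mutate {
        # strip the leading % so the payload becomes parseable JSON
        gsub => [ "message", "^%", "" ]
    }
    json {
        source => "message"
    }
}

Using the json filter instead of the json codec also keeps the raw text in [message] when parsing fails, which makes debugging easier.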

Thanks for the info. I will give that a try.

I tried this filter:

filter {
    mutate {
        gsub => [ "fieldname", "%", " " ]
    }
}

Logstash failed to parse the config and would not start!

[INFO ] 2019-05-31 14:03:22.115 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.1.1"}
[ERROR] 2019-05-31 14:03:23.761 [Converge PipelineAction::Create] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 12, column 20 (byte 252) after input {\n#via TCP encoded as JSON on port 57500 - \n tcp {\n port => 57500\n codec => json\n } \n tcp {\n port => 5432\n codec => json\n }\n filter {\n mutate ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[INFO ] 2019-05-31 14:03:24.239 [LogStash::Runner] runner - Logstash shut down.

You are missing a } to close the input {} section.

Hi,

This is my config file:

input {
    tcp {
        port => 57500
        codec => json
    }
    filter {
        mutate {
            gsub => [ "fieldname", "%", " " ]
        }
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

Error:

[INFO ] 2019-05-31 14:46:20.084 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.1.1"}
[ERROR] 2019-05-31 14:46:21.709 [Converge PipelineAction::Create] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, => at line 7, column 19 (byte 96) after input {\n tcp {\n port => 57500\n codec => json\n } \n filter {\n mutate ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[INFO ] 2019-05-31 14:46:22.201 [LogStash::Runner] runner - Logstash shut down.

As I said, you are missing a } to close the input section. Change this to

input {
    tcp {
        port => 57500
        codec => json
    }
}
filter {

Hi,

I thought the filter had to be part of the input section. Anyway, moving the filter section out of the input section in the config file seems to have solved this issue. Thanks for that.

I still see the JSON parse error with the filter set up as below:

      "tags" => [
    [0] "_jsonparsefailure"
]

filter {
    mutate {
        gsub => [ "fieldname", "%", "" ]
        auto_flush_interval => 5
    }
}

Another issue I noticed is that Logstash does not seem to do anything until I hit Ctrl+C. I tried the --pipeline.unsafe_shutdown option, but it does not help. More importantly, none of this TCP JSON data is showing up in Kibana, so I am not sure whether it is getting into Elasticsearch or not.

If there is a JSON parse failure in Logstash, will the data not get published to Elasticsearch?

How do I check what data is getting into Elasticsearch from the CLI?
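For reference, one way to check from the command line is to query the Elasticsearch REST API directly (the logstash-* index pattern below is a guess; substitute whatever index your output actually writes to):

curl 'http://localhost:9200/_cat/indices?v'
curl 'http://localhost:9200/logstash-*/_search?pretty&size=1'

The first command lists the indices and their document counts; the second returns a sample document.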

My output plugin configuration is below:

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

The error I see after hitting Ctrl+C is below:

\"throttled-packets-received\":0,\"parity-packets-received\":0,\"unknown-protocol-packets-received\":0,\"input-errors\":0,\"crc-errors\":0,\"input-overruns\":0,\"framing-errors-received\":0,\"input-ignored-packets\":0,\"input-aborts\":0,\"output-errors\":0,\"output-underruns\":0,\"output-buffer-failures\":0,\"output-buffers-swapped-out\":0,\"applique\":0,\"resets\":0,\"carrier-transitions\":0,\"availability-flag\":0,\"last-data-time\":1559568412,\"seconds-since-last-clear-counters\":0,\"last-discontinuity-time\":1554322252,\"seconds-since-packet-received\":4294967295,\"seconds-since-packet-sent\":4294967295}}],\"collection_end_time\":\"1559568413117\"}",
"@timestamp" => 2019-06-03T13:26:55.294Z,
"tags" => [
[0] "_jsonparsefailure"
],
"host" => “xxxxxxxxx.”,
"@version" => "1"
}
[WARN ] 2019-06-03 13:26:57.096 [Ruby-0-Thread-18: :1] runner - Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause
data loss.
[WARN ] 2019-06-03 13:26:57.465 [Ruby-0-Thread-20: :1] ShutdownWatcherExt - {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>24, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>25, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>26, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>27, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>28, "name"=>"[main]>worker4", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>29, "name"=>"[main]>worker5", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>30, "name"=>"[main]>worker6", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}, {"thread_id"=>31, "name"=>"[main]>worker7", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}]}}
[ERROR] 2019-06-03 13:26:57.471 [Ruby-0-Thread-20: :1] ShutdownWatcherExt - The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[INFO ] 2019-06-03 13:26:57.646 [Converge PipelineAction::Stop] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2019-06-03 13:26:57.653 [LogStash::Runner] runner - Logstash shut down.

So is there any setting I can use in the config file to make Logstash send output to Elasticsearch as and when new events arrive on the pipeline?

That is the normal mode of operation. A tcp input reads lines of text. What are you using to send lines of text to port 57500? Also, your elasticsearch index name references fields in [@metadata] -- what makes you think those fields will exist?
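Those [@metadata][beat] and [@metadata][version] fields are normally populated by a beats input. Events arriving over a plain tcp input will not have them, so that index sprintf will not resolve. A safer output would use a fixed name, for example:

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        # "telemetry" is just a placeholder; pick a name that suits the data
        index => "telemetry-%{+YYYY.MM.dd}"
    }
}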

Not sure I understand your question. The problem I have is that TCP data does not seem to go through the stack until Ctrl+C is issued. I have syslog data (UDP) going through the pipeline fine as soon as it is received. How do I get around this problem?

That could be because there are no lines in the input data. What are you using to send data to the tcp input?
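One quick way to test that is to send a newline-terminated message by hand (a sketch, assuming netcat is available; the payload is arbitrary):

echo '{"hello":"world"}' | nc localhost 57500

echo appends a newline, so if an event appears on stdout for this but not for the real sender, the sender is probably not terminating its messages with newlines.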

The tcp input is a stream of JSON data. I am removing some spurious characters from it before parsing it as JSON in the pipeline. How do I find out what the pipeline is doing with the data it receives, and why it is not forwarding it to the rubydebug output or the elasticsearch output?

Hi,

I tried some things today and observed this: if I use the multiline codec in the input, I see output on rubydebug and in Kibana. But if I use json or json_lines or no codec at all on the tcp input, the pipeline just collects data but produces no output :frowning: until I hit Ctrl+C.
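That is consistent with the sender never terminating its messages with newlines: the line-oriented codecs wait for a line ending that never arrives, and the buffered data only gets flushed at shutdown. A multiline codec with auto_flush_interval can cut events on a pattern plus a timeout instead. A sketch, assuming each record starts with the % prefix seen earlier (the pattern and the 5-second flush are assumptions, not tested against this device):

input {
    tcp {
        port => 57500
        codec => multiline {
            # any line starting with % begins a new event;
            # everything else is appended to the previous event
            pattern => "^%"
            negate => true
            what => "previous"
            # flush a buffered event after 5 seconds of silence
            auto_flush_interval => 5
        }
    }
}
filter {
    mutate {
        # strip the leading % and any embedded newlines, then parse
        gsub => [ "message", "^%", "", "message", "\n", "" ]
    }
    json {
        source => "message"
    }
}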

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.