111266
(Aleksandr Krupin)
August 3, 2020, 12:23pm
1
Versions:
"logstash.version"=>"7.8.0"
kafka_2.13-2.5.0
Config:
input {
  kafka {
    bootstrap_servers => "ip:port"
    auto_offset_reset => "earliest"
    enable_auto_commit => true
    topics => ["logs"]
    codec => "json"
    consumer_threads => 5
  }
}
filter {
  ruby { code => '
    kubernetes = event.get("kubernetes")
    # event.cancel only marks the event as cancelled; it does not stop
    # this block, so the field assignments go into the else branch.
    # The nil check guards against events without a kubernetes field.
    if kubernetes.nil? || kubernetes["namespace_name"] != "ingress"
      event.cancel
    else
      event.set("body_bytes_sent", event.get("bytes_sent"))
      event.set("host", kubernetes["host"])
      event.set("short_message", event.get("remote_addr"))
    end
  '}
  mutate {
    remove_field => ["kubernetes", "message"]
  }
}
output {
  stdout { codec => rubydebug }
}
My pipeline is fluentd in Kubernetes -> Kafka -> Logstash -> Elasticsearch (planned).
But I cannot get Logstash configured; I constantly get this WARN:
Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
I tried these filters:
filter {
  date {
    match => [ "_@timestamp", "UNIX" ]
    remove_field => "_@timestamp"
    remove_tag => "_timestampparsefailure"
  }
}
or
filter {
  date {
    match => [ "@timestamp", "UNIX" ] # or UNIX_MS
  }
}
Jenni
August 3, 2020, 12:44pm
2
So you've got an input with an @timestamp field that could not be converted into a Logstash Timestamp directly and was therefore renamed to _@timestamp and tagged accordingly. Do you see any output nevertheless? Then we could check the content of _@timestamp and find out how to parse it.
(Anyway, your first filter is definitely closer to the solution than the second, because parsing @timestamp to get @timestamp won't lead us anywhere :))
111266
(Aleksandr Krupin)
August 3, 2020, 12:50pm
3
filter {
  date {
    match => [ "_@timestamp", "UNIX_MS" ]
    remove_tag => "_timestampparsefailure"
    target => "new_date"
  }
}
I get:
{
  "host" => "k8s",
  "body_bytes_sent" => nil,
  "tags" => [],
  "stream" => "stdout",
  "short_message" => nil,
  "_@timestamp" => 1596203686.618137,
  "log" => "some_Text",
  "@timestamp" => 2020-08-03T12:48:04.319Z,
  "time" => "2020-07-31T13:54:46.618137462Z",
  "@version" => "1",
  "new_date" => 1970-01-19T11:23:23.686Z
}
and many, many warnings:
[WARN ] 2020-08-03 12:48:04.603 [Ruby-0-Thread-39: :1] Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
Kafka message:
{"@timestamp":1596459053.620321,"log":"IP - - [03/Aug/2020:12:50:53 +0000] \"GET /rest/quickreload/latest/25002339?since=1596456631322&_=1596459053529 HTTP/1.1\" 204 0 \"some_text" 4012 0.003 [ingress-wiki-service-80] [] IP:80 0 0.000 204 905a7deac0e375e1058059b1124f12f0\n","stream":"stdout","time":"2020-08-03T12:50:53.620321314Z","kubernetes":{"pod_name":"default-ingress-nginx-controller-6c97f4fd5-59t2q","namespace_name":"ingress","pod_id":"7b6a2a08-0218-4430-9463-13bbb547cf26","labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"default","app.kubernetes.io/name":"ingress-nginx","pod-template-hash":"6c97f4fd5"},"annotations":{"cni.projectcalico.org/podIP":"192.168.50.136/32"},"host":"k8s","container_name":"controller","docker_id":"5296f455b37576d074bb8f26d33ceb39962d9bedb4d67f32e288ff65968849ae","container_hash":"quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287","container_image":"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"}}
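An aside on the output above: a `new_date` landing in 1970 is the classic symptom of feeding a seconds-based epoch to a milliseconds parser. A quick Ruby check with the `_@timestamp` value from the rubydebug output illustrates why UNIX is right here and UNIX_MS is not:

```ruby
require 'time'

epoch = 1596203686.618137  # the _@timestamp value: seconds since the epoch

as_seconds      = Time.at(epoch).utc           # what the UNIX pattern does
as_milliseconds = Time.at(epoch / 1000.0).utc  # what UNIX_MS effectively does

puts as_seconds.iso8601       # 2020-07-31T13:54:46Z  (matches the "time" field)
puts as_milliseconds.iso8601  # 1970-01-19T11:23:23Z  (matches "new_date")
```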
Jenni
August 3, 2020, 12:56pm
4
Hm. Strange. The first date filter in your original post (parsing _@timestamp as UNIX) should work just fine. What output did you get with that?
111266
(Aleksandr Krupin)
August 3, 2020, 1:01pm
5
Output (using rubydebug):
{
  "host" => "k8s",
  "body_bytes_sent" => nil,
  "tags" => [],
  "stream" => "stdout",
  "short_message" => nil,
  "_@timestamp" => 1596203686.618137,
  "log" => "some_Text",
  "@timestamp" => 2020-08-03T12:48:04.319Z,
  "time" => "2020-07-31T13:54:46.618137462Z",
  "@version" => "1",
  "new_date" => 1970-01-19T11:23:23.686
}
Jenni
August 3, 2020, 1:12pm
6
That seems to be exactly the same output as the text you had posted in your previous reply. That doesn't make sense. Is that really the output that I had asked for? The one for this config?
111266
(Aleksandr Krupin)
August 3, 2020, 1:33pm
7
{
  "@timestamp" => 2020-07-31T13:46:53.169Z,
  "time" => "2020-07-31T13:46:53.169695616Z",
  "tags" => [],
  "kubernetes" => {
    "namespace_name" => "kube-system",
    "labels" => {
      "component" => "kube-controller-manager",
      "tier" => "control-plane"
    },
    "host" => "k8s",
    "container_name" => "kube-controller-manager",
    "docker_id" => "0317afb6ae78053891ecc193039cdf8f3959f5b0d9d3685e019054d9973ae322",
    "container_hash" => "k8s.gcr.io/kube-controller-manager@sha256:29f57d6d1e821e417a4dcef5a3669ab545530469e332420131387c7df3bec62f",
    "container_image" => "k8s.gcr.io/kube-controller-manager:v1.17.4",
    "pod_id" => "11c243ce-e2cf-46aa-9420-09d297b70799",
    "pod_name" => "kube-controller-manager-k8s-master03d",
    "annotations" => {
      "kubernetes.io/config.hash" => "b4f2748568d79214c7f00ddecbd6f7b0",
      "kubernetes.io/config.source" => "file",
      "kubernetes.io/config.mirror" => "b4f2748568d79214c7f00ddecbd6f7b0",
      "kubernetes.io/config.seen" => "2020-04-15T21:10:05.134099758Z"
    }
  },
  "stream" => "stderr",
  "@version" => "1",
  "log" => "I0731 13:46:53.168125 1 event.go:281] Event(v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"minio\", Name:\"export-test-minio-0\", UID:\"92bd597b-6de2-4b15-b9f2-a321ee495ead\", APIVersion:\"v1\", ResourceVersion:\"22905138\", FieldPath:\"\"}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set\n"
}
111266
(Aleksandr Krupin)
August 3, 2020, 1:59pm
9
No =( Still many warnings:
[WARN ] 2020-08-03 13:34:51.665 [Ruby-0-Thread-40: :1] Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
[WARN ] 2020-08-03 13:34:51.665 [Ruby-0-Thread-43: :1] Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
[WARN ] 2020-08-03 13:34:51.667 [Ruby-0-Thread-39: :1] Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
[WARN ] 2020-08-03 13:34:51.668 [Ruby-0-Thread-41: :1] Event - Unrecognized @timestamp value type=class org.jruby.RubyFloat
Jenni
August 3, 2020, 2:19pm
10
Warnings, not errors. As I understand it, the warnings occur at the input. Your filter then repairs the data, so everything is actually fine in the end, but you can't prevent the warnings this way. They don't hurt anything, so this is acceptable.
If you want to get rid of those messages, you'd either have to make sure that there is no @timestamp field in the input data in the first place, or get rid of it before the JSON is parsed. To achieve the latter, you could import the data as codec => "plain", rename the field (mutate { gsub => [ "message", "@timestamp", "_@timestamp" ] }) and then use a JSON filter, so the parsing error cannot occur. (Of course you'd still have to use the date filter!) But that might be worse performance-wise.
I don't know any other solution.
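The rename-before-parse idea can be sketched in plain Ruby (the raw string here is a shortened stand-in for the real Kafka message, not the actual data). One caveat worth noting: a gsub on the bare string @timestamp would also rewrite that substring if it ever appeared inside a log value, so matching the quoted key is slightly safer:

```ruby
require 'json'

# Shortened stand-in for a raw Kafka message whose @timestamp is a float.
raw = '{"@timestamp":1596459053.620321,"log":"GET / 204","stream":"stdout"}'

# Rename the key *before* JSON parsing so Logstash never sees a float
# @timestamp. Matching the quoted key avoids touching occurrences in values.
renamed = raw.gsub('"@timestamp"', '"_@timestamp"')
doc = JSON.parse(renamed)

puts doc["_@timestamp"]     # 1596459053.620321
puts doc.key?("@timestamp") # false
```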
111266
(Aleksandr Krupin)
August 3, 2020, 4:08pm
11
Many thanks for the help! Working config:
input {
  kafka {
    bootstrap_servers => "kafka0:9092"
    auto_offset_reset => "earliest"
    enable_auto_commit => true
    topics => ["k8s-logs"]
    consumer_threads => 5
    codec => "plain"
  }
}
filter {
  mutate { gsub => [ "message", "@timestamp", "_@timestamp" ] }
  json { source => "message" }
  date {
    match => [ "_@timestamp", "UNIX" ]
    remove_field => "_@timestamp"
    remove_tag => "_timestampparsefailure"
  }
  ruby { code => '
    kubernetes = event.get("kubernetes")
    # nil guard for events without a kubernetes field; event.cancel does
    # not stop the block, so the assignments sit in the else branch.
    if kubernetes.nil? || kubernetes["namespace_name"] != "ingress"
      event.cancel
    else
      event.set("body_bytes_sent", event.get("bytes_sent"))
      event.set("host", kubernetes["host"])
      event.set("short_message", event.get("remote_addr"))
    end
  '}
  mutate {
    remove_field => ["kubernetes", "message", "time", "@version", "stream"]
  }
}
output {
  stdout { codec => rubydebug }
}
system
(system)
Closed
August 31, 2020, 4:08pm
12
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.