Hi team,
I'm using the translate filter to match a value against a YAML dictionary, but the translation doesn't happen.
Conf:
translate {
  field => "dcn_id"
  destination => "restricted_data"
  override => true
  dictionary_path => "C:\Ganesh\ELK\Latest\check.yaml"
}
YAML (I have tried various combinations):
0201912617621990C: Yes
"0201912617621990C": "Yes"
"0201912617621990C": Yes
Example: an event with
dcn_id: 0201912617621990C
and a dictionary entry
0201912617621990C: Yes
gets me
"restricted_data" => true,
Can you try using forward slashes instead of backslashes in dictionary_path?
Where do I have to add this line?
Inside the translate filter?
No, that's the output of a rubydebug codec after the translate filter has executed and done the lookup.
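(For reference, output in that shape comes from a rubydebug codec; the simplest place to put one is a stdout output, which prints every field of each event:)

output {
  stdout { codec => rubydebug }
}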
This is my full config file:
input {
  beats {
    port => 5047
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:resource}\] \[?(?<loglevel>[a-zA-Z ]+)\] \[DCN %{DATA:dcn_id}\] %{DATA:info} - ?(?<description>[a-zA-Z0-9\n -`!@#$%^&*':\".,(){}\[\]~]+)" }
  }
  grok {
    match => { "description" => "<cts:GroupNumber>%{DATA:grp_id}<" }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    target => ["timestamp"]
  }
  if ![restricted_data] {
    translate {
      field => "dcn_id"
      destination => "restricted_data"
      override => true
      dictionary_path => "C:/Ganesh/ELK/Latest/check.yaml"
    }
  }
  if [grp_id] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "restricted_data"
      query => "type:restricted AND grp_number:%{[grp_id]}"
      fields => { "restricted_status" => "restricted_data" }
    }
    mutate {
      add_field => { "test" => "%{dcn_id}: Yes" }
    }
  }
}
output {
  if "test" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "test_log"
    }
    if [grp_id] {
      file {
        codec => line { format => "%{test}" }
        path => "C:\Ganesh\ELK\Latest\check.yaml"
      }
    }
  }
}
Can you change the codec on your file output to be rubydebug and show us what an event looks like?
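(That would look something like the following; the path here is an illustrative scratch file, deliberately not the check.yaml dictionary, since rubydebug output written there would corrupt it:)

file {
  path => "C:/Ganesh/ELK/Latest/debug.txt"
  codec => rubydebug
}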
Please find the stdout result for your reference:
{
  "source" => "C:\\Ganesh\\ELK\\Latest\\hsbclogs\\VendoreAdj.log",
  "timestamp" => 2019-05-06T17:11:09.184Z,
  "input" => {
    "type" => "log"
  },
  "resource" => "xxxx: 4",
  "log" => {
    "file" => {
      "path" => "C:\\Ganesh\\ELK\\Latest\\hsbclogs\\VendoreAdj.log"
    }
  },
  "@version" => "1",
  "@timestamp" => 2019-05-13T14:38:17.256Z,
  "tags" => [
    [0] "test",
    [1] "beats_input_codec_plain_applied",
    [2] "_grokparsefailure"
  ],
  "info" => "MessageSenderTemplate",
  "loglevel" => "DEBUG",
  "beat" => {
    "name" => "xxx",
    "hostname" => "xxx",
    "version" => "6.7.0"
  },
  "message" => "2019-05-06 22:41:09.184 [ResourceAdapter : 4] [DEBUG] [DCN 0201912617621990C] MessageSenderTemplate - ResponseProducer Generated Message:",
  "offset" => 102299,
  "prospector" => {
    "type" => "log"
  },
  "host" => {
    "architecture" => "x86_64",
    "os" => {
      "family" => "windows",
      "platform" => "windows",
      "name" => "Windows 10 Enterprise",
      "version" => "10.0",
      "build" => "17763.437"
    },
    "id" => "0245ced2-6c59-41aa-9f75-7a2bd7aadfed",
    "name" => "xxx"
  },
  "dcn_id" => "0201912617621990C",
  "description" => "ResponseProducer Generated Message:"
}
When I run that message through that configuration I do get the restricted_data field added to the message. Are you sure you have a matching entry in check.yaml?
Yes, I have a matching entry in my YAML file.
My concept is: in my message I will get one grp id. Once I find the grp id, I write the dcn value into my YAML file like this:
0201912617621990C: Yes
After that, whenever this dcn id comes in my log, I want to write the Yes value into the restricted_data field using the translate filter, but it doesn't happen. Why?
Badger:
The problem is that you read the dictionary, then process events and update the dictionary. Due to batching and caching by both Ruby and the filesystem, there may be a considerable delay before the dictionary file is actually updated. I think you need an in-memory persistent cache, and that can be done using an aggregate filter.
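(An aside that may also be relevant: even after the file on disk is updated, the translate filter only re-reads dictionary_path periodically; its refresh_interval option controls that interval and defaults to 300 seconds. Lowering it narrows the window but does not close it. A minimal sketch, reusing the dictionary path from the config above; the 10-second value is just for illustration:)

translate {
  field => "dcn_id"
  destination => "restricted_data"
  override => true
  dictionary_path => "C:/Ganesh/ELK/Latest/check.yaml"
  refresh_interval => 10 # re-read the dictionary every 10 seconds instead of the default 300
}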
This might work:
aggregate {
  task_id => "%{dcn_id}"
  code => '
    if ! map["seen"]
      map["seen"] = true
    else
      event.set("restricted_data", true)
    end
  '
  aggregate_maps_path => "/home/user/foo.maps"
  timeout => 3600 # Expire entries after 1 hour
}
Note that you must have '--pipeline.workers 1' set for whichever pipeline runs this.
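(For example, assuming Logstash is launched directly against this config file, that would be something like the following, where pipeline.conf is a placeholder name:)

bin/logstash -f pipeline.conf --pipeline.workers 1

(Also note that aggregate_maps_path is where the filter persists its maps when Logstash stops and reloads them when it starts, so the "seen" state survives restarts; /home/user/foo.maps is a Linux-style placeholder and would need a Windows path in this setup.)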
Am I using it correctly in my conf?
input {
  beats {
    port => 5047
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:resource}\] \[?(?<loglevel>[a-zA-Z ]+)\] \[DCN %{DATA:dcn_id}\] %{DATA:info} - ?(?<description>[a-zA-Z0-9\n -`!@#$%^&*':\".,(){}\[\]~]+)" }
  }
  grok {
    match => { "description" => "<cts:GroupNumber>%{DATA:grp_id}<" }
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    target => ["timestamp"]
  }
  if ![restricted_data] {
    aggregate {
      task_id => "%{dcn_id}"
      code => '
        if ! map["seen"]
          map["seen"] = true
        else
          event.set("restricted_data", true)
        end
      '
      aggregate_maps_path => "/home/user/foo.maps"
      timeout => 3600 # Expire entries after 1 hour
    }
  }
  if [grp_id] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "restricted_data"
      query => "type:restricted AND grp_number:%{[grp_id]}"
      fields => { "restricted_status" => "restricted_data" }
    }
    mutate {
      add_field => { "test" => "%{dcn_id}: Success" }
    }
    mutate {
      gsub => ["test", "[\\]", ""]
    }
  }
}
output {
  if "test" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "test_log"
    }
    if [grp_id] {
      file {
        codec => line { format => "%{test}" }
        path => "C:\Ganesh\ELK\Latest\check.yaml"
      }
    }
  }
  stdout {
    codec => rubydebug
  }
}
I'm using the above configuration, and I'm getting an aggregate exception.