Translate Plugin in Logstash not working

I am using Logstash v5.2 and ELK for device syslog analytics. I have a requirement in which, if the source of a device (IP address) sending syslog matches one column of a CSV file, I need to add a field containing the department of the device. Can this be done using the translate plugin in Logstash?

Details are below:
CSV file content:

```
host Dept
1.1.1.1 Finance
2.2.2.2 HR
3.3.3.3 Sales
```

Translate plugin usage inside the grok filter (not working):

```
if [host] =~ /^\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}$/ {
  translate {
    dictionary_path => "/etc/logstash/devices.csv"
    destination => "Dept"
    field => "host"
    add_field => { %{host_group} => %{"Dept"} }
  }
}
```

Can anyone advise on this, please?

Please show an example of a document that didn't get the expected fields. Copy/paste the raw event from Kibana's JSON tab or use a stdout { codec => rubydebug } output. Also, what does devices.csv look like?
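For example, a temporary output section along these lines would dump every event in a readable form (a sketch; adjust to taste):

```
output {
  # Print each event to the Logstash console in a human-readable form.
  stdout { codec => rubydebug }
}
```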

Apologies for the delay in replying.

Below is the grok config I use in my UAT environment to test translate:


```
###SRX
input {
  udp {
    port => "6514"
    type => "SRX"
    host => "X.X.X.X"
  }

  tcp {
    port => "6514"
    type => "SRX"
    host => "X.X.X.X"
  }
}

filter {

  if [type] == "SRX" {

    syslog_pri {}

    if [host] =~ /^\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}$/ {
      translate {
        dictionary_path => "/etc/logstash/devices.csv"
        destination => "tower"
        field => "host"
        add_field => { "%{host_group}" => "%{tower}" }
      }
    }

    if [type] == "SRX" {
      mutate {
        add_tag => [ "SRX" ]
      }
    }
  }
} # filter

output {

  # Something went wrong with the grok parsing; don't discard the messages though.
  if "_grokparsefailure" in [tags] {
    file {
      codec => rubydebug
      path => "/var/tmp/fail-%{type}-%{+YYYY.MM.dd}.log"
    }
  }

  # The message was parsed correctly, and should be sent to elasticsearch.
  file {
    codec => rubydebug
    path => "/var/tmp/%{type}-%{+YYYY.MM.dd}.log"
  }

  elasticsearch {
    ssl => true
    ssl_certificate_verification => false
    hosts => ["localhost:9200"]
    user => XXXXX
    password => XXXXX
  }
}
```


The output is as below:


```
{
              "@timestamp" => 2017-10-19T02:23:51.676Z,
    "syslog_severity_code" => 5,
         "syslog_facility" => "user-level",
                "@version" => "1",
                    "host" => "10.91.41.68",
    "syslog_facility_code" => 1,
                 "message" => "<134>1 2017-10-19T10:20:33.805+08:00 abcuatfw01 RT_FLOW - RT_FLOW_SESSION_CREATE [junos@2636.1.1.1.2.40 source-address="X.X.X.X" source-port="123" destination-address="X.X.X.X" destination-port="123" service-name="junos-ntp" nat-source-address="10.115.10.221" nat-source-port="123" nat-destination-address="X.X.X.X" nat-destination-port="123" src-nat-rule-type="N/A" src-nat-rule-name="N/A" dst-nat-rule-type="N/A" dst-nat-rule-name="N/A" protocol-id="17" policy-name="AWS-to-INFOBLOX" source-zone-name="untrust" destination-zone-name="trust" session-id-32="240875" username="N/A" roles="N/A" packet-incoming-interface="ge-2/0/0.0" application="UNKNOWN" nested-application="UNKNOWN" encrypted="UNKNOWN"] session created 10.115.10.221/123->X.X.X.X/123 junos-ntpX.X.X.X123->X.X.X.X/123 N/A N/A N/A N/A 17 AWS-to-INFOBLOX untrust trust 240875 N/A(N/A) ge-2/0/0.0 UNKNOWN UNKNOWN UNKNOWN",
                    "type" => "SRX",
         "syslog_severity" => "notice",
                    "tags" => [
        [0] "SRX",
        [1] "_grokparsefailure"
    ]
}
```


There's no grok filter in the configuration you posted.

This is a UAT system, so I didn't put in a grok filter.

But the host field is present in the output; it is not being translated by the translate plugin, and the new field is not being added:

`"host" => "10.91.41.68",`

Please review your settings.

First,

According to the documentation, the dictionary file should be in YAML, JSON, or CSV format.

Your dictionary file should be something like,

1.1.1.1,Finance,2.2.2.2,HR,3.3.3.3,Sales

Secondly,

Logstash uses the field name host to store the hostname of the server it is installed on, so it would be better to use some other field name to compare against the dictionary file.
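To illustrate, a sketch of that approach (untested; the field name `src_ip` here is just an example):

```
filter {
  # Copy the original value into a separate field so it doesn't
  # collide with Logstash's own use of "host".
  mutate {
    add_field => { "src_ip" => "%{host}" }
  }
  translate {
    dictionary_path => "/etc/logstash/devices.csv"
    field => "src_ip"
    destination => "Dept"
  }
}
```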

This is a UAT system, so I didn't put in a grok filter.

So why are you getting a _grokparsefailure tag?

Your dictionary file should be something like,

1.1.1.1,Finance,2.2.2.2,HR,3.3.3.3,Sales

Make that:

1.1.1.1,Finance
2.2.2.2,HR
3.3.3.3,Sales
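Putting the corrections together, the relevant filter section could then look something like this (an untested sketch; `Dept` is the destination field name from the original question):

```
if [host] =~ /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/ {
  translate {
    # Two-column CSV dictionary: one "key,value" pair per line.
    dictionary_path => "/etc/logstash/devices.csv"
    field => "host"
    destination => "Dept"
  }
}
```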

@magnusbaeck
Thanks for the comment.

Thanks @YuWatanabe & @magnusbaeck. Let me test and get back to you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.