Hi all,
I am using a ruby filter in Logstash, and X-Pack security is enabled on my local machine. I can pass the username and password to the elasticsearch output (after the index name, elastic:elastic), but I cannot find a way to supply them to the HTTP request made inside the ruby filter, so the lookup in my ruby filter is not working. Can you please help me pass the username and password inside the filter?
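Here is a rough sketch of what I am guessing the lookup inside the ruby filter needs. The URL and the elastic:elastic credentials are just my local test values, and I am not sure whether basic_auth is the right way to hand the X-Pack username and password to Elasticsearch:

require "net/http"
require "uri"
require "json"

# sketch only: trying to send the X-Pack username/password with the lookup request
uri = URI.parse("http://localhost:9200/dellnew/doc/1001")
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
request.basic_auth("elastic", "elastic") # is this where the username/password should go?
response = http.request(request)
# puts response.code # debug: print the HTTP status to the console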
Please find my full Logstash configuration below:
input{
beats{
port => 5044
}
}
filter
{
grok {
match => { "message" => ["(?<te>(.|\r|\n){1,20})from %{USERNAME:From_Table}%{SPACE}\s(to)%{DATA:To_Table}\:(.|\r|\n){1,60}\***\s(since)\s%{TIMESTAMP_ISO8601:Activity_Since}\***\W+\Total\s%{WORD:insertfield}\s*%{NUMBER:insertcount}\W+\Total\s%{WORD:updatefield}\s*%{NUMBER:updatecount}\W+\Total\s%{WORD:deletefield}\s*%{NUMBER:deletecount}",
"(?<te>(.|\r|\n){1,20})from\s\$%{USERNAME:From_Table}%{SPACE}\s(to)%{DATA:To_Table}\:(.|\r|\n){1,60}\***\s(since)\s%{TIMESTAMP_ISO8601:Activity_Since}\***\W+\Total\s%{WORD:insertfield}\s*%{NUMBER:insertcount}\W+\Total\s%{WORD:updatefield}\s*%{NUMBER:updatecount}\W+\Total\s%{WORD:deletefield}\s*%{NUMBER:deletecount}",
"Sending STATS request to %{WORD:process_type}%{SPACE}%{WORD:process_name}(.|\r|\n){1,60}\s*(at) %{TIMESTAMP_ISO8601:start_of_statistics}",
"from %{USERNAME:From_Table}%{SPACE}\s(to)%{DATA:To_Table}\:(.|\r|\n){1,60}\***\s(since)\s%{TIMESTAMP_ISO8601:Activity_Since}\***\W+%{GREEDYDATA:Description}"
]
}
}
if [From_Table] {
ruby {
init => 'require "net/http"
require "uri"
require "json"'
code => 'adq = event.get("From_Table")
# puts adq # debug: print the grokked From_Table value to the console
uri = URI.parse("http://localhost:9200/dellnew/doc/1001")
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)
# puts response.code # debug: print the HTTP status to the console
if response.code == "200"
result = JSON.parse(response.body)
process_name = result["_source"]["process_name"]
process_type = result["_source"]["process_type"]
start_of_statistics = result["_source"]["start_of_statistics"]
event.set("process_name", process_name)
event.set("process_type", process_type)
event.set("start_of_statistics", start_of_statistics)
end
'
}
}
if [Description] == "No database operations have been performed." {
mutate {
add_field => { "insertcount" => "0.00" }
add_field => { "deletecount" => "0.00" }
add_field => { "updatecount" => "0.00" }
add_field => { "insertfield" => "inserts" }
add_field => { "updatefield" => "updates" }
add_field => { "deletefield" => "deletes" }
}
}
mutate {
add_field => { "%{insertfield}" => "%{insertcount}" }
add_field => { "%{updatefield}" => "%{updatecount}" }
add_field => { "%{deletefield}" => "%{deletecount}" }
}
if "_grokparsefailure" in [tags] {
drop { }
}
mutate {
remove_field => ["@version","@timestamp","path","beat","prospector","source","offset","tags","multiline","os","architecture","name","host","agent","log","ecs","insertcount","deletecount","updatecount","input","insertfield","updatefield","deletefield"]
}
}
output
{
stdout
{
codec => rubydebug
}
if [From_Table] {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "wipronew"
user => "elastic"
password => "elastic"
}
}
if ![From_Table] {
if [process_type] {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "dellnew"
user => "elastic"
password => "elastic"
document_type => "doc"
document_id => "1001"
}
}
}
}