Need help on grok filter

Hi guys,

I would like to ask what the correct grok filter syntax is for the output below.

sample.out
hostname:server1.sgdc.company.net|ipaddress:1XX.XX.XXX.XX|status:ACTIVE
hostname:server2.sgdc.company.net|ipaddress:1YY.YY.YY.YY|status:ACTIVE

I tried this logstash.conf but it is failing.

input {
  file {
    type => "monitor1"
    path => "/root/Documents/scripts/sample/sample.out"
    start_position => "beginning"
    sincedb_path => "/opt/logstash/.sincedb_sample"
  }
}
filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{FQDN:fqdn_unparsed}|ipaddress:%{IPV4}|status:%{GREEDYDATA:status}" }
    }
  }
}
output {
  if [type] == "monitor1" {
    elasticsearch {
      hosts => ["1XX.XX.XXX.XX:9200"]
      index => "monitor1"
    }
  }
}

Any help, please.

Hey @retxedue,

Regarding the sample.out logs you mention above: are all the log patterns the same, or different?

If they are all the same, then there is no need to write a grok pattern; if the log patterns are different, then grok works well.

My suggestion is to use the kv filter plugin to separate the keys and values, like the following example:

filter {
  kv {
    field_split => "|"
    value_split => ":"
  }
}

It will parse all of your logs into key/value pairs.
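For your first sample line, that would give you fields roughly like this (a sketch of the expected event, not actual output):

     "hostname" => "server1.sgdc.company.net"
    "ipaddress" => "1XX.XX.XXX.XX"
       "status" => "ACTIVE"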

If you want more detail on this, refer to this link:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
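
Putting it together for your exact file, the whole pipeline could look like this (an untested sketch; the path, host, and index are copied from your config above):

input {
  file {
    type => "monitor1"
    path => "/root/Documents/scripts/sample/sample.out"
    start_position => "beginning"
    sincedb_path => "/opt/logstash/.sincedb_sample"
  }
}
filter {
  if [type] == "monitor1" {
    # kv splits each pair at "|" and each key from its value at ":"
    kv {
      field_split => "|"
      value_split => ":"
    }
  }
}
output {
  if [type] == "monitor1" {
    elasticsearch {
      hosts => ["1XX.XX.XXX.XX:9200"]
      index => "monitor1"
    }
  }
}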

Thanks & Regards,
Krunal.

All are the same. See the sample below:

hostname:hou150lnx.hou150.domain.net|ipaddress:1XX.XX.XX.216|status:active
hostname:hou150lnxq1ep1dn1.hou150.domain.net|ipaddress:1XX.XX.XX.215|status:active
hostname:test150lnxq1ep1-secondary.hou150.domain.net|ipaddress:1XX.XX.XX.242|status:standby
hostname:testclnxq1ep2-secondary.domain2.domain.net|ipaddress:1YY.YYY.Y.38|status:standby
hostname:testpwglnxq1fp1.lonpwg.domain.net|ipaddress:1XX.XX.YYY.76|status:active
hostname:testpwglnxq1ep1-primary.domain3.domain.net|ipaddress:1XX.XX.YYY.79|status:active
hostname:testclnxq1dn2.domain2.domain2.net|ipaddress:1YY.YYY.Y.39|status:active
hostname:testlnxq1dn1.domain2.domain.net|ipaddress:1YY.YYY.Y.36|status:active
hostname:testctylnxq1efp1.tcoaty.domain.net|ipaddress:1XX.XX.XXX.30|status:active
hostname:domain3lnxq1dn1.domain3.domain.net|ipaddress:1XX.XX.XXX.81|status:active

I tried using the Grok Constructor; the query below matched my logs.
["hostname:[.(?[A-Za-z0-9_-]+.[A-Za-z0-9_-]+.net)$]|ipaddress:%{IP:ipaddress}|status:%{GREEDYDATA:status}"]

I tried adding it to my logstash.conf, but it is failing. Can you please help with how to add it in the filter?

This one is failing. What have I missed?
filter {
  if [type] == "test" {
    grok {
      match => { "message" => "["hostname:[.(?<fqdn>[A-Za-z0-9_-]+.[A-Za-z0-9_-]+.net)$]|ipaddress:%{IP:ipaddress}|status:%{GREEDYDATA:status}"]" }
    }
  }
}

Hi,
you can use the built-in HOSTNAME pattern to get the fqdn.
Also, the "|" character needs to be escaped.

filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{HOSTNAME:fqdn_unparsed}\|ipaddress:%{IPV4:ip_addr}\|status:%{GREEDYDATA:status}" }
    }
  }
}

[2018-06-07T09:11:23,163][DEBUG][logstash.pipeline        ] output received {"event"=>{"status"=>"ACTIVE\r", "@version"=>"1", "message"=>"hostname:server2.sgdc.company.net|ipaddress:100.1.22.34|status:ACTIVE\r", "ip_addr"=>"100.1.22.34", "@timestamp"=>2018-06-07T06:11:22.864Z, "path"=>"C:\\ericsson\\development\\elk\\logstash\\sample.out", "host"=>"TR00200384", "fqdn_unparsed"=>"server2.sgdc.company.net", "type"=>"monitor1"}}
{
           "status" => "ACTIVE\r",
         "@version" => "1",
          "message" => "hostname:server2.sgdc.company.net|ipaddress:100.1.22.34|status:ACTIVE\r",
          "ip_addr" => "100.1.22.34",
       "@timestamp" => 2018-06-07T06:11:22.864Z,
             "path" => "C:\\development\\elk\\logstash\\sample.out",
             "host" => "TR00200384",
    "fqdn_unparsed" => "server2.sgdc.company.net",
             "type" => "monitor1"
}
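
If you want to try the pattern without the file input, a quick way (untested sketch) is to read from stdin and print every parsed event to the console:

input {
  # paste your sample lines into the console
  stdin { type => "monitor1" }
}
filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{HOSTNAME:fqdn_unparsed}\|ipaddress:%{IPV4:ip_addr}\|status:%{GREEDYDATA:status}" }
    }
  }
}
output {
  stdout { codec => rubydebug }
}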

I tried the above pattern, but it is still failing.

input {
  file {
    type => "monitor1"
    path => "/root/Documents/scripts/monitor/monitor.out"
    start_position => "beginning"
  }
}
filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{HOSTNAME:fqdn_unparsed}|ipaddress:%{IPV4:ip_addr}|status:%{GREEDYDATA:status}" }
    }
  }
}
output {
  if [type] == "monitor1" {
    elasticsearch {
      hosts => ["1XX.XX.XXX.XX:9200"]
      index => "montor1"
    }
  }
}

Here is the error:

[2018-06-07T14:32:16,733][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-06-07T14:32:16,737][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-06-07T14:32:17,012][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-06-07T14:32:17,085][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-07T14:32:17,457][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-07T14:32:17,644][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://1XX.XXX.XX.XX:9200/]}}
[2018-06-07T14:32:17,645][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://1XX.XXX.XX.XX:9200/, :path=>"/"}
[2018-06-07T14:32:17,704][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://1XX.XXX.XX.XX:9200/"}
[2018-06-07T14:32:17,728][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-06-07T14:32:17,728][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-06-07T14:32:17,729][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-06-07T14:32:17,731][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-07T14:32:17,736][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["//1XX.XXX.XX.XX:9200"]}
[2018-06-07T14:32:17,888][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4bacaefe@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[2018-06-07T14:32:17,921][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}

The reason I am suggesting the kv filter plugin is that it makes parsing these logs easy, while grok patterns are complex to write. Try it once and see whether it works.

Thanks & Regards,
Krunal.

How do I write it in kv for my type of input file?

May I see your whole logstash.conf here?

I don't see any error lines in the log. All of them are INFO.

Escaping is still missing in your grok filter.

input {
  file {
    type => "monitor1"
    path => "C:\development\elk\logstash\sample.out"
    start_position => "beginning"
  }
}
filter {
  if [type] == "monitor1" {
    grok {
      match => { "message" => "hostname:%{HOSTNAME:fqdn_unparsed}\|ipaddress:%{IPV4:ip_addr}\|status:%{GREEDYDATA:status}" }
    }
  }
}
output {
  stdout { codec => rubydebug }
}
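
If events show up on stdout, the grok part is working and you can point the output back at elasticsearch, for example (host and index taken from your earlier config):

output {
  # keep stdout while debugging; remove it later
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["1XX.XX.XXX.XX:9200"]
    index => "monitor1"
  }
}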

I tried using the above config; I am still only seeing INFO lines in the logs. I also need an output that creates an index, so I can use it for a Grafana dashboard.

@arkady - can you give a sample config using kv?

Actually, your problem is not related to the filter plugin.
You should check the output configuration in your pipeline conf.
I wish I could help with Grafana, but I don't have any knowledge of it.

This is what is in my pipelines.yml:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

I meant the output configuration, below.
Do you see any documents in your elasticsearch index?
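You can check quickly with the cat indices API, e.g. curl 'http://1XX.XX.XXX.XX:9200/_cat/indices?v' (substitute your elasticsearch host).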

output {
  if [type] == "monitor1" {
    elasticsearch {
      hosts => ["1XX.XX.XXX.XX:9200"]
      index => "monitor1"
    }
  }
}

Here is the config. It is not creating any index.

input {
  file {
    type => "monitor"
    path => "/root/Documents/scripts/qradar/monitor.out"
    start_position => "beginning"
  }
}
filter {
  if [type] == "monitor" {
    grok {
      match => { "message" => "hostname:%{GREEDYDATA:hostname}|ipaddress:%{IP:ipaddress}|status:%{WORD:status}" }
    }
  }
}
output {
  if [type] == "monitor" {
    elasticsearch {
      hosts => ["1XX.XX.XX.XXX:9200"]
      index => "monitor"
    }
    stdout { codec => rubydebug }
  }
}
