Logstash.conf filter doesn't work

Hello,
I want to send events from my .log file with Filebeat. I have created the following filter, but it doesn't work.

input {
    beats {
        port => "5044"
    }
}

filter {
    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:m01datetime})?,(%{IPORHOST:m02clientip})?,(%{IPORHOST:m03clienthostname})?,(%{GREEDYDATA:m27customdata})?" }
    }
}

output {
    elasticsearch {
        hosts => ["192.168.1.70:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}%{+YYYY.MM.dd}"
    }
}

Because events aren't sent to Logstash, I can't create an index pattern. Could you help me find what is wrong, and where? Filebeat is 7.9.1.
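
One way to narrow this down (a sketch, not part of the original thread): run Logstash with a minimal pipeline containing only the beats input and a stdout output, so any event arriving from Filebeat is printed immediately. If nothing is printed, the problem is on the Filebeat side rather than in the filter.

input {
    beats {
        port => "5044"
    }
}
output {
    # No filter at all: anything received from Filebeat is printed
    # as-is, which confirms the Filebeat -> Logstash connection works.
    stdout { codec => rubydebug }
}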

What do the logstash logs show?

logstash.log shows only the following:

[2020-09-27T15:27:35,248][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2020-09-27T15:27:44,546][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2020-09-27T15:27:44,609][INFO ][logstash.runner          ] Logstash shut down.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-4050/jruby18043681046467747404jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /opt/bitnami/logstash/logs which is now configured via log4j2.properties
[2020-09-27T15:28:09,954][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.9.1", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-09-27T15:28:10,732][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-09-27T15:28:12,941][INFO ][org.reflections.Reflections] Reflections took 41 ms to scan 1 urls, producing 22 keys and 45 values 
[2020-09-27T15:28:15,306][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2020-09-27T15:28:15,583][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2020-09-27T15:28:15,687][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2020-09-27T15:28:15,692][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2020-09-27T15:28:15,744][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2020-09-27T15:28:15,841][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2020-09-27T15:28:15,907][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/opt/bitnami/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x79e25268 run>"}
[2020-09-27T15:28:15,956][INFO ][logstash.outputs.elasticsearch][main] Index Lifecycle Management is set to 'auto', but will be disabled - Index Lifecycle management is not installed on your Elasticsearch cluster
[2020-09-27T15:28:15,969][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2020-09-27T15:28:17,313][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.4}
[2020-09-27T15:28:17,479][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-09-27T15:28:17,696][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-09-27T15:28:17,715][INFO ][logstash.inputs.http     ][main][5083493e3dbed449e5308df0f0abc1b7d0c4f9f09273ba9ea6ea05f6fd287662] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
[2020-09-27T15:28:17,731][INFO ][logstash.inputs.tcp      ][main][6e694d0b0079e878522bbc3601be5147a77aca4082a38c2de6eb9b79ec763c93] Starting tcp input listener {:address=>"0.0.0.0:5010", :ssl_enable=>"false"}
[2020-09-27T15:28:17,776][INFO ][org.logstash.beats.Server][main][d152f90f409dcc7176e034d7be5615c3ad67348ae3798cececd60452f104d563] Starting server on port: 5044
[2020-09-27T15:28:17,802][INFO ][logstash.inputs.gelf     ][main][9ec137ba7a3752fb62301ea1210ee81e8bc9b6468f016362d5098eec151fd30b] Starting gelf listener (udp) ... {:address=>"0.0.0.0:12201"}
[2020-09-27T15:28:17,849][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-09-27T15:28:17,891][INFO ][logstash.inputs.udp      ][main][f194204f110697dd382865a273d24418a54df4f87ed4bfca0604f0c339e5cd7c] Starting UDP listener {:address=>"0.0.0.0:5000"}
[2020-09-27T15:28:18,065][INFO ][logstash.inputs.udp      ][main][f194204f110697dd382865a273d24418a54df4f87ed4bfca0604f0c339e5cd7c] UDP listener started {:address=>"0.0.0.0:5000", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2020-09-27T15:28:18,282][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

OK, so logstash does not appear to be having issues. What do the filebeat logs show?

Although Logstash looks good, I can't select the fields m01datetime, m02clientip, m03clienthostname, or m27customdata in Kibana. So where is the problem?

Are the events reaching elasticsearch at all? If so, do they have a _grokparsefailure tag? If so, what does a sample [message] field look like?

I also discovered that when I send events with Filebeat directly to Elasticsearch, they appear as shown below; there are 10 hits.

When I send events to Logstash there is only 1 hit; it looks like the last event replaces the previous one.

An event sent to Elasticsearch has a single message field. An event sent to Logstash should be split into four parts (m01datetime, m02clientip, m03clienthostname, m27customdata), but it isn't. Where is the mistake?

Where would I see this _grokparsefailure tag?

Without seeing the value of the [message] field it is impossible to say.

If the grok filter is failing to parse the [message] then I would expect it to be adding "_grokparsefailure" to the [tags] field.
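
To illustrate (a sketch, not from the thread): since grok adds the tag automatically, a conditional output placed alongside the existing elasticsearch output can surface just the failed events:

output {
    # Added next to the normal elasticsearch output: prints only the
    # events grok failed to parse, so failures are easy to spot.
    if "_grokparsefailure" in [tags] {
        stdout { codec => rubydebug }
    }
}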

message looks like:
2020-09-23T11:41:09.613Z,192.168.100.100,xyz.bigfirm.local, bla bla bla

I wanted to see something like this:
m01datetime 2020-09-23T11:41:09.613Z
m02clientip 192.168.100.100
m03clienthostname xyz.bigfirm.local
m27customdata bla bla bla

If I run logstash with this configuration

input { generator { count => 1 lines => [ '2020-09-23T11:41:09.613Z,192.168.100.100,xyz.bigfirm.local, bla bla bla' ] } }
filter {
    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:m01datetime})?,(%{IPORHOST:m02clientip})?,(%{IPORHOST:m03clienthostname})?,(%{GREEDYDATA:m27customdata})?"}
    }
}
output  { stdout { codec => rubydebug { metadata => false } } }

I get

"m03clienthostname" => "xyz.bigfirm.local",
    "m27customdata" => " bla bla bla",
      "m01datetime" => "2020-09-23T11:41:09.613Z",
      "m02clientip" => "192.168.100.100",

so I am not sure what your problem could be.
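
As an aside (an editorial sketch, not suggested in the thread): since the message is plain comma-separated text, the csv filter would split it into the same four fields without a grok pattern:

filter {
    csv {
        # The source defaults to [message]; the four columns reuse
        # the field names from the grok pattern above.
        separator => ","
        columns   => ["m01datetime", "m02clientip", "m03clienthostname", "m27customdata"]
    }
}

One caveat: unlike GREEDYDATA, csv would treat any comma inside the custom data as a column separator.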

There are some default fields; the list is long, and this is a piece of it:

[screenshot "kibana-03": Kibana index pattern field list]

There is a field called message, but why can't I see my fields in that list?

Hello,
I'm trying to understand why the fields I created in the grok filter don't appear in Kibana. I've tried searching the Internet but ...
Maybe you have an idea how to fix it?

When I use stdout { codec => rubydebug } in logstash.conf, how can I see or check the result?

Logstash will write the event to stdout in the format I showed.
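
Concretely (a sketch based on the configuration posted above): the stdout output can run alongside the elasticsearch output, so every event is printed as it passes through — to the console when Logstash runs in the foreground, or into the Logstash log when it runs as a service:

output {
    elasticsearch {
        hosts => ["192.168.1.70:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}%{+YYYY.MM.dd}"
    }
    # Temporary debugging output: prints each event, including the
    # [tags] field, so a _grokparsefailure is easy to spot.
    stdout { codec => rubydebug }
}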

The log file looks like this:

#Fields: date-time,client-ip,client-hostname,custom-data
2020-09-30T04:10:29.755Z,192.168.100.1,xyz.bigfirm.local,bla bla bla

The grok filter looks like this:

filter {
    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:m01datetime})?,(%{IPORHOST:m02clientip})?,(%{IPORHOST:m03clienthostname})?,(%{GREEDYDATA:m27customdata})?" }
    }
}

I still don't know why these fields don't appear in Kibana.
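
One detail worth noting (an editorial observation, not raised in the thread): the "#Fields: ..." header line does not have the timestamp,ip,hostname,data shape, so it cannot parse into meaningful values — depending on how grok matches it, it may be tagged _grokparsefailure or produce garbage fields. A sketch that skips comment lines before parsing:

filter {
    # Skip the "#Fields: ..." header line before parsing.
    if [message] =~ /^#/ {
        drop { }
    }
    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:m01datetime})?,(%{IPORHOST:m02clientip})?,(%{IPORHOST:m03clienthostname})?,(%{GREEDYDATA:m27customdata})?" }
    }
}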

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.