Logstash 7.1.1 grokparsefailure despite match pattern

I have upgraded Logstash to 7.1.1 and I am getting a _grokparsefailure tag even though my pattern matches. See below.

{"log":{"file":{"path":"/var/log/cassandra/system.log"},"offset":18875190},"input":{"type":"log"},"tags":["cassandra_log","b-gp2-cassdb2-5","beats_input_codec_plain_applied","_grokparsefailure"],"ecs":{"version":"1.0.0"},"agent":{"id":"dfbddef6-0564-4d06-9ec2-d857fc46d2a3","version":"7.1.1","hostname":"b-gp2-cassdb2-5","type":"filebeat","ephemeral_id":"7b292ba5-248c-4d94-a916-07bfee432a3c"},"message":"INFO [RMI TCP Connection(478136)-127.0.0.1] 2019-06-10 23:26:38,176 NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576","@timestamp":"2019-06-10T23:26:38.671Z","@version":"1","host":{"name":"b-gp2-cassdb2-5"}}

INFO [RMI TCP Connection(478136)-127.0.0.1] 2019-06-10 23:26:38,176 NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576

%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}

{
"ipaddress": "127.0.0.1",
"lineno": "176",
"statement": "NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576",
"pid": "478136",
"program": "RMI TCP Connection",
"status": "INFO",
"timestamp": "2019-06-10 23:26:38"
}

Somehow I keep receiving log events where these patterns do not match. That doesn't make sense unless there is a bug in Logstash 7.1.1.

Please do not post pictures of text. Just post the text and format it using markdown so that spacing and other important information are preserved. You can select a block of text and then click the </> option in the toolbar above the edit pane.

If you have something like

Foo     Bar
Baz     ...

then markdown will change it to

Foo Bar
Baz ...

I did put the text along with the image. Do you have a response to my question?

Your post is not properly formatted, which may result in the browser consuming characters as markup when they are actually part of the event that the grok needs to match. At the moment I have no way of knowing exactly what your event looks like.

INFO  [RMI TCP Connection(478136)-127.0.0.1] 2019-06-10 23:26:38,176  NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576

%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}
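For reference, a minimal pipeline along these lines is enough to test the pattern in isolation; the generator input and rubydebug output are just test scaffolding I am assuming here, not part of your beats setup:

input {
  # Emit the sample line once so the grok can be tested on its own
  generator {
    message => 'INFO  [RMI TCP Connection(478136)-127.0.0.1] 2019-06-10 23:26:38,176  NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576'
    count => 1
  }
}
filter {
  grok {
    match => { "message" => "%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}" }
  }
}
output {
  # rubydebug prints every field of the event so you can see what grok extracted
  stdout { codec => rubydebug }
}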

With that message and pattern I get

   "program" => "RMI TCP Connection",
       "pid" => "478136",
 "ipaddress" => "127.0.0.1",
   "message" => "INFO  [RMI TCP Connection(478136)-127.0.0.1] 2019-06-10 23:26:38,176  NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576",
"@timestamp" => 2019-06-11T09:24:42.613Z,
    "lineno" => "176",
    "status" => "INFO",
 "statement" => "NoSpamLogger.java:94 - Maximum memory usage reached (12666798080), cannot allocate chunk of 1048576"

and I do not get a _grokparsefailure tag. Is it possible you pointed path.config at a directory and have a something.conf.bck or something.conf- that contains a different grok? Logstash will concatenate all of the files in path.config to form the configuration.
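If path.config does point at the whole directory, one way to keep backup files out of the configuration is to use a glob so only .conf files are read. A minimal sketch, assuming the usual /etc/logstash layout:

# logstash.yml -- the exact path is an assumption
path.config: "/etc/logstash/conf.d/*.conf"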

I have one config file

root@d-gp2-es46-4:/etc/logstash/conf.d# ls -ltrh
total 12K
-rwxr-xr-x 1 root root 12K Jun 4 23:05 ls_fb_postgres.conf

That suggests that there is another grok filter in that file. One that does not match.

This is the filter I have for Cassandra. I have several patterns and it should just try them one by one until one matches. Am I doing something wrong?

  else if "cassandra_log" in [tags] {
grok {
  match => { "message" => ["%{WORD:status}%{SPACE}\[%{WORD:program}:%{NUMBER:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"] }
  match => { "message" => ["%{WORD:status}%{SPACE}\[%{WORD:program}#%{NUMBER:pid}:%{NUMBER:pid2}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"] }
  match => { "message" => ["%{WORD:status}%{SPACE}\[%{USERNAME:program}-%{INT:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"] }
  match => { "message" => ["%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"] }
  match => { "message" => ["%{WORD:status}%{SPACE}\[(?<program>[^(]*)\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"] }
  remove_field => "message"
}
date {
  match => [ "timestamp", "YY-MM-dd HH:mm:ss" ]
  locale => en
  remove_field => "timestamp"
}

if "_grokparsefailure" not in [tags] {
  mutate {
    remove_field => [ "message", "@version" ]
  }
}

}

Not that I can see. For me that gets a match and no _grokparsefailure. Is it possible the event does not have the tag that you expect?

This all worked on 6.x without any issues; it's only since I went to 7.x that it sometimes stops working correctly. I'm getting messages from Logstash into my indexes, but I'm not sure why random ones don't get processed correctly. I verified the tag and it is cassandra_log, so that is correct.

I can get a _grokparsefailure if I set

pipeline.java_execution: true

in logstash.yml. I normally have that set to false. Setting it to true became the default in v7.

grok {
  match => { "message" => [
    "%{WORD:status}%{SPACE}\[%{WORD:program}:%{NUMBER:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
    "%{WORD:status}%{SPACE}\[%{WORD:program}#%{NUMBER:pid}:%{NUMBER:pid2}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
    "%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
    "%{WORD:status}%{SPACE}\[%{USERNAME:program}-%{INT:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
    "%{WORD:status}%{SPACE}\[(?<program>[^(]*)\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"
  ] }
  remove_field => "message"
}

works.
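For completeness, folding that back into your conditional gives something like the sketch below. It just combines the single-match grok above with the date and mutate parts of your filter, unchanged:

else if "cassandra_log" in [tags] {
  grok {
    # one match option with all the patterns; they are tried in order
    match => { "message" => [
      "%{WORD:status}%{SPACE}\[%{WORD:program}:%{NUMBER:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
      "%{WORD:status}%{SPACE}\[%{WORD:program}#%{NUMBER:pid}:%{NUMBER:pid2}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
      "%{WORD:status}%{SPACE}\[(?<program>[^(]*)\(%{NUMBER:pid}\)-%{IP:ipaddress}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
      "%{WORD:status}%{SPACE}\[%{USERNAME:program}-%{INT:pid}\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}",
      "%{WORD:status}%{SPACE}\[(?<program>[^(]*)\]%{SPACE}%{TIMESTAMP_ISO8601:timestamp},%{NUMBER:lineno}%{SPACE}%{GREEDYDATA:statement}"
    ] }
    remove_field => "message"
  }
  date {
    match => [ "timestamp", "YY-MM-dd HH:mm:ss" ]
    locale => en
    remove_field => "timestamp"
  }
  if "_grokparsefailure" not in [tags] {
    mutate {
      remove_field => [ "message", "@version" ]
    }
  }
}

With break_on_match left at its default, grok tries the patterns in the array in order and stops at the first one that matches, which is the try-one-by-one behaviour you described.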

I tried setting that in the logstash.yml file but the service keeps bouncing on me.

pipeline.java_execution: false

[2019-06-11T16:43:30,831][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

[2019-06-11T16:51:07,962][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-06-11T16:51:08,394][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-06-11T16:51:10,678][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", password=>, hosts=>[//d-gp2-es46-1.imovetv.com:9200, //d-gp2-es46-2.imovetv.com:9200, //d-gp2-es46-3.imovetv.com:9200], sniffing=>false, manage_template=>false, id=>"6a55002c9e602408bc41ca0aa07151d93e7f5327d249d825a3a6989256a1c111", user=>"logstash_system", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_2678d30c-98ca-4672-aa5f-8f20e6533bbb", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-06-11T16:51:10,725][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2019-06-11T16:51:10,840][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@d-gp2-es46-1.imovetv.com:9200/, http://logstash_system:xxxxxx@d-gp2-es46-2.imovetv.com:9200/, http://logstash_system:xxxxxx@d-gp2-es46-3.imovetv.com:9200/]}}
[2019-06-11T16:51:10,862][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@d-gp2-es46-1.imovetv.com:9200/"}
[2019-06-11T16:51:10,876][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-06-11T16:51:10,876][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2019-06-11T16:51:10,884][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@d-gp2-es46-2.imovetv.com:9200/"}
[2019-06-11T16:51:10,893][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@d-gp2-es46-3.imovetv.com:9200/"}
[2019-06-11T16:51:10,899][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//d-gp2-es46-1.imovetv.com:9200", "//d-gp2-es46-2.imovetv.com:9200", "//d-gp2-es46-3.imovetv.com:9200"]}
[2019-06-11T16:51:11,191][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0xaa31b4d sleep>"}
[2019-06-11T16:51:11,232][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2019-06-11T16:51:13,194][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-11T16:51:56,902][FATAL][logstash.runner ] An unexpected error occurred! {:error=>java.nio.channels.ClosedByInterruptException, :backtrace=>["java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)", "sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:216)", "org.logstash.ackedqueue.io.FileCheckpointIO.write(FileCheckpointIO.java:79)", "org.logstash.ackedqueue.Page.forceCheckpoint(Page.java:205)", "org.logstash.ackedqueue.Page.headPageCheckpoint(Page.java:170)", "org.logstash.ackedqueue.Page.checkpoint(Page.java:159)", "org.logstash.ackedqueue.Queue.ack(Queue.java:643)", "org.logstash.ackedqueue.Batch.close(Batch.java:37)", "org.logstash.ackedqueue.AckedBatch.close(AckedBatch.java:33)", "org.logstash.ackedqueue.AckedReadBatch.close(AckedReadBatch.java:77)", "org.logstash.execution.QueueReadClientBase.closeBatch(QueueReadClientBase.java:111)", "org.logstash.execution.QueueReadClientBase.rubyCloseBatch(QueueReadClientBase.java:130)", "org.logstash.execution.QueueReadClientBase$INVOKER$i$1$0$rubyCloseBatch.call(QueueReadClientBase$INVOKER$i$1$0$rubyCloseBatch.gen)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:168)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:317)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:92)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:204)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:191)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:208)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline.RUBY$block$start_workers$2(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:304)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:77)", "org.jruby.runtime.Block.call(Block.java:124)", "org.jruby.RubyProc.call(RubyProc.java:295)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.RubyProc.call(RubyProc.java:270)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.lang.Thread.run(Thread.java:748)"]}
[2019-06-11T16:51:57,013][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

That is covered by this open issue.

OK, so it seems that open issue is not resolved. I'm confused. I'm not doing anything special and it's rather simple. Why would they change the Java execution default to true if it doesn't work? At this point, is there anything I can do to get this working, or am I basically waiting for another release, 7.1.2?

I think you will have to delay your V7 migration until something gets fixed.
