Grok filter not extracting fields

I've also just checked my filebeat-* index pattern in Kibana Management; the fields aren't showing up there either.

I've tried restarting kibana/logstash/filebeat. I've also deleted the filebeat registry file as I've heard that can sometimes resolve issues.

With that filter active I get nothing moving into kibana at all, so it must be an issue with the filter. Any ideas?

If Logstash can't post a document to ES it'll tell you about it in its logfile.

Logstash only seems to be writing out to its log file when initially starting up;

[2018-04-13T15:53:38,758][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-04-13T15:53:38,763][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-04-13T15:53:38,767][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-04-13T15:53:38,778][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.19.32.154:9200"]}
[2018-04-13T15:53:39,455][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-04-13T15:53:39,531][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main2", :thread=>"#<Thread:0x73b67d6c@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[2018-04-13T15:53:39,545][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main2"]}
[2018-04-13T15:53:39,558][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-04-13T15:53:39,580][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-04-13T20:03:31,459][WARN ][logstash.runner          ] SIGHUP received.

This is /var/log/logstash/logstash-plain.log - if there is another log file I should be looking at, could you let me know the path please? I've also checked logstash-stderr and stdout.

Logstash doesn't log that much by default. You can always crank up the log level if you're unsure whether the logging works. Assuming that it does work, what you posted above indicates that everything is working fine. How do you know that Logstash's beats input is receiving any messages?
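For example, the log level can be raised either in logstash.yml or on the command line (the paths here assume a standard package install; adjust for yours):

```
# /etc/logstash/logstash.yml
log.level: debug
```

or, when starting Logstash by hand:

```
bin/logstash --log.level debug -f /etc/logstash/conf.d/
```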

Looks like my log files are being pulled in, setting the log level to debug returns this;

[2018-04-16T09:31:35,981][DEBUG][logstash.pipeline        ] output received {"event"=>{"source"=>"/var/log/logstash/logstash-plain.log", "prospector"=>{"type"=>"log"}, "offset"=>3328895, "@timestamp"=>2018-04-16T08:31:31.791Z, "tags"=>["beats_input_codec_plain_applied"], "host"=>"elastic01", "message"=>"[2018-04-16T09:31:21,748][DEBUG][logstash.pipeline        ] output received {\"event\"=>{\"source\"=>\"/home/log/TASK_ENGINE_7.log\", \"prospector\"=>{\"type\"=>\"log\"}, \"offset\"=>81381209, \"@timestamp\"=>2016-01-09T10:51:15.031Z, \"tags\"=>[\"beats_input_codec_plain_applied\"], \"host\"=>\"hostname02app.domain.internal\", \"message\"=>\"2016-01-09 10:51:14,479 [TASK_ENGINE_7] INFO  Engine.checkEngineParameters(185) - TASK_ENGINE_7 : After Sleep : N : 2 : 1000\", \"@version\"=>\"1\", \"beat\"=>{\"name\"=>\"hostname02app.domain.internal\", \"version\"=>\"6.2.3\", \"hostname\"=>\"hostname02app.domain.internal\"}}}", "@version"=>"1", "beat"=>{"name"=>"elastic01", "version"=>"6.2.3", "hostname"=>"elastic01"}}}
[2018-04-16T09:31:35,981][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:5044, remote: 127.0.0.1:55048] Sending a new message for the listener, sequence: 16
[2018-04-16T09:31:35,981][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:5044, remote: 127.0.0.1:55048] Sending a new message for the listener, sequence: 17
[2018-04-16T09:31:35,981][DEBUG][org.logstash.beats.BeatsHandler] [local: 127.0.0.1:5044, remote: 127.0.0.1:55048] Sending a new message for the listener, sequence: 18

Checking Kibana, my fields still haven't appeared.

I really want to remove the Elasticsearch dependency and verify the Filebeat and Logstash setup first. It's highly suspect that the stdout output didn't appear to produce anything. Try using a file output instead. Are events flowing into that file?

I'm trying to output to a file; this is the output I've set in my config;

output {
  file {
    path => /etc/logstash/debuglog
  }
}

Logstash gives me this error;
ConfigurationError", :message=>"Expected one of #, \", ', -, [, { at line 17, column 13

That corresponds to the "/" at the start of the path to the file. My syntax looks correct compared to the example in the documentation, so I'm not sure what's causing the crash.

Sorted that now - I didn't realise it needed to be in square brackets, and I don't think that's mentioned on the file output documentation page.
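For anyone else who hits this, the fix that matters is that the path has to be a quoted string; something like this works:

```
output {
  file {
    path => "/etc/logstash/debuglog"
  }
}
```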

{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:49,398 [TASK_ENGINE_7] INFO  Engine.checkEngineParameters(185)  - TASK_ENGINE_7 : Before Sleep : N : 2 : 1000","offset":81543218,"source":"/home/eposc02/ers/log/TASK_ENGINE_7.log","@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:31,198 [TASK_ENGINE_9] INFO  Engine.checkEngineParameters(185)  - TASK_ENGINE_9 : After Sleep : N : 20 : 500","source":"/home/eposc02/ers/log/TASK_ENGINE_9.log","offset":56623837,"@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:36,259 [MonitorSessCons] INFO  ConnectionManager.closeFreeConnections(445) - Closed 0 connections for pool repwfm","source":"/home/eposc02/ers/log/DFR_ENGINE.log","offset":65014757,"@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:35,922 [ROUTING_ENGINE] DEBUG RouteFilesEngine.processRun(406)  - Done reading some files for this loop.","offset":82403066,"source":"/home/eposc02/ers/log/ROUTING_ENGINE.log","@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:58,984 [COMPLETION_ENGINE] INFO  Engine.checkEngineParameters(185) - COMPLETION_ENGINE : Before Sleep : N : 40 : 3000","source":"/home/eposc02/ers/log/COMPLETION_ENGINE.log","offset":9282710,"@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:59,788 [REFERRAL_ENGINE] INFO  Engine.checkEngineParameters(185) - REFERRAL_ENGINE : After Sleep : N : 10 : 2000","offset":8422731,"source":"/home/eposc02/ers/log/REFERRAL_ENGINE.log","@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:01:42,694 [FAKE_ACK_ENGINE] DEBUG AutoAckEngineTestOnly.processRun(236) - Done reading some files for this loop.","source":"/home/eposc02/ers/log/FAKE_ACK_ENGINE.log","offset":1359969,"@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:02:34,090 [TASK_ENGINE_5] INFO  Engine.checkEngineParameters(185)  - TASK_ENGINE_5 : Before Sleep : N : 2 : 10000","source":"/home/eposc02/ers/log/TASK_ENGINE_5.log","offset":9302560,"@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:04:33,773 [DATAFLOW_ENGINE] INFO  Engine.checkEngineParameters(185) - DATAFLOW_ENGINE : After Sleep : N : 2 : 10000","offset":8411106,"source":"/home/eposc02/ers/log/DATAFLOW_ENGINE.log","@timestamp":"2016-01-09T12:56:02.609Z"}
{"@version":"1","beat":{"hostname":"eposc02app.e4bh.internal","version":"6.2.3","name":"eposc02app.e4bh.internal"},"prospector":{"type":"log"},"tags":["beats_input_codec_plain_applied","_grokparsefailure"],"host":"eposc02app.e4bh.internal","message":"2016-01-09 11:04:33,772 [BACS_ENGINE] INFO  Engine.checkEngineParameters(185) - BACS_ENGINE : After Sleep : N : 1 : 10000","source":"/home/eposc02/ers/log/BACS_ENGINE.log","offset":7895490,"@timestamp":"2016-01-09T12:56:02.609Z"}

I can see I'm getting a grok parsing failure for the files which are moving into Logstash - I'm not sure if there is anything wrong with the grok I posted above, as the grok constructor thinks it's alright. Any suggestions?

Also, Logstash now crashes after a short time with the error "Error writing to stream /var/log/logstash/logstash-plain.log". This doesn't happen when using the normal output (which writes to logstash-plain.log).

That corresponds to the "/" at the start of the path to the file. My syntax looks correct compared to the example in the documentation, so I'm not sure what's causing the crash.

Which example in the documentation gave you the impression that the path wouldn't have to be quoted?

This section here;

Description
This output writes events to files on disk. You can use fields from the event as parts of the filename and/or path.

By default, this output writes one event per line in JSON format. You can customise the line format using the line codec like

output {
 file {
   path => ...
   codec => line { format => "custom format: %{message}"}
 }
}

I just glanced at this for the syntax. The example further down the page does have the path in quotes, so this is more my mistake, though I do think it could be made a little more user-friendly by putting the ellipsis in quotes to give it more visibility.

I did try wrapping the path in quotes, which didn't resolve the issue; I had to put it in square brackets and quotes to get it working.

I can see I'm getting a grok parsing failure for the files which are moving into Logstash - I'm not sure if there is anything wrong with the grok I posted above, as the grok constructor thinks it's alright. Any suggestions?

The grok expression

%{TIMESTAMP_ISO8601:Date} %{SYSLOG5424SD:Client_IP} %{WORD:Log_Level} %{JAVACLASS:Java_Class} %{GREEDYDATA:Freeform_Text}

doesn't match

2016-01-09 11:01:49,398 [TASK_ENGINE_7] INFO  Engine.checkEngineParameters(185)  - TASK_ENGINE_7 : Before Sleep : N : 2 : 1000

because JAVACLASS only matches "Engine.checkEngineParameters" and then you require a space afterwards but you actually get "(185)".
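A sketch of a pattern that should cover that line - the parenthesised line number and the variable run of spaces are the two things the original expression missed, and the field names are only examples:

```
%{TIMESTAMP_ISO8601:Date} %{SYSLOG5424SD:Client_IP} %{WORD:Log_Level}\s+%{JAVACLASS:Java_Class}\(%{INT:Line_Number}\)\s+- %{GREEDYDATA:Freeform_Text}
```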

From looking at the log files I'm trying to import, they all have slightly different syntax/entries on each line. All the logs are written to and pulled from the same folder - I'm guessing I have to set up a pipeline for each log file I'm importing then?

Some of the log files I have do match the grok expression perfectly, will grok not filter anything if it runs into an error parsing a line?

If the grok filter can't match the field against any of the expressions given (yes, you can list multiple expressions that will be tried in order) it'll tag the event with _grokparsefailure and Logstash will continue with the remaining filters. Having one Logstash pipeline for each kind of log isn't necessary.
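If it helps while debugging, you can also route the failed events somewhere visible with a conditional on that tag (the file path here is just an example):

```
output {
  if "_grokparsefailure" in [tags] {
    file { path => "/tmp/grok_failures.log" }
  }
}
```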

And I'm guessing to do that I just need to use multiple match => lines in the config?

That might work but the documentation of the match option suggests that you use an array of strings.
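Something along these lines, with the patterns tried in order (the first pattern is abbreviated from your expression above, and the bare GREEDYDATA at the end is just a catch-all fallback):

```
filter {
  grok {
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601:Date} %{SYSLOG5424SD:Client_IP} %{WORD:Log_Level} %{GREEDYDATA:Freeform_Text}",
        "%{GREEDYDATA:Freeform_Text}"
      ]
    }
  }
}
```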

Okay, I'll give that a go now. Thank you for all your help, Magnus - sorry if I asked any stupid/basic questions, I'm quite new to this whole thing.

Thanks again!

Hi Magnus,

I've created a rudimentary multiple-match filter. Outputting to a file works as expected, but when I try to output to Elasticsearch I get the following error messages in the log.

[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>125}
[2018-04-16T15:12:10,332][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>500, :url=>"http://localhost:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}

Logstash was working fine outputting to Elasticsearch when my filter was set to GREEDYDATA only. Now nothing lets me output to Elasticsearch. Outputting to file still appears to be working correctly. I still have 3.5 GB of disk space free on the drive.

For some reason the index has been set to read-only. Check your ES logs and see if "FORBIDDEN/12/index read-only / allow delete (api)" turns up anything helpful.

Hi Magnus,

Solved it. Logstash had created a 55 GB log file whilst it'd been writing to file. This had filled the disk and locked the indices, and deleting the file didn't resolve it on its own.

I ran the following command, which has resolved the issue:

```
curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
```

From a Google search it looks like this is an issue quite a few users are experiencing.

Anyway, it's all working now. Thanks a bunch for your help Magnus! Wouldn't have been able to do it without your help.
