Grok filter not extracting fields

So from looking at the log files I'm trying to import, they all have slightly different syntax/entries on each line. All the logs are written to and pulled from the same folder. I'm guessing I have to set up a pipeline for each log file I'm importing then?

Some of the log files I have do match the grok expression perfectly. If grok runs into an error parsing a line, will it stop filtering anything at all?

If the grok filter can't match the field against any of the expressions given (yes, you can list multiple expressions, and they'll be tried in order), it tags the event with _grokparsefailure and Logstash continues with the remaining filters. Having one Logstash pipeline for each kind of log isn't necessary.

And I'm guessing to do that I just need to use multiple match => lines in the config?

That might work, but the documentation of the match option suggests that you use an array of strings.
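
For example, something along these lines should do it (the patterns and field names here are just placeholders to illustrate the array form, not taken from your logs):

filter {
  grok {
    match => {
      # Each pattern in the array is tried in order until one matches.
      "message" => [
        "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}",
        "%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:msg}"
      ]
    }
    # Events matching none of the patterns are tagged _grokparsefailure
    # and passed on to the remaining filters.
  }
}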

Okay, I'll give that a go now. Thank you for all your help Magnus - sorry if I asked any stupid/basic questions, I'm quite new to this whole thing.

Thanks again!

Hi Magnus,

I've created a rudimentary multiple-match filter. Outputting to a file works as expected, but when I try to output to Elasticsearch I get the following error messages in the log.

[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,337][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-04-16T15:12:00,339][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>125}
[2018-04-16T15:12:10,332][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>500, :url=>"http://localhost:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s"}

Logstash was working fine outputting to Elasticsearch when my filter was set to GREEDYDATA only; now nothing lets me output to Elasticsearch. Outputting to file still appears to be working correctly, and I still have 3.5 GB of disk space free on the drive.

For some reason the index has been set to read-only. Check your ES logs and see if there's anything helpful around the FORBIDDEN/12/index read-only / allow delete (api) message.
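
Something like this will also show you directly whether the block has been set (assuming Elasticsearch is reachable on localhost:9200, as in your Logstash log output):

curl -XGET 'http://localhost:9200/_all/_settings/index.blocks.*?pretty'

Any index carrying the block will show index.blocks.read_only_allow_delete as true in the output.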

Hi Magnus,

Solved it. Logstash had created a 55 GB log file whilst it'd been writing to file. This had filled the disk and locked the indices, and the block didn't clear itself after I deleted the file.

I ran the following command, which has resolved the issue:

curl -XPUT -H "Content-Type: application/json" https://[YOUR_ELASTICSEARCH_ENDPOINT]:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

From a Google search it looks like this is an issue quite a few users are running into.
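
As far as I can tell, Elasticsearch sets the index.blocks.read_only_allow_delete block once the flood-stage disk watermark is reached, and on this version it doesn't lift the block by itself after space is freed, which is why deleting the file alone wasn't enough. If disk space is likely to stay tight, the watermarks can be tuned in elasticsearch.yml; a rough sketch (the values shown are just the stock defaults):

cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%

Freeing disk space (or pointing the Logstash file output at a different drive) is still the safer fix than raising these.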

Anyway, it's all working now. Thanks a bunch for your help, Magnus! Wouldn't have been able to do it without you.
