Start by matching the first field on the line. Do not try to match anything more than that. I never use grok debuggers; I use logstash itself. Start a copy of logstash with
--config.reload.automatic
enabled. That way you only pay the startup cost once, and it will reload the configuration and reinvoke the pipeline each time you modify the configuration. I would start with
input { generator { count => 1 lines => [ '2020/01/02 08:40:16 UUID: 5E82093B:7550_B0092619:01BB_5E0DAC6F_33A27FC:05AD - URL: https://endpoint.point/path/to/api 0.011636824 elapsed(s)' ] } }
filter {
    grok {
        pattern_definitions => { "MYDATETIME" => "%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}" }
        match => { "message" => "^%{MYDATETIME:time} " }
    }
}
output { stdout { codec => rubydebug { metadata => false } } }
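For reference, the launch command looks something like this (test.conf is just a placeholder for wherever you save the configuration above):

bin/logstash -f test.conf --config.reload.automatic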
I am not aware of a pattern that ships with logstash that matches your date/time format, so I defined one myself. Once that works, edit the configuration (in another window) and save it; logstash will pick up the change and process the event again.
Note that I anchor the pattern using ^, so it has to match at the start of the line. Read this to understand why. (In short, an anchored pattern lets grok give up on a non-matching line immediately, instead of retrying the match from every position in the line.)
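To illustrate the difference between the two forms (both use the MYDATETIME pattern defined above):

# anchored: fails fast if the line does not begin with the date
match => { "message" => "^%{MYDATETIME:time} " }
# unanchored: on a non-matching line the regex engine retries from every position before giving up
match => { "message" => "%{MYDATETIME:time} " }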
Once you have that working, start adding fields. You should end up with something like
match => { "message" => "^%{MYDATETIME:time} UUID: %{NOTSPACE:uuid} - URL: %{URI:uri} %{NUMBER:elapsed:float} " }
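Putting the pieces together, the grok filter ends up looking roughly like this (the field names time, uuid, uri, and elapsed are just the ones I chose; rename them to suit yourself):

filter {
    grok {
        pattern_definitions => { "MYDATETIME" => "%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}" }
        match => { "message" => "^%{MYDATETIME:time} UUID: %{NOTSPACE:uuid} - URL: %{URI:uri} %{NUMBER:elapsed:float} " }
    }
}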
To answer your specific questions:
1 - If the amount or type of whitespace (tab versus space) between two fields is variable, you can use \s+ (one or more characters that count as whitespace). If the space is sometimes missing entirely, you can use \s* (zero or more). See the sketch after this list.
2 - If a field is optional you can wrap it in ( and )? -- hard to say more without examples, but the sketch after this list shows the general shape.
3 - I used NOTSPACE to capture the UUID.
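As a sketch only (against an imagined variant of your sample line where the whitespace after UUID: varies and the URL section is sometimes missing, not something taken from your real logs), those two pieces of syntax would look like this:

match => { "message" => "^%{MYDATETIME:time} UUID:\s+%{NOTSPACE:uuid}( - URL: %{URI:uri})?\s*%{NUMBER:elapsed:float} " }

Here \s+ absorbs any run of spaces or tabs after UUID:, and wrapping - URL: %{URI:uri} in ( )? makes the whole URL section optional.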