Hi all! I am using Elastic Cloud to collect information from a custom log. I created an ingest pipeline, but then found that I had to change/update a portion of the pipeline (a dissect processor), which went from something like:
%{source.ip} %{?user.ident} %{?user.name} [%{@timestamp}] \"%{http.request.method} %{url.original} HTTP/%{http.version}\" %{http.response.status_code} %{http.response.body.bytes}
to something like:
%{source.ip} %{?user.ident->} %{?user.name} [%{@timestamp}] \"%{http.request.method} %{url.original} HTTP/%{http.version}\" %{http.response.status_code} %{http.response.body.bytes}
to compensate for the user.name field being prefixed with a space by user input. After updating that pipeline processor, are there any steps that need to be followed to kick-start the ingestion of data again? Currently, all I am seeing is Filebeat connecting to a backoff URL repeatedly, while the log on the machine keeps getting new data added to it.
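For reference, here is roughly how I have been sanity-checking the updated pattern with the simulate pipeline API before worrying about Filebeat (the sample log line below is made up to mimic my data, with two spaces after the ident field; the `->` right-padding modifier on `%{?user.ident->}` is what absorbs the extra space):

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "dissect": {
          "field": "message",
          "pattern": "%{source.ip} %{?user.ident->} %{?user.name} [%{@timestamp}] \"%{http.request.method} %{url.original} HTTP/%{http.version}\" %{http.response.status_code} %{http.response.body.bytes}"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "1.2.3.4 -  frank [10/Oct/2000:13:55:36 -0700] \"GET /index.html HTTP/1.1\" 200 2326"
      }
    }
  ]
}
```

The simulated doc parses cleanly with the new pattern, so the pipeline itself seems fine; it's the resumption of ingestion I'm unsure about.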
Thanks!