Hi all! I am using Elastic Cloud to collect information from a custom log. I created an ingest pipeline but then found that I had to change/update a portion of the pipeline (a dissect processor), which went from something like this (the layout below is a made-up stand-in for my real format; only the user.name handling matters):
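```
{
  "dissect": {
    "field": "message",
    "pattern": "ts=%{ts} user=%{user.name} msg=%{msg}"
  }
}
```

to something like this, where the literal space after `user=` becomes part of the delimiter:

```
{
  "dissect": {
    "field": "message",
    "pattern": "ts=%{ts} user= %{user.name} msg=%{msg}"
  }
}
```

(Leaving the pattern alone and adding a trim processor on user.name afterwards would have been another way to do it.)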
That change was to compensate for the user.name field being prefixed with a space by user input. After updating that pipeline processor, are there steps that need to be followed to kick-start the ingestion of data again? Currently all I am seeing is Filebeat connecting to a backoff URL repeatedly, while the log on the machine keeps getting new data added to it.
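For reference, this is roughly how I checked the edited pipeline itself (pipeline name and sample line here are placeholders, not my real ones):

```
POST _ingest/pipeline/my-custom-log-pipeline/_simulate
{
  "docs": [
    { "_source": { "message": "ts=2024-05-01T12:00:00Z user= jsmith msg=logged in" } }
  ]
}
```

The simulated doc comes back with user.name parsed cleanly, so I believe the pipeline definition itself is fine; the question is only whether anything has to be done for it to take effect on live traffic.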
You're right, it does get used immediately. I finally found where I could turn on debug mode for the integration (in this case Custom Logs) and, lo and behold, a programmer had created his/her own format for the log on one of the servers this ingest pipeline was handling, which gummed up the works for just about everything. I am considering just removing the policy from the offending machine and creating a custom pipeline for the "custom" log.
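Even if I keep the shared pipeline, an on_failure handler on the dissect processor would keep one rogue format from holding up everything else. A rough sketch (pipeline name, pattern, and failure index are all made up):

```
PUT _ingest/pipeline/my-custom-log-pipeline
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "ts=%{ts} user= %{user.name} msg=%{msg}",
        "on_failure": [
          { "set": { "field": "error.message", "value": "dissect failed: {{ _ingest.on_failure_message }}" } },
          { "set": { "field": "_index", "value": "custom-log-failures" } }
        ]
      }
    }
  ]
}
```

Documents the dissect can't parse get routed to their own index instead of failing outright, so the odd format can be inspected later without blocking everything else.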
I appreciate the response, and the confirmation of my suspicion that there was no extra step to take and something else was wrong.