My aim is to include an identifier in the logs of a few Jenkins jobs that run in sequence.
Kindly give your opinion on the following:
Is there any way to append an ID such as '1234' to each document entry, i.e. each line of a log, using Logstash? I am not planning to append a hash value, but rather a readable ID. I can find ways to include an incrementing value for each log entry; however, I want a single ID so that a particular sequence of jobs can be identified.
Is it possible to pass a variable from a file to the Filebeat index name? The variable will hold an identifier such as a number.
Where do you get the value from? Is it available in all the documents, e.g. through a file path or something similar? How do you logically determine that two documents belong to the same job?
@Christian_Dahlqvist thanks for responding. I get the value, i.e. the unique number, in only one document entry of the first Jenkins job; I don't get the number in every document entry. If the number were present in every entry, I could easily use grok to extract it into a new field. That is the problem.
What I am planning to do:
1. Use grok to extract the number into a new field.
2. Use the Ruby filter to write that field, which holds my unique number, to a file.
3. Use the Ruby filter to read from that path (which holds the file with my unique number) and add a new field with the unique number to every document entry.
However, I will have to create a new job at the beginning of the sequence of Jenkins jobs just to generate the unique number.
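The steps above could be sketched as a Logstash filter block like the following. This is only a rough sketch of my idea, not a tested pipeline: the grok pattern, the field name `sequence_id`, and the path `/tmp/sequence_id.txt` are all placeholders that would need to match the real log format and environment.

```
filter {
  # Step 1: extract the unique number from the one line that contains it.
  # The pattern "sequence id: 1234" is an assumed log format -- adjust to taste.
  grok {
    match => { "message" => "sequence id: %{NUMBER:sequence_id}" }
    tag_on_failure => []
  }

  # Step 2: if this event carried the number, persist it to a file.
  if [sequence_id] {
    ruby {
      code => 'File.write("/tmp/sequence_id.txt", event.get("sequence_id"))'
    }
  }

  # Step 3: for every event, read the last persisted number (if any)
  # and add it as a field on the document.
  ruby {
    code => '
      path = "/tmp/sequence_id.txt"
      event.set("sequence_id", File.read(path).strip) if File.exist?(path)
    '
  }
}
```

One caveat with this approach: it relies on the event that carries the number being processed before all the events that need it, so it would presumably only be safe with a single pipeline worker (`-w 1`), since Logstash may otherwise process events in parallel and out of order.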
Is my approach correct? Is there any input you would like to give?