Creating a tag out of the extracted field from the message

Hi

I have parsed a log using "COMBINEDAPACHELOG".
filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}

It's working well, but now I need to extract a string from the message tag and create a new tag out of the extracted one. I'm not sure how to do it.

Any help would be highly appreciated.

I suspect you mean "field" rather than "tag". Both exist as Logstash concepts, and message is a field rather than a tag.

It's not clear exactly what you want to do. Please give an example. The COMBINEDAPACHELOG pattern already extracts fields from all parts of an Apache log entry.

Yeah, my bad. message is a field.

To be more specific:

I have the following lines in a log:
15/10/12 21:21:30 DEBUG db.DBConfiguration: Fetching password from job credentials store
15/10/12 21:21:30 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '1=1' and upper bound '1=1'
15/10/12 21:21:31 INFO mapred.JobClient: Running job: job_201510101XXXXX
15/10/12 21:21:32 INFO mapred.JobClient: map 0% reduce 0%
15/10/12 21:22:13 INFO mapred.JobClient: map 100% reduce 0%
15/10/12 21:22:31 INFO mapred.JobClient: Job complete: job_20151010XXX
15/10/12 21:22:31 INFO mapred.JobClient: Counters: 23
15/10/12 21:22:31 INFO mapred.JobClient: File System Counters
15/10/12 21:22:31 INFO mapred.JobClient: FILE: Number of bytes read=0
15/10/12 21:22:31 INFO mapred.JobClient: FILE: Number of bytes written=190293
15/10/12 21:22:31 INFO mapred.JobClient: FILE: Number of read operations=0
15/10/12 21:22:31 INFO mapred.JobClient: FILE: Number of large read operations=0
15/10/12 21:22:31 INFO mapred.JobClient: FILE: Number of write operations=0
15/10/12 21:22:31 INFO mapred.JobClient: HDFS: Number of bytes read=87
15/10/12 21:22:31 INFO mapred.JobClient: HDFS: Number of bytes written=26807938
15/10/12 21:22:31 INFO mapred.JobClient: HDFS: Number of read operations=2
15/10/12 21:22:31 INFO mapred.JobClient: HDFS: Number of large read operations=0
15/10/12 21:22:31 INFO mapred.JobClient: HDFS: Number of write operations=1
15/10/12 21:22:31 INFO mapred.JobClient: Job Counters
15/10/12 21:22:31 INFO mapred.JobClient: Launched map tasks=1
15/10/12 21:22:31 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=43939
15/10/12 21:22:31 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
15/10/12 21:22:31 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
15/10/12 21:22:31 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
15/10/12 21:22:31 INFO mapred.JobClient: Map-Reduce Framework
15/10/12 21:22:31 INFO mapred.JobClient: Map input records=134771
15/10/12 21:22:31 INFO mapred.JobClient: Map output records=134771
message:15/10/12 04:18:07 INFO mapred.JobClient: Running job: job_201510101XXX

Now I want to extract job_201510101XXX and the Map output records value and put them in separate fields (after creating the fields) so that I can create a graph in Kibana: job_id vs. records.

But I'm not sure how to do that.

Extracting the job id is easy for that line, but it sounds like you want that job id to be connected to lots of other log records. Correct?
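For that single line, something like this should work (job_id is just a field name I'm picking here, and I'm assuming the line has already been read into the message field):

```
filter {
  grok {
    match => [ "message", "Running job: %{NOTSPACE:job_id}" ]
    tag_on_failure => []
  }
}
```

NOTSPACE grabs everything up to the next whitespace, which fits identifiers like job_201510101XXX.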

Yeah, exactly.

I suspect you can use the aggregate filter for this.
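A rough sketch of what that could look like (untested; the field names job_id and map_output_records are my own, and the Ruby event API shown is the Logstash 5+ one — on older versions you'd write event['job_id'] instead of event.get('job_id')). The idea is to stash the job id when the "Running job" line comes through and copy it onto the later counter line, keyed on the host field so events from different machines don't mix:

```
filter {
  grok {
    match => [
      "message", "Running job: %{NOTSPACE:job_id}",
      "message", "Map output records=%{NUMBER:map_output_records:int}"
    ]
    tag_on_failure => []
  }
  if [job_id] {
    # Remember the job id for subsequent events from the same host.
    aggregate {
      task_id => "%{host}"
      code => "map['job_id'] = event.get('job_id')"
    }
  } else if [map_output_records] {
    # Attach the remembered job id to the counter event.
    aggregate {
      task_id => "%{host}"
      code => "event.set('job_id', map['job_id'])"
      map_action => "update"
    }
  }
}
```

With both job_id and map_output_records on the same event, you can build the job_id vs. records graph in Kibana.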