How to add all field values in a single field?

Hi @AquaX, thanks for your reply. I have used the multiline codec to parse this file, but I am not getting the desired output. This is my error log file:

**taskid: 1 hostname: gadi-cpu-clx-2197.gadi.nci.org.au**
 module_io_quilt_old.F        2931 F
Quilting with   1 groups of   0 I/O tasks.
 Ntasks in X           32 , ntasks in Y           36
WRF V4.1.2 MODEL
 *************************************
 Parent domain
 ids,ide,jds,jde            1         540           1         363
 ims,ime,jms,jme           11          41          -4          18
 ips,ipe,jps,jpe           18          34           1          11
 *************************************
DYNAMICS OPTION: Eulerian Mass Coordinate
 alloc_space_field: domain            1 ,               27827120  bytes allocated
 RESTART run: opening wrfrst_d01_1979-12-19_00:00:00 for reading
 Input data is acceptable to use:
Max map factor in domain 1 =  1.26. Scale the dt in the model accordingly.
 SOIL TEXTURE CLASSIFICATION = STAS FOUND          19  CATEGORIES
ThompMP: computing qr_acr_qg
ThompMP: computing qr_acr_qs
ThompMP: computing freezeH2O
 *************************************
 Nesting domain
 ids,ide,jds,jde            1         616           1         501
 ims,ime,jms,jme           11          50          -4          25
 ips,ipe,jps,jpe           21          40           1          14
 INTERMEDIATE domain
 ids,ide,jds,jde          203         331          88         193
 ims,ime,jms,jme          199         222          83         102
 ips,ipe,jps,jpe          209         212          86          92
 *************************************
 alloc_space_field: domain            2 ,                6245760  bytes allocated
 alloc_space_field: domain            2 ,               44879412  bytes allocated
 RESTART: nest, opening wrfrst_d02_1979-12-19_00:00:00 for reading
 Input data is acceptable to use:
 SOIL TEXTURE CLASSIFICATION = STAS FOUND          19  CATEGORIES
 Input data is acceptable to use:
Input data processed for aux input   4 for domain   1
 Input data is acceptable to use:
WRF restart, LBC starts at 1979-12-19_00:00:00 and restart starts at 1979-12-19_00:00:00
 LBC for restart: Starting valid date = 1979-12-19_00:00:00, Ending valid date = 1979-12-19_01:00:00
 LBC for restart: Restart time            = 1979-12-19_00:00:00
 LBC for restart: Found the correct bounding LBC time periods
 LBC for restart: Found the correct bounding LBC time periods for restart time = 1979-12-19_00:00:00
WRF NUMBER OF TILES FROM OMP_GET_MAX_THREADS =   1
 Tile Strategy is not specified. Assuming 1D-Y
WRF TILE   1 IS     18 IE     34 JS      1 JE     11
WRF NUMBER OF TILES =   1
 Input data is acceptable to use:
Input data processed for aux input   4 for domain   2
WRF NUMBER OF TILES FROM OMP_GET_MAX_THREADS =   1
 Tile Strategy is not specified. Assuming 1D-Y
WRF TILE   1 IS     21 IE     40 JS      1 JE     14
WRF NUMBER OF TILES =   1
 Input data is acceptable to use:
Input data processed for aux input   4 for domain   1
 Input data is acceptable to use:
 Input data is acceptable to use:
**forrtl: error (78): process killed (SIGTERM)**
**Image              PC                Routine            Line        Source             **
**wrf.exe            00000000033C6904  for__signal_handl     Unknown  Unknown**
**libpthread-2.28.s  000014BC5D449B20  Unknown               Unknown  Unknown**
**libuct.so.0.0.0    000014BC487376E4  uct_mm_iface_prog     Unknown  Unknown**
**libucp.so.0.0.0    000014BC4896B0EA  ucp_worker_progre     Unknown  Unknown**
**mca_pml_ucx.so     000014BC48DCC317  mca_pml_ucx_progr     Unknown  Unknown**

My goal is to write a Logstash configuration pipeline that extracts only the first line, i.e. "taskid: 1 hostname: gadi-cpu-clx-2197.gadi.nci.org.au", and then extracts the remaining data from that line until the end, i.e. "forrtl: error (78): process killed (SIGTERM)...." and so on.
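For example, the first line on its own would ideally end up parsed into separate fields, something like the rough grok sketch below (the field names taskid and hostname are just placeholders I made up, not fields I already have):

filter {
  # Hypothetical grok for the header line only; "taskid" and "hostname"
  # are placeholder field names.
  grok {
    match => { "message" => "^taskid:\s+%{INT:taskid}\s+hostname:\s+%{HOSTNAME:hostname}" }
  }
}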

I have written a Logstash configuration pipeline to do this, but it takes all the data as a single line. This is my configuration pipeline:

input {
      file {
            path => "/root/matthew/rsl.error.0001"
            start_position => "beginning"
            codec => multiline {
            charset => "UTF-8"
            pattern => "[a-zA-Z]+:\s+\d\s[a-zA-Z]+:\s+[a-zA-Z][a-zA-Z][a-zA-Z][a-zA-Z]-[a-zA-Z]+-[a-zA-Z]+-[0-9]+\.[a-zA-Z]+\.[a-zA-Z]+\.[a-zA-Z]+\.[a-zA-Z][a-zA-Z]*.\s[a-zA-Z]+:(\s+([a-zA-Z]+\s+)+)\(\d\d\):\s+([a-zA-Z]+( [a-zA-Z]+)+)\s+\([a-zA-Z]+\)\s([a-zA-Z]+( +[a-zA-Z]+)+)\.[a-zA-Z]+\s+([0-9]+([a-zA-Z]+[0-9]+)+)\s\s[a-zA-Z][a-zA-Z][a-zA-Z]__([a-zA-Z]+(_[a-zA-Z]+)+)(\s+([a-zA-Z]+\s+)+)[a-zA-Z]+([+-]?(?=\.\d|\d)(?:\d+)?(?:\.?\d*))(?:[eE]([+-]?\d+))?\.[a-zA-Z]\s\s([0-9]+([a-zA-Z]+[0-9]+)+)\s\s[a-zA-Z]+\s+[a-zA-Z]+\s+[a-zA-Z]+"
            negate => "true"
            what => "previous"
            auto_flush_interval => 5
            }
            sincedb_path => "/dev/null"
       }
}
output {
    stdout {}
}

Please provide me with a regex pattern to achieve this.
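For reference, I was also wondering whether a much simpler multiline pattern keyed on the "taskid:" line would get me closer, something like the sketch below. I have not verified it; as far as I understand it would start a new event at each line beginning with "taskid:" and append every other line to the previous event, so the header line and the rest of the log would still end up in the same event and would probably need a further filter to split them.

input {
  file {
    path => "/root/matthew/rsl.error.0001"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => multiline {
      # A line starting with "taskid:" begins a new event; all other
      # lines are appended to the previous event.
      pattern => "^taskid:"
      negate => true
      what => "previous"
      auto_flush_interval => 5
    }
  }
}
output {
  stdout { codec => rubydebug }
}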