Conditional processing in txt file

Hi,
I need to parse data through Logstash, but a single txt file contains four different record patterns.
How can I make Logstash write to a different index after it encounters a given pattern?
I would also appreciate help setting up a grok pattern for the sample data.

Raw sample data:

# snapshot,65767220,20220601044503
# Network Elements
0097,s,n,2719,,s,,,,3,,p,
C313,s,n,4767,,s,,y,,,,,
C314,s,n,2701,,s,,y,,,,,
C0108,s,n,14101,,g,,,,,,p,
CC110,s,n,14641,,s,,,,3,,r,
CC111,s,n,14642,,s,,,,3,,r,
CC112,s,n,14641,,s,,,,3,,r,
CC113,s,n,14640,,s,,,,3,,r,
CC114,s,n,14642,,s,,,,3,,r,

# DNs
C0108,,1,,,,,,,,,-2
146746481,,,CC318,,,,,,,,-2
158656701,,,,C1597,,,,,,,-2
483895991,,1,,C4898,,,,,,,-2
487359076,,,,CC40,,,,,,,-2
487359078,,,,CC40,,,,,,,-2
487359077,,,,CC40,,,,,,,-2
602560130,,,CC191,,,,,,,,-2
602560131,,,CC191,,,,,,,,-2

# DN Blocks
224135896,224135897,,,,,,,,,,,
606019700,606019799,,CC118,,,,,,,,,
728441381,728441390,0,,CC40,,,,,,,,
4873590760,4873590789,0,,CC40,,,,,,,,
48120000000,48122112649,0,,,,,,,,,,
48122112650,48122112659,0,,CC40,,,,,,,,

# IMEIs
00000801163158,0,n,n,y
00001004020051,0,n,n,y
00006053206182,0,n,n,y
00006053303925,0,n,n,y
00007504958630,0,n,n,y
00009053373401,0,n,n,y
00060633330647,0,n,n,y
00090533747262,0,n,n,y

Field layout for each row type:

# Network Elements
%{ID},%{Type},%{PCType},%{PC},%{GC},%{RI},%{SSN},%{CCGT},%{NTT},%{NNAI},%{NNP},%{DA},%{SRFIMSI}

# DNs
%{DN},%{IMSI},%{PT},%{SP},%{RN},%{VMS},%{GRN},%{ASD},%{ST},%{NSDN},%{CGBL},%{CDBL}

# DN Blocks
%{BDN},%{EDN},%{PT},%{SP},%{RN},%{VMS},%{GRN},%{ASD},%{ST},%{NSDN},%{CGBL},%{CDBL}

# IMEIs
%{IMEI},%{SVN},%{WHITE},%{GRAY},%{BLACK}
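The layouts above translate almost directly into dissect mappings. A minimal sketch for the IMEI rows only (field names taken from the layout above; everything else here is illustrative):

```
filter {
  dissect {
    # A dissect filter takes one mapping per source field;
    # this one handles only the "# IMEIs" layout.
    mapping => {
      "message" => "%{IMEI},%{SVN},%{WHITE},%{GRAY},%{BLACK}"
    }
  }
}
```

Handling all four layouts in one pipeline needs a condition around each dissect, since a single dissect cannot hold four alternative mappings for `message`.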

Many Thanks

The main question is: how do I set up a condition for each of the four blocks of different data ("# Network Elements", "# DNs", "# DN Blocks", "# IMEIs"), each with its own pattern?
On the other hand, a dissect filter only supports one mapping per source field.
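One common workaround is to remember the most recent `# ...` header line with a ruby filter and then apply a different dissect mapping per section. This is only a sketch, untested against your exact file, and it assumes events arrive in file order (i.e. `pipeline.workers: 1` and `pipeline.ordered: true`); the `section` field name is my own choice:

```
filter {
  # Drop the blank lines between records.
  if [message] =~ /^\s*$/ { drop { } }

  ruby {
    init => "@@section = 'unknown'"
    code => "
      msg = event.get('message').to_s
      if msg.start_with?('#')
        # '# Network Elements' -> 'network_elements' (index-friendly token)
        @@section = msg[1..-1].strip.downcase.gsub(' ', '_')
        event.cancel  # the header line itself carries no record data
      else
        event.set('section', @@section)
      end
    "
  }

  if [section] == "network_elements" {
    dissect {
      mapping => { "message" => "%{ID},%{Type},%{PCType},%{PC},%{GC},%{RI},%{SSN},%{CCGT},%{NTT},%{NNAI},%{NNP},%{DA},%{SRFIMSI}" }
    }
  } else if [section] == "dns" {
    dissect {
      mapping => { "message" => "%{DN},%{IMSI},%{PT},%{SP},%{RN},%{VMS},%{GRN},%{ASD},%{ST},%{NSDN},%{CGBL},%{CDBL}" }
    }
  } else if [section] == "dn_blocks" {
    dissect {
      mapping => { "message" => "%{BDN},%{EDN},%{PT},%{SP},%{RN},%{VMS},%{GRN},%{ASD},%{ST},%{NSDN},%{CGBL},%{CDBL}" }
    }
  } else if [section] == "imeis" {
    dissect {
      mapping => { "message" => "%{IMEI},%{SVN},%{WHITE},%{GRAY},%{BLACK}" }
    }
  }
}
```

Note that the `# snapshot,...` line is also treated as a header and dropped by this sketch; if you need its timestamp, handle it before `event.cancel`.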

Any ideas on how to approach this? Can anyone help find a solution?
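For the per-index routing part: once each event carries a field identifying its block (here assumed to be called `section`, set by an earlier filter), the elasticsearch output can interpolate it into the index name. Hosts and the index prefix below are placeholders:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # placeholder host
    index => "mydata-%{[section]}"       # e.g. mydata-dns, mydata-imeis
  }
}
```

Keep in mind Elasticsearch index names must be lowercase and may not contain spaces, so normalise the section value before using it in the index name.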

See this thread.