I've got 8000 log files being imported to Logstash via Filebeat. The log files all look something like this:
START-OF-LOG: 3.0
LOCATION: DX
CALLSIGN: SP1EGN
CLUB: SP-CW-C
CONTEST: CQ-WW-CW
CATEGORY-OPERATOR: SINGLE-OP
CATEGORY-ASSISTED: NON-ASSISTED
CATEGORY-BAND: 40M
CATEGORY-MODE: CW
CATEGORY-POWER: LOW
CATEGORY-STATION: FIXED
CATEGORY-TIME: 6-HOURS
CATEGORY-TRANSMITTER: ONE
CATEGORY-OVERLAY: CLASSIC
CLAIMED-SCORE: 42
OPERATORS: SP1EGN
NAME: Robert Nowak
CERTIFICATE: NO
CREATED-BY: N1MM Logger+ 1.0.6903.0
QSO: 7000 CW 2017-11-26 1738 SP1EGN 599 15 EC2DX 599 14
QSO: 7000 CW 2017-11-26 1748 SP1EGN 599 15 RZ7T 599 16
QSO: 7000 CW 2017-11-26 1753 SP1EGN 599 15 US1Q 599 16
QSO: 7000 CW 2017-11-26 1805 SP1EGN 599 15 RX7M 599 16
QSO: 7000 CW 2017-11-26 2055 SP1EGN 599 15 IO2X 599 15
QSO: 7000 CW 2017-11-26 2102 SP1EGN 599 15 UA7K 599 16
END-OF-LOG:
All the rows above the QSO rows are attributes common to every QSO row in that file. A single log file can contain up to 15,000 QSO rows. I want to attach those attribute rows as key:value pairs to each QSO document so they can be used as dimensions for querying and grouping. There is no guarantee that the number or names of the key:value pairs will be the same across input files. At this point I have a pipeline definition that successfully parses each row type, but I still need to merge the common attributes into every QSO document.
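For reference, here is a simplified sketch of the parsing side of my pipeline (field names like qso_date and header_key are illustrative, not exactly what I use):

```
filter {
  if [message] =~ /^QSO:/ {
    # QSO rows: freq, mode, date, time, sent call/RST/exchange, received call/RST/exchange
    grok {
      match => {
        "message" => "^QSO: %{NUMBER:frequency} %{WORD:mode} %{NOTSPACE:qso_date} %{NUMBER:qso_time} %{NOTSPACE:sent_call} %{NUMBER:sent_rst} %{NUMBER:sent_exch} %{NOTSPACE:recv_call} %{NUMBER:recv_rst} %{NUMBER:recv_exch}$"
      }
    }
  } else if [message] =~ /^(START|END)-OF-LOG/ {
    # The START/END markers carry nothing I need to keep
    drop { }
  } else {
    # Every other row is a generic "KEY: value" header attribute
    grok {
      match => { "message" => "^%{DATA:header_key}: %{GREEDYDATA:header_value}$" }
    }
  }
}
```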
I don't know how to do this, or whether it belongs in the Logstash pipeline or in the Filebeat config.
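One thing I wondered about on the Filebeat side is whether I should ship each file as a single multiline event and split it back apart in Logstash. Roughly like this (the path is made up, and the default multiline.max_lines of 500 would truncate my larger files, so it would need raising):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /data/contest-logs/*.log   # illustrative path
    # Start a new event at START-OF-LOG and append all following lines to it,
    # so one whole file becomes one event
    multiline.pattern: '^START-OF-LOG'
    multiline.negate: true
    multiline.match: after
    multiline.max_lines: 20000     # default 500 is far too low for 15,000 QSO rows
```

But I'm not sure a 15,000-line event is a sensible thing to push through the pipeline, which is part of why I'm asking where the merge belongs.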
Any help would be appreciated.