I need help breaking down some data. This is the data being sent:
{
  "timestamp": "2020-11-17T19:02:19.352Z",
  "sequence": 2460137,
  "deviceName": "OKUMA.MachiningCenterMA650-EAST",
  "deviceUUID": "OKUMA.MachiningCenterMA650-EAST.190056",
  "componentId": "Mp1",
  "dataItemId": "Mp1ProgramHeader",
  "Events": {
    "ProgramHeader": {
      "name": "p1ProgramHeader",
      "@@data": "(CIMATRON E13)( FILE NAME:24625_DET11B_BSH)( OKUMA PROGRAM )( rob.mank )( Monday November 9, 2020 - 1:22:43 PM )(slab top)( TOOL NAME: 2.0 INGER .06R 4.0 )( TOOL DIAMETER......: 2. )"
    }
  }
}
Here is my logstash.conf:
input {
  file {
    start_position => "beginning"
    path => "/home/eric/logstash-csv/MTConnect-OKUMA.test.log"
    codec => "json"
    sincedb_path => "/dev/null"
  }
}
filter {
  json {
    source => "message"
  }
  if [Events][ProgramHeader][@@data] =~ /\(CIMATRON E13\)/ {
    grok {
      match => [ "\((?<custom_field-1>[^)]+)\)\((?<custom_field-2>[^)]+)\)\((?<custom_field-3>[^)]+)\)\((?<custom_field-4>[^)]+)\)\((?<custom_field-5>[^)]+)\)\((?<custom_field-6>[^)]+)\)\((?<custom_field-7>[^)]+)\)\((?<custom_field-8>[^)]+)\)" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "mt-18"
  }
  stdout {}
}
Basically, if Events.ProgramHeader.@@data contains (CIMATRON E13), I want to grok that data (the value of Events.ProgramHeader.@@data) into individual fields. When I run my grok pattern through the Grok Debugger it seems to work fine, but when I run it in Logstash I get the following error:
filter {
grok {
# This setting must be a hash
# This field must contain an even number of items, got 1
Does my grok need to be formatted differently? Am I going down the wrong path with this?
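Reading the error, I'm guessing the match setting wants a hash that names the source field, rather than a bare array containing only the pattern. Something like the sketch below, though I'm not sure about the [a][b][c] syntax for the nested @@data key, and I also switched the hyphens in my capture names to underscores in case hyphens aren't valid in named groups:

```
filter {
  grok {
    # guess: match as a hash of { source_field => pattern }, with the
    # nested field written in Logstash's [level1][level2][level3] form
    match => {
      "[Events][ProgramHeader][@@data]" => "\((?<custom_field_1>[^)]+)\)\((?<custom_field_2>[^)]+)\)\((?<custom_field_3>[^)]+)\)\((?<custom_field_4>[^)]+)\)\((?<custom_field_5>[^)]+)\)\((?<custom_field_6>[^)]+)\)\((?<custom_field_7>[^)]+)\)\((?<custom_field_8>[^)]+)\)"
    }
  }
}
```

Is that the right direction?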