Elastic Search -> Ingest Node -> processor -> grok


#1

Hi,

I set up the following pipeline in ES 5.0.

PUT _ingest/pipeline/ngb
{
  "description" : "NGB PIPELINE",
  "processors" : [
    {"set" : {"field": "_index","value": "ngb"}},
    {"grok": {"field": "message",
      "patterns": [""]
    }}
    
  ]
}
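
For comparison, a pipeline with an explicit grok pattern would look like the following. This is only a sketch: the field names status and code, and the assumed message layout status=... code=..., are illustrative and not from the original post.

PUT _ingest/pipeline/ngb
{
  "description" : "NGB PIPELINE",
  "processors" : [
    {"set" : {"field": "_index", "value": "ngb"}},
    {"grok": {
      "field": "message",
      "patterns": ["status=%{WORD:status} code=%{NUMBER:code}"]
    }}
  ]
}

With an empty pattern string, as in the pipeline above, grok has nothing to match against and no fields are extracted.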

In filebeat I have the following config:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["ipAddr:9200"]
  pipeline: ngb

Events are being forwarded to ES, but the key="value" pairs of the event are not ingested according to the ngb index data mapping.

This is not the case when Logstash is used: events are parsed and ingested according to the ngb index data mapping.

For reference this is the logstash.conf:

output {
    elasticsearch {
        hosts => [ "10.146.84.65:9200" ]
        index => ["ngb"]
#       "fielddata" : { "format" : "disabled" }
    }
}
The question is:
In the ingest pipeline processor do I have to explicitly define the pattern?

Thanks,
Lp


(Tal Levy) #2

Events are being forwarded to ES, but the key="value" pairs of the event are not ingested according to the ngb index data mapping.

Do you mean to say that the events from Filebeat are not being indexed into the index named ngb, or that the pipeline [ngb] is not being applied?


#3

The events are indexed into the indicated index. The issue is that the grok processor has to be told explicitly how to extract the key/value pairs. This is not the case in Logstash: if the event contains key=value, Logstash will extract the value automatically. I was expecting the same behavior from the ingest node grok processor.
The lack of this capability imposes some operational issues, and I have not been able to find a comparable workaround.


#4

The Logstash kv filter does exactly what I was looking for. Unfortunately, there is no equivalent processor in the ingest node.
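
For reference, the Logstash side needs only a minimal filter block; with no options set, kv parses the message field and splits pairs on whitespace and = by default:

filter {
    kv { }
}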

