"[pipeline] required property is missing"

I need to monitor the traffic between two servers. I'm trying to use Packetbeat + Elasticsearch for that. I keep getting warnings in the log:

client.go:465: WARN Can not index event (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse [source]","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:186"}}

I followed the advice from here: Conditions, Pipelines and GeoIP - #2 by ruflin , took a JSON event from the debug log, posted it to the server, and got a 400 error:

    {
      "error":{
        "root_cause":[
          {
            "type": "parse_exception",
            "reason": "[pipeline] required property is missing",
            "header":{"property_name": "pipeline"}
          }
        ],
        "type": "parse_exception",
        "reason": "[pipeline] required property is missing",
        "header":{"property_name": "pipeline"}
      },
      "status": 400
    }

What is the "pipeline" property? I haven't found it in the reference documentation.

Elasticsearch 6.1.0, Packetbeat 6.1.0 and 6.1.1, on 64-bit Windows.

Can you please share the Packetbeat configuration that you are using, as well as the JSON event from the debug log?

Example of the posted data:

{
  "@timestamp": "2018-01-15T15:20:50.500Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "dest": {
    "port": 137,
    "mac": "ff:ff:ff:ff:ff:ff",
    "ip": "xx.xx.xx.255"
  },
  "last_time": "2018-01-15T14:43:50.346Z",
  "type": "flow",
  "flow_id": "EQIA////DP////8U//8BAAEAIYVi5fT///////9SYcl5UmHJ/4kAiQA",
  "final": false,
  "source": {
    "stats": {
      "net_bytes_total": 736,
      "net_packets_total": 8
    },
    "mac": "11:22:33:44:55:66",
    "ip": "xx.xx.xx.121",
    "port": 137
  },
  "start_time": "2018-01-15T14:43:41.010Z",
  "transport": "udp",
  "beat": {
    "version": "6.1.1",
    "name": "lab217",
    "hostname": "lab217"
  }
}

Configuration, packetbeat.yml:

packetbeat.interfaces.device: 0
packetbeat.flows:
  timeout: 30s
  period: 10s

packetbeat.protocols:
- type: icmp
  enabled: false
- type: amqp
  enabled: false
- type: cassandra
  enabled: false
- type: dns
  enabled: false
- type: http
  enabled: true
  ports: [7189]
- type: memcache
  enabled: false
- type: mysql
  enabled: false
- type: pgsql
  enabled: false
- type: redis
  enabled: false
- type: thrift
  enabled: false
- type: mongodb
  enabled: false
- type: nfs
  enabled: false
- type: tls
  enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.template.name: "packetbeat"
setup.template.fields: "fields.yml"
setup.template.overwrite: false

setup.dashboards.enabled: true

setup.kibana:
  host: "xx.xx.xx.247:5601"

output.elasticsearch:
  hosts: ["xx.xx.xx.247:9200"]

logging.level: debug

Other configs are default.

Since you aren't using any pipelines in your Packetbeat config, that can't be the issue.
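(For context: the "pipeline" property in that error refers to an Elasticsearch ingest pipeline. A Beat only sets it on bulk requests when one is configured explicitly, roughly like this — the pipeline name below is just an illustration, not something in your config:

output.elasticsearch:
  hosts: ["xx.xx.xx.247:9200"]
  pipeline: "my-ingest-pipeline"

Your config has no such setting, which is why the error is surprising.)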

The error you are seeing indicates some kind of issue with the data arriving at Elasticsearch.

  • Is the data from Packetbeat to ES passing through any proxies, reverse proxies, WAFs?
  • Could you tcpdump the HTTP traffic on port 9200 between Packetbeat and ES? I think this is the best way to observe the actual JSON object leaving Packetbeat and check it for validity. The logged JSON event is not the same as what gets sent to ES.
    tcpdump -w http-to-es.pcap -i eth0 tcp port 9200

Then when the error occurs you can open the PCAP in Wireshark and find the associated _bulk POST request containing the JSON.
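A Wireshark display filter along these lines should narrow it down (assuming plain HTTP on port 9200, no TLS):

tcp.port == 9200 && http.request.method == "POST" && http.request.uri contains "_bulk"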

Is the data from Packetbeat to ES passing through any proxies, reverse proxies, WAFs?

No.

Could you tcpdump the HTTP traffic on port 9200 between Packetbeat and ES?

No. Besides, there are no Linux machines on that network. Can PB and ES logs help?

There's nothing in Packetbeat today that logs that data. We could introduce a new debug selector for Beats that logs the raw data being sent to ES, but that would be in a future version.

You could run Wireshark on Windows to capture the traffic. Or run another instance of Packetbeat that captures only HTTP traffic on 9200 and writes the output to a file.

run another instance of Packetbeat that captures only HTTP traffic on 9200 and writes the output to a file.

What do I need to capture? Only the packets with source 217 and destination 247? Do I need the replies? Do I need packets from 217 with destination 255?

Update: I ran Packetbeat on the Elasticsearch machine. I filtered packets by the IPs mentioned above:

processors.0.drop_event.when:
  and:
    - or:
        - not.equals.source.ip: xx.xx.xx.217
        - not.equals.dest.ip: xx.xx.xx.247
    - or:
        - not.equals.source.ip: xx.xx.xx.247
        - not.equals.dest.ip: xx.xx.xx.217

and dumped its output to file with "output.file.enabled: true".
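For completeness, the file output looked roughly like this (the path and filename are illustrative, not my exact values):

output.file:
  enabled: true
  path: "C:/packetbeat/output"
  filename: packetbeat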

This is what I got:

{
  "@timestamp": "2018-01-18T10:18:50.014Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "source": {
    "mac": "aa:bb:cc:dd:ee:ff",
    "ip": "xx.xx.xx.217",
    "port": 61029,
    "stats": {
      "net_packets_total": 19,
      "net_bytes_total": 25137
    }
  },
  "last_time": "2018-01-18T10:18:41.470Z",
  "type": "flow",
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "beat": {
    "name": "lab247",
    "hostname": "lab247",
    "version": "6.1.1"
  },
  "final": false,
  "transport": "tcp",
  "dest": {
    "mac": "00:11:22:33:44:55",
    "ip": "xx.xx.xx.247",
    "port": 9200,
    "stats": {
      "net_packets_total": 4,
      "net_bytes_total": 854
    }
  },
  "start_time": "2018-01-18T10:18:40.969Z"
}
{
  "@timestamp": "2018-01-18T10:19:00.048Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "type": "flow",
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "transport": "tcp",
  "beat": {
    "name": "lab247",
    "hostname": "lab247",
    "version": "6.1.1"
  },
  "dest": {
    "stats": {
      "net_bytes_total": 2448,
      "net_packets_total": 11
    },
    "mac": "00:11:22:33:44:55",
    "ip": "xx.xx.xx.247",
    "port": 9200
  },
  "last_time": "2018-01-18T10:18:51.547Z",
  "final": false,
  "source": {
    "mac": "aa:bb:cc:dd:ee:ff",
    "ip": "xx.xx.xx.217",
    "port": 61029,
    "stats": {
      "net_packets_total": 53,
      "net_bytes_total": 70520
    }
  },
  "start_time": "2018-01-18T10:18:40.969Z"
}
{
  "@timestamp": "2018-01-18T10:19:10.043Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "final": false,
  "transport": "tcp",
  "source": {
    "stats": {
      "net_packets_total": 100,
      "net_bytes_total": 134536
    },
    "mac": "aa:bb:cc:dd:ee:ff",
    "ip": "xx.xx.xx.217",
    "port": 61029
  },
  "dest": {
    "port": 9200,
    "stats": {
      "net_packets_total": 28,
      "net_bytes_total": 4771
    },
    "mac": "00:11:22:33:44:55",
    "ip": "xx.xx.xx.247"
  },
  "start_time": "2018-01-18T10:18:40.969Z",
  "type": "flow",
  "beat": {
    "name": "lab247",
    "hostname": "lab247",
    "version": "6.1.1"
  },
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "last_time": "2018-01-18T10:19:01.520Z"
}
{
  "@timestamp": "2018-01-18T10:19:20.044Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "start_time": "2018-01-18T10:18:40.969Z",
  "last_time": "2018-01-18T10:19:11.515Z",
  "type": "flow",
  "final": false,
  "transport": "tcp",
  "dest": {
    "port": 9200,
    "stats": {
      "net_bytes_total": 6840,
      "net_packets_total": 42
    },
    "mac": "00:11:22:33:44:55",
    "ip": "xx.xx.xx.247"
  },
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "source": {
    "port": 61029,
    "stats": {
      "net_bytes_total": 188232,
      "net_packets_total": 140
    },
    "mac": "aa:bb:cc:dd:ee:ff",
    "ip": "xx.xx.xx.217"
  },
  "beat": {
    "hostname": "lab247",
    "version": "6.1.1",
    "name": "lab247"
  }
}
{
  "@timestamp": "2018-01-18T10:19:30.049Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "dest": {
    "ip": "xx.xx.xx.247",
    "port": 9200,
    "stats": {
      "net_packets_total": 59,
      "net_bytes_total": 9053
    },
    "mac": "00:11:22:33:44:55"
  },
  "type": "flow",
  "final": false,
  "beat": {
    "name": "lab247",
    "hostname": "lab247",
    "version": "6.1.1"
  },
  "source": {
    "mac": "aa:bb:cc:dd:ee:ff",
    "ip": "xx.xx.xx.217",
    "port": 61029,
    "stats": {
      "net_packets_total": 180,
      "net_bytes_total": 241928
    }
  },
  "start_time": "2018-01-18T10:18:40.969Z",
  "last_time": "2018-01-18T10:19:21.515Z",
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "transport": "tcp"
}
{
  "@timestamp": "2018-01-18T10:19:40.044Z",
  "@metadata": {
    "beat": "packetbeat",
    "type": "doc",
    "version": "6.1.1"
  },
  "flow_id": "EQQA////DP//////FP8BAAEADCnCS/4ADCnNNtFSYcn3UmHJ2fAjZe4",
  "beat": {
    "name": "lab247",
    "hostname": "lab247",
    "version": "6.1.1"
  },
  "source": {
    "ip": "xx.xx.xx.217",
    "port": 61029,
    "stats": {
      "net_packets_total": 220,
      "net_bytes_total": 295549
    },
    "mac": "aa:bb:cc:dd:ee:ff"
  },
  "start_time": "2018-01-18T10:18:40.969Z",
  "type": "flow",
  "last_time": "2018-01-18T10:19:31.519Z",
  "final": false,
  "transport": "tcp",
  "dest": {
    "port": 9200,
    "stats": {
      "net_packets_total": 74,
      "net_bytes_total": 11160
    },
    "mac": "00:11:22:33:44:55",
    "ip": "xx.xx.xx.247"
  }
}

While troubleshooting another case I saw an error similar to your original one. The problem was caused by an invalid index template. To fix it I did the following:

  1. Stop all Beats writing to the index.
  2. Delete the current index.
  3. Delete the matching templates (for me it was DELETE _template/packetbeat* in the dev console).
  4. Start the Beat.
  5. It will automatically install its index template.
  6. It will then send data and create a new index with a mapping based on the index template.
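In the Kibana dev console the two delete steps would be roughly (adjust the index pattern to your setup):

DELETE packetbeat-*
DELETE _template/packetbeat*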

Deleted the current index.

How do I do that? Will that do?

curl -X DELETE xx.xx.xx.247:9200/packetbeat*

Update:
I deleted /packetbeat* and _template/packetbeat* and errors seem to have stopped. Thanks.

Earlier, when this started, I tried "DELETE xx.xx.xx.247:9200/*" but it didn't help. Does that mean the templates were not deleted?

Also, did you find what causes the template corruption? Is there a way for me to avoid that?

Right, that would delete the index, but not the template.

In the case I was investigating there was a mapping conflict. The index and the template had been created with an older version of Beats, and upon upgrading to the new Beats version there was a mapping conflict for one of the fields (it used to be a scalar but was changed to an object in the new version). [This is one reason why, since 6.0, Beats include the version number in the index name.]
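If you want to check for such conflicts yourself, you can compare the installed template against the live index mapping, e.g.:

curl "xx.xx.xx.247:9200/_template/packetbeat*?pretty"
curl "xx.xx.xx.247:9200/packetbeat-*/_mapping?pretty"

A field that is an object in one and a scalar (keyword, long, etc.) in the other points to this kind of conflict.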

Yes, I did upgrade from an earlier version and didn't delete the old templates. I didn't get errors for other Beats, or even when Packetbeat was sending data to localhost.

What else should I delete for a future update?

I think this is only a problem with 5.x to 6.x upgrades. https://www.elastic.co/guide/en/beats/libbeat/current/upgrading-5-to-6.html#upgrading-to-5.6

For upgrades between 6.x minor releases you shouldn't have any problems, because we have renamed templates and indices to include version information.
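You can see the version-stamped names with the _cat APIs, e.g.:

curl "xx.xx.xx.247:9200/_cat/indices/packetbeat-*?v"
curl "xx.xx.xx.247:9200/_cat/templates/packetbeat*?v"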

I'm pretty sure I upgraded from 6.0.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.