Using Packetbeat to extract the user's IP

Hello,

I'm trying to get Packetbeat to add the user's IP address to the incoming payload and then submit that to ES. I'm running ES and I can see the data that's being submitted (it's all correct). I then configured and ran Packetbeat, but the data submitted afterwards does not include the user's IP address. I checked with Kibana to make sure I wasn't missing anything; the user's IP is not included.

Is this even possible? Do I have to use Logstash between ES and Packetbeat?

My app submits HTTP requests to http://10.223.24.180:9200 (local host).

One thing that I'm confused about: should my app submit to the ES port, or should it submit to an HTTP port that Packetbeat listens on, which then forwards the data to the ES port?

My ES config is only this:
network.host: 10.223.24.180
http.port: 9200

My Packetbeat config:
packetbeat.interfaces.device: 0
packetbeat.interfaces.with_vlans: true
packetbeat.interfaces.type: pcap

packetbeat.flows:
  timeout: 30s
  period: 10s
  enabled: false

packetbeat.protocols.icmp:
  enabled: false

packetbeat.protocols.amqp:
  ports: [5672]
  enabled: false

packetbeat.protocols.cassandra:
  ports: [9042]
  enabled: false

packetbeat.protocols.dns:
  ports: [53]
  enabled: false
  include_authorities: true
  include_additionals: true

packetbeat.protocols.http:
  ports: [80, 8080, 8081, 5000, 8002]
  send_request: true
  send_response: true
  send_all_headers: true
  include_body_for: ["text/html", "application/json"]

packetbeat.protocols.memcache:
  ports: [11211]
  enabled: false

packetbeat.protocols.mysql:
  ports: [3306]
  enabled: false

packetbeat.protocols.pgsql:
  ports: [5432]
  enabled: false

packetbeat.protocols.redis:
  ports: [6379]
  enabled: false

packetbeat.protocols.thrift:
  ports: [9090]
  enabled: false

packetbeat.protocols.mongodb:
  ports: [27017]
  enabled: false

packetbeat.protocols.nfs:
  ports: [2049]
  enabled: false

output.file:
  path: "/tmp/packetbeat"
  filename: packetbeat

output.elasticsearch:
  hosts: ["http://10.223.24.180:9200"]
  protocol: "http"
  path: "/elasticsearch"
  template.enabled: true
  template.path: "packetbeat.template-es2x.json"
  template.overwrite: false
  processors:
  - include_fields:
      fields:
        - ip
        - client_ip

This is the debug output for Packetbeat, as it starts:
2017/02/15 18:39:41.292095 beat.go:267: INFO Home path: [C:\Program Files\packetbeat] Config path: [C:\Program Files\packetbeat] Data path: [C:\Program Files\packetbeat\data] Logs path: [C:\Program Files\packetbeat\logs]
2017/02/15 18:39:41.292095 beat.go:177: INFO Setup Beat: packetbeat; Version: 5.2.0
2017/02/15 18:39:41.292095 logp.go:219: INFO Metrics logging every 30s
2017/02/15 18:39:41.292095 file.go:45: INFO File output path set to: /tmp/packetbeat
2017/02/15 18:39:41.292095 file.go:46: INFO File output base filename set to: packetbeat
2017/02/15 18:39:41.292095 file.go:49: INFO Rotate every bytes set to: 10485760
2017/02/15 18:39:41.292095 file.go:53: INFO Number of files set to: 7
2017/02/15 18:39:41.292095 outputs.go:106: INFO Activated file as output plugin.
2017/02/15 18:39:41.292095 output.go:167: INFO Loading template enabled. Reading template file: C:\Program Files\packetbeat\packetbeat.template-es2x.json
2017/02/15 18:39:41.293095 output.go:178: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: C:\Program Files\packetbeat\packetbeat.template-es2x.json
2017/02/15 18:39:41.294096 client.go:120: INFO Elasticsearch url: http://10.223.24.180:9200/elasticsearch
2017/02/15 18:39:41.294096 outputs.go:106: INFO Activated elasticsearch as output plugin.
2017/02/15 18:39:41.294096 publish.go:234: DBG Create output worker
2017/02/15 18:39:41.295096 publish.go:234: DBG Create output worker
2017/02/15 18:39:41.295096 publish.go:276: DBG No output is defined to store the topology. The server fields might not be filled.
2017/02/15 18:39:41.295096 publish.go:291: INFO Publisher name: bobpur-3358
2017/02/15 18:39:41.297098 async.go:63: INFO Flush Interval set to: -1s
2017/02/15 18:39:41.297098 async.go:64: INFO Max Bulk Size set to: -1
2017/02/15 18:39:41.297098 async.go:63: INFO Flush Interval set to: 1s
2017/02/15 18:39:41.297098 async.go:64: INFO Max Bulk Size set to: 50
2017/02/15 18:39:41.297098 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=50)
2017/02/15 18:39:41.297098 procs.go:79: INFO Process matching disabled
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: cassandra
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: mysql
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: thrift
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: pgsql
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: redis
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: amqp
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: dns
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: http
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: memcache
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: mongodb
2017/02/15 18:39:41.298098 protos.go:89: INFO registered protocol plugin: nfs
2017/02/15 18:39:41.298098 protos.go:111: INFO Protocol plugin 'redis' disabled by config
2017/02/15 18:39:41.298098 protos.go:111: INFO Protocol plugin 'cassandra' disabled by config
2017/02/15 18:39:41.298098 protos.go:111: INFO Protocol plugin 'pgsql' disabled by config
2017/02/15 18:39:41.298098 protos.go:111: INFO Protocol plugin 'mysql' disabled by config
2017/02/15 18:39:41.298098 protos.go:111: INFO Protocol plugin 'nfs' disabled by config
2017/02/15 18:39:41.299099 protos.go:111: INFO Protocol plugin 'memcache' disabled by config
2017/02/15 18:39:41.299099 protos.go:111: INFO Protocol plugin 'thrift' disabled by config
2017/02/15 18:39:41.300099 protos.go:111: INFO Protocol plugin 'amqp' disabled by config
2017/02/15 18:39:41.300099 protos.go:111: INFO Protocol plugin 'dns' disabled by config
2017/02/15 18:39:41.300099 protos.go:111: INFO Protocol plugin 'mongodb' disabled by config
2017/02/15 18:39:41.301100 sniffer.go:270: DBG BPF filter: 'tcp port 80 or tcp port 8080 or tcp port 8081 or tcp port 5000 or tcp port 8002'
2017/02/15 18:39:41.313109 sniffer.go:145: INFO Resolved device index 0 to device: \Device\NPF_{0986E0A4-99DD-4410-9A76-B4E9F2AA8AE1}
2017/02/15 18:39:41.314109 sniffer.go:156: DBG Sniffer type: pcap device: \Device\NPF_{0986E0A4-99DD-4410-9A76-B4E9F2AA8AE1}
2017/02/15 18:39:41.318113 beat.go:207: INFO packetbeat start running.
2017/02/15 18:39:41.818828 sniffer.go:322: DBG Interrupted

It keeps repeating that last line until I stop it, with the occasional
2017/02/15 18:40:11.292374 logp.go:232: INFO No non-zero metrics in the last 30s

Thank you for reading.

What are you trying to monitor with Packetbeat?

Packetbeat adds the client's IP as the client_ip field in each event that it sends. So if, for example, you are monitoring traffic to a listening web server, that is where you will find the client IP.

Packetbeat is a passive monitor. It does not open any listening sockets, nor does it proxy any traffic. So if your app needs to communicate with Elasticsearch, you should make it connect to port 9200 on the Elasticsearch server.

I'm really only interested in the user's IP address. So, I don't really need the monitoring aspect.

I guess this is where I'm having trouble. If Packetbeat is processing the incoming packets, I'm not seeing the end result.

Ah, I see. Thanks for that.

And thank you for taking the time to reply.

With that information, I guess that means my ES config is fine (because I'm seeing the expected data).

Maybe the data is there but I'm not querying for it properly? Since I didn't specify an index, Packetbeat should be using packetbeat-2017.02.15 as the index (I have records from yesterday). If I search ES for that index, I'm not getting any results back (using Postman). In Kibana, when I select Discover and enter packetbeat-*, 0 results are returned.

So if I understand correctly, you have some client application(s) that communicate with Elasticsearch, and you want to log their IP addresses?

Try configuring Packetbeat to monitor traffic on port 9200.

packetbeat.protocols.http:
  ports: [9200]

This isn't going to enrich the data coming from your application as it goes into Elasticsearch. You will have to correlate the two events to determine the client IP, so you may want to record more information from the request than just client_ip.

An alternative would be a custom proxy between your app and ES that adds this info to the data before indexing it.
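A minimal sketch of that proxy idea in Python, purely for illustration: the proxy port (8200) and the injected field name ("client_ip") are assumptions, not anything Packetbeat or ES defines, and a production version would need bulk-API and error handling.

```python
# Hypothetical sketch only: a tiny pass-through proxy that stamps the caller's
# IP into each JSON document before forwarding it to ES.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

ES_URL = "http://10.223.24.180:9200"  # the ES address from this thread

def enrich(body: bytes, client_ip: str) -> bytes:
    """Inject the caller's IP into a JSON document before indexing."""
    doc = json.loads(body)
    doc["client_ip"] = client_ip  # assumed field name
    return json.dumps(doc).encode()

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = enrich(self.rfile.read(length), self.client_address[0])
        req = Request(ES_URL + self.path, data=body,
                      headers={"Content-Type": "application/json"},
                      method="POST")
        with urlopen(req) as resp:
            payload = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run: point the app at port 8200 instead of 9200, e.g.
# HTTPServer(("0.0.0.0", 8200), ProxyHandler).serve_forever()
```

The app would then POST its documents to the proxy's port instead of 9200, and ES would receive them with the extra field already present.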

Thanks Andrew. Yes, that is exactly what I'm trying to do.
Hmmm ... I added that to my packetbeat config.

packetbeat.protocols.http:
  #ports: [80, 8080, 8081, 5000, 8002]
  ports: [9200]

I'm not seeing anything different, but I guess that's because ...
It won't enrich the data? I see. That's what I had been expecting. :smiley: From reading the docs, I got the impression that is what would happen when these lines were added to my Packetbeat config:

output.elasticsearch:
  processors:
  - include_fields:
      fields:
        - ip
        - client_ip

The data that comes from my app creates a unique _id (which is also stored in the data as SessionID). So I think that'll be enough to correlate data from Packetbeat with data in ES.
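A hedged sketch of what that correlation could look like in Python: it assumes send_request: true, so each Packetbeat event carries the raw request (headers plus JSON body) in its request field, and that the posted body contains the SessionID field mentioned above. The function name and event shape are illustrative, not a Packetbeat API.

```python
# Hedged sketch: build a SessionID -> client_ip map from captured Packetbeat
# HTTP events, so app documents in ES can be joined to the capturing side.
import json

def correlate(packetbeat_events):
    """Map each request body's SessionID to the event's client_ip."""
    session_to_ip = {}
    for event in packetbeat_events:
        raw = event.get("request", "")
        # the JSON body follows the blank line after the HTTP headers
        _, _, body = raw.partition("\r\n\r\n")
        try:
            doc = json.loads(body)
        except ValueError:
            continue  # non-JSON or truncated body; skip this event
        if "SessionID" in doc and "client_ip" in event:
            session_to_ip[doc["SessionID"]] = event["client_ip"]
    return session_to_ip
```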

Awesome, thank you for that suggestion.

Do you know if Logstash could enrich the data if I use that instead of Packetbeat?

Why do you have output.elasticsearch.path: "/elasticsearch" configured? Is there a reverse proxy in front of ES? If not, remove that.

Are you sure that you are listening on the correct device? Use the command below to see what device "0" maps to.

.\packetbeat.exe -devices

Try running packetbeat on the CLI with the following command and simplified configuration.

.\packetbeat.exe -c packetbeat.yml -e -d "*"

packetbeat.interfaces.device: 0
packetbeat.interfaces.with_vlans: true

packetbeat.protocols.http:
  ports: [9200]
  send_request : true
  send_response : true
  send_all_headers: true
  include_body_for: ["text/html", "application/json"]

output.elasticsearch:
  hosts: ["http://10.223.24.180:9200"]

While it's running, make a simple HTTP GET request to the ES cluster. You should see some JSON output from Packetbeat after it sees the HTTP response come back from ES.

After you have that working, you can check the ES index to see if any events from Packetbeat were written. Use a GET packetbeat-*/_count request from the Kibana console to see how many events are indexed in the packetbeat indices. Or use curl -XGET http://elasticsearch:9200/packetbeat-*/_count.

Then, with that working, you can add back your include_fields processor to trim down the data.
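One caution when adding it back: in Beats 5.x, processors is a top-level configuration section rather than a child of output.elasticsearch, so the filter from earlier in the thread would look something like this (a sketch, with the same field names as before):

```yaml
# top-level section, alongside (not inside) output.elasticsearch
processors:
- include_fields:
    # drops everything except the listed fields, plus the few that
    # Beats always keeps (e.g. @timestamp and type)
    fields:
      - ip
      - client_ip
```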

This is awesome. Thank you Andrew.
I will follow these steps and reply back.
Thanks!

Removed /elasticsearch.
I tried numerous things before posting, and that was one of them. I'm actually not even sure about the VLANs setting, but I was trying that too.

Running .\packetbeat.exe -devices returned 1 device.
I checked that against what's listed in the Device Manager and it matches my network adapter.

I started Packetbeat with the minimal config. I didn't see any output after making a GET request. I tried a few (through my app and through Postman).

That didn't work, but I went to the Kibana console anyway. Running the count query returned:

{
  "count": 0,
  "_shards": {
    "total": 0,
    "successful": 0,
    "failed": 0
  }
}

But that's probably because I didn't see any output.

However, I did find something interesting. I'm currently running ES locally, for testing purposes. I have another instance of ES running on a different server that is doing the same thing (receiving communications from my app). I wanted to get Packetbeat working locally before running it on the production server.
I did a simple HTTP GET on the remote ES instance, and I did see output while Packetbeat was running. I went to Kibana and ran the count query you suggested. That returned values. So I went to the Discover tab, and that returned results as well. I looked through the data, and it contains the IP info that I was expecting.

Whew! OK, so I guess this means I've made an error setting up Packetbeat to run locally? I thought I read something about a loopback issue. Maybe that's what I'm running into.

If you were making requests to localhost over the loopback interface, then Packetbeat would not see that traffic unless you configured it to monitor the loopback device. And in order to monitor the loopback device on Windows, you need to use the Npcap driver instead of WinPcap, because Npcap supports capturing packets on loopback.

Very cool. I will download that and give it a go.
Thank you very much for your help, Andrew. I really appreciate it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.