Getting packetbeat to pull MySQL data


(Shaun Arman) #1

Hi all, and thanks in advance. I am having issues getting packetbeat to pull some MySQL data.
I have reviewed several threads about this same type of issue, but none have quite matched mine or helped me.
In the default packetbeat dashboard I can get the following items working:

MySQL response times percentiles
MySQL throughput

But that is the end of it. I can't get query data, read/write stats, or anything else.
Here is what I show for my available devices:

0: bond0 (No description available) (10.50.11.51 10.50.11.50 fe80::1a66:daff:fefc:d221)
1: eth1 (No description available) (Not assigned ip address)
2: eth2 (No description available) (Not assigned ip address)
3: any (Pseudo-device that captures on all interfaces) (Not assigned ip address)
4: lo (No description available) (127.0.0.1 127.0.0.2 ::1)
5: eth0 (No description available) (169.254.0.1)
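For reference, a device list like the one above can be printed with Packetbeat's own `devices` subcommand (assuming the packetbeat binary is on the PATH):

```shell
# List the interfaces Packetbeat can sniff on; the index or name shown
# is what goes into packetbeat.interfaces.device.
packetbeat devices
```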


(Shaun Arman) #2

Here is my packetbeat.yml:

packetbeat.interfaces.device: bond0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.interfaces.device: lo
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols:
- type: icmp
  enabled: true
- type: amqp
  ports: [5672]
- type: cassandra
  ports: [9042]
- type: dns
  ports: [53]
  include_authorities: false
  include_additionals: false
- type: http
  ports: [80, 8080, 8000, 5000, 8002]
  split_cookie: true
- type: memcache
  ports: [11211]
- type: mysql
  ports: [3306]
- type: pgsql
  ports: [5432]
- type: redis
  ports: [6379]
- type: thrift
  ports: [9090]
- type: mongodb
  ports: [27017]
- type: nfs
  ports: [2049]
- type: tls
  ports: [443]
packetbeat.procs:
  enabled: true
  monitored:
    - process: mysqld
      cmdline_grep: mysqld
    - process: snmpd
      cmdline_grep: snmp
    - process: jsvc
      cmdline_grep: jsvc
    - process: java
      cmdline_grep: java
    - process: dhcpd
      cmdline_grep: dhcpd
    - process: httpd2
      cmdline_grep: httpd2
setup.template.settings:
  index.number_of_shards: 3
name: KibanaServer
tags: [dev11]
fields:
  env: nonprod
setup.kibana:
  host: "10.7.6.21:5601"
  username: "elastic"
  password: "***********************"
output.logstash:
  hosts: ["10.7.6.21:5044"]

(Shaun Arman) #3

Regardless of MySQL activity, nothing populates in the Kibana dashboard.
In my logs, all I see is this:

[root] 513 packetbeat -e -d "publish" -N
2017/12/31 04:27:00.318259 beat.go:436: INFO Home path: [/usr/share/packetbeat] Config path: [/etc/packetbeat] Data path: [/var/lib/packetbeat] Logs path: [/var/log/packetbeat]
2017/12/31 04:27:00.318299 metrics.go:23: INFO Metrics logging every 30s
2017/12/31 04:27:00.318345 beat.go:443: INFO Beat UUID: cc8474d4-a14c-407e-8968-02bf1f10b76d
2017/12/31 04:27:00.318358 beat.go:203: INFO Setup Beat: packetbeat; Version: 6.1.0
2017/12/31 04:27:00.318391 module.go:37: INFO Dry run mode. All output types except the file based one are disabled.
2017/12/31 04:27:00.319788 logger.go:18: DBG [publish] start pipeline event consumer
2017/12/31 04:27:00.319913 module.go:76: INFO Beat name: node1.cluster1.dev11
2017/12/31 04:27:00.320409 procs.go:75: INFO Process matching enabled
2017/12/31 04:27:00.323292 beat.go:276: INFO packetbeat start running.
2017/12/31 04:27:30.318632 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30001 beat.memstats.gc_next=37537856 beat.memstats.memory_alloc=29247136 beat.memstats.memory_total=132394184 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:28:00.318622 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=37458032 beat.memstats.memory_alloc=25372000 beat.memstats.memory_total=246417040 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:28:30.318594 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=38024608 beat.memstats.memory_alloc=21117520 beat.memstats.memory_total=360323080 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:29:00.318607 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=37834112 beat.memstats.memory_alloc=32378032 beat.memstats.memory_total=474285120 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:29:30.318594 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=38026384 beat.memstats.memory_alloc=27174992 beat.memstats.memory_total=588218456 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:30:00.318600 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=37540176 beat.memstats.memory_alloc=21772816 beat.memstats.memory_total=702117504 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
^C2017/12/31 04:30:17.340505 packetbeat.go:218: INFO Packetbeat send stop signal
2017/12/31 04:30:30.318607 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=37500560 beat.memstats.memory_alloc=34090024 beat.memstats.memory_total=816019216 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0
2017/12/31 04:31:00.318595 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=37876032 beat.memstats.memory_alloc=28581752 beat.memstats.memory_total=929967880 libbeat.config.module.running=0 libbeat.pipeline.clients=14 libbeat.pipeline.events.active=0

If I am missing key data, please let me know and I will add it. I am very new to this and trying to get a proof of concept up and running, as I really want my management to like what I have to show them.


(Shaun Arman) #4

I also get this error on the dashboard:


(Shaun Arman) #5

If I search for "select" in the packetbeat index, I do get the following document returned:

{
  "_index": "packetbeat-2017.12.31",
  "_type": "doc",
  "_id": "oC_VqmAB5wR3i1fzfKBw",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2017-12-31T04:30:05.478Z",
    "@version": "1",
    "responsetime": 0,
    "path": "e911.historical0_",
    "ip": "127.0.0.1",
    "client_proc": "",
    "proc": "",
    "bytes_in": 1510,
    "method": "SELECT",
    "tags": [
      "DEV11",
      "beats_input_raw_event"
    ],
    "bytes_out": 1704,
    "client_ip": "127.0.0.1",
    "status": "OK",
    "query": "select distinct historical0_.id as id1_19_, historical0_.clusterUUID as clusterU2_19_, historical0_.createdOn as createdO3_19_, historical0_.displayId as displayI4_19_, historical0_.incidentID as inciden23_19_, historical0_.lastRecommendSeq as lastReco5_19_, historical0_.locationAddress1 as location6_19_, historical0_.locationAddress2 as location7_19_, historical0_.locationCity as location8_19_, historical0_.locationCountry as location9_19_, historical0_.locationCounty as locatio10_19_, historical0_.locationHash as locatio11_19_, historical0_.locationState as locatio12_19_, historical0_.locationX as locatio13_19_, historical0_.locationY as locatio14_19_, historical0_.locationZip as locatio15_19_, historical0_.ppvDescription as ppvDesc16_19_, historical0_.ppvId as ppvId17_19_, historical0_.requestCategoryUUID as request18_19_, historical0_.requestProviderUUID as request19_19_, historical0_.status as status20_19_, historical0_.updatedOn as updated21_19_, historical0_.username as usernam22_19_ from Request historical0_ inner join Request_DispatchGroup dispatchgr1_ on historical0_.id=dispatchgr1_.request where historical0_.createdOn>='2017-12-29 00:00:00.471' and historical0_.createdOn<='2017-12-30 22:30:05.393' and (dispatchgr1_.dispatchGroupName in ('dispatchGroup1' , 'dispatchGroup10' , 'dispatchGroup2' , 'dispatchGroup3' , 'dispatchGroup4' , 'dispatchGroup5' , 'dispatchGroup6' , 'dispatchGroup7' , 'dispatchGroup8' , 'dispatchGroup9')) order by historical0_.createdOn desc limit 100",
    "client_port": 57415,
    "beat": {
      "version": "6.1.0",
      "name": "node1.cluster2.dev11",
      "hostname": "node1.cluster2.dev11.clj2.com"
    },
    "port": 3306,
    "server": "node1.cluster2.dev11",
    "client_server": "node1.cluster2.dev11",
    "type": "mysql",
    "mysql": {
      "num_fields": 23,
      "error_message": "",
      "affected_rows": 0,
      "iserror": false,
      "num_rows": 0,
      "error_code": 0,
      "insert_id": 0
    },
    "host": "node1.cluster2.dev11.clj2.com"
  },
  "fields": {
    "@timestamp": [
      "2017-12-31T04:30:05.478Z"
    ]
  },
  "highlight": {
    "method": [
      "@kibana-highlighted-field@SELECT@/kibana-highlighted-field@"
    ],
    "query": [
      "@kibana-highlighted-field@select@/kibana-highlighted-field@ distinct historical0_.id as id1_19_, historical0_.clusterUUID as clusterU2_19_, historical0_.createdOn as createdO3_19_, historical0_.displayId as displayI4_19_, historical0_.incidentID as inciden23_19_, historical0_.lastRecommendSeq as lastReco5_19_, historical0_.locationAddress1 as location6_19_, historical0_.locationAddress2 as location7_19_, historical0_.locationCity as location8_19_, historical0_.locationCountry as location9_19_, historical0_.locationCounty as locatio10_19_, historical0_.locationHash as locatio11_19_, historical0_.locationState as locatio12_19_, historical0_.locationX as locatio13_19_, historical0_.locationY as locatio14_19_, historical0_.locationZip as locatio15_19_, historical0_.ppvDescription as ppvDesc16_19_, historical0_.ppvId as ppvId17_19_, historical0_.requestCategoryUUID as request18_19_, historical0_.requestProviderUUID as request19_19_, historical0_.status as status20_19_, historical0_.updatedOn as updated21_19_, historical0_.username as usernam22_19_ from Request historical0_ inner join Request_DispatchGroup dispatchgr1_ on historical0_.id=dispatchgr1_.request where historical0_.createdOn>='2017-12-29 00:00:00.471' and historical0_.createdOn<='2017-12-30 22:30:05.393' and (dispatchgr1_.dispatchGroupName in ('dispatchGroup1' , 'dispatchGroup10' , 'dispatchGroup2' , 'dispatchGroup3' , 'dispatchGroup4' , 'dispatchGroup5' , 'dispatchGroup6' , 'dispatchGroup7' , 'dispatchGroup8' , 'dispatchGroup9')) order by historical0_.createdOn desc limit 100"
    ]
  },
  "sort": [
    1514694605478
  ]
}

So this proves that MySQL data is making it into packetbeat, right?


(Steffen Siering) #6
  • To date, you cannot configure multiple devices in packetbeat. That is, this config:
packetbeat.interfaces.device: bond0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.interfaces.device: lo
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100

basically becomes this:

packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100
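
If you really need to capture traffic on more than one interface at once, one possible workaround on Linux is the `any` pseudo-device that appears in your device list (entry 3 above). This is only a sketch; check the Packetbeat documentation for your version to confirm which sniffer types support it:

```yaml
# Sketch: capture on all interfaces via the Linux "any" pseudo-device
# (shown as entry 3 in the device list earlier in this thread).
packetbeat.interfaces.device: any
packetbeat.interfaces.snaplen: 1514
```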
  • when using the mysql client on the CLI (and even some libraries), a Unix socket is used by default. packetbeat requires the client to connect over TCP.
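
To rule the socket issue out, you can force the stock mysql CLI onto TCP; a minimal check (host, port, and user here are placeholders for your own values):

```shell
# Force a TCP connection instead of the default Unix socket.
# Connecting to 127.0.0.1 keeps the traffic on the lo interface.
mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u someuser -p
```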

  • the MySQL server is often configured to require TLS for all connections. packetbeat cannot process encrypted connections
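
Whether a given session is actually encrypted can be checked from the client side; a sketch using the standard `Ssl_cipher` status variable (an empty value means the session is plaintext, which is what packetbeat needs):

```shell
# Show the cipher of the current connection.
# An empty Value column means the session is unencrypted.
mysql -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"
```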

  • prepared statements are not supported yet. See the related PR: https://github.com/elastic/beats/pull/5663

  • yes, that document is a MySQL transaction captured by packetbeat. Is your dashboard working now? I have no idea what the Kibana error message is about. Which versions are you using?


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.