Debugging http.unmatched_responses and tcp.dropped_because_of_gaps


(Manav Kapoor) #1

Hello.

I have configured Packetbeat and currently have Packetbeat flows going into a remote monitoring cluster. However, I am having trouble getting HTTP logging sent to this same remote monitoring cluster and would really appreciate some assistance.
These are my settings in the config file (I only included the parts that relate directly to this issue):

packetbeat.interfaces.device: any
packetbeat.interfaces.bpf_filter: tcp port 9200

#========================== Transaction protocols =============================

packetbeat.protocols:
- type: http
  ports: [9200]

And I am receiving these logs:
{"level":"info","timestamp":"2018-10-09T23:05:27.615Z","logger":"monitoring","caller":"log/log.go:124","message":"Non-zero metrics in the last 30s","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":830,"time":830},"total":{"ticks":13890,"time":13890,"value":13890},"user":{"ticks":13060,"time":13060}},"info":{"ephemeral_id":"f168c886-7246-4eaa-9e5a-e627b12c18c8","uptime":{"ms":180011}},"memstats":{"gc_next":51770192,"memory_alloc":32270640,"memory_total":4809348680}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":283,"batches":7,"total":283},"read":{"bytes":63697},"write":{"bytes":185764}},"pipeline":{"clients":2,"events":{"active":0,"published":283,"total":283},"queue":{"acked":283}}},"system":{"load":{"1":0.08,"15":0.13,"5":0.11,"norm":{"1":0.005,"15":0.0081,"5":0.0069}}},"tcp":{"dropped_because_of_gaps":10},"xpack":{"monitoring":{"pipeline":{"events":{"published":3,"total":3},"queue":{"acked":3}}}}}}}
{"level":"info","timestamp":"2018-10-09T23:05:57.616Z","logger":"monitoring","caller":"log/log.go:124","message":"Non-zero metrics in the last 30s","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":990,"time":990},"total":{"ticks":16460,"time":16469,"value":16460},"user":{"ticks":15470,"time":15479}},"info":{"ephemeral_id":"f168c886-7246-4eaa-9e5a-e627b12c18c8","uptime":{"ms":210011}},"memstats":{"gc_next":50617088,"memory_alloc":35928832,"memory_total":5735327272,"rss":1036288}},"http":{"unmatched_responses":6},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":276,"batches":7,"total":276},"read":{"bytes":62141},"write":{"bytes":181266}},"pipeline":{"clients":2,"events":{"active":0,"published":276,"total":276},"queue":{"acked":276}}},"system":{"load":{"1":0.21,"15":0.14,"5":0.13,"norm":{"1":0.0131,"15":0.0088,"5":0.0081}}},"tcp":{"dropped_because_of_gaps":2},"xpack":{"monitoring":{"pipeline":{"events":{"published":3,"total":3},"queue":{"acked":3}}}}}}}

Can I get some tips for debugging these? Thanks.


(Andrew Kroh) #2

If you are on Linux, then I'd try using af_packet. It's usually a little more efficient.

packetbeat.interfaces.device: eth0
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.buffer_size_mb: 100

If you just want HTTP monitoring, then I would disable flows monitoring too. This removes the need for the custom BPF filter, since the auto-generated filter will then be the same as your custom one (tcp port 9200).

packetbeat.flows.enabled: false
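
Putting both suggestions together, the relevant part of packetbeat.yml might look like this (eth0 is an assumption; substitute the interface that actually carries your port 9200 traffic):

packetbeat.interfaces.device: eth0
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.buffer_size_mb: 100

packetbeat.flows.enabled: false

packetbeat.protocols:
- type: http
  ports: [9200]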

(Manav Kapoor) #3

I have applied your suggestions, and now tcp.dropped_because_of_gaps is a very large number. http.unmatched_responses is also high.

{"level":"info","timestamp":"2018-10-10T17:42:36.202Z","logger":"monitoring","caller":"log/log.go:124","message":"Non-zero metrics in the last 30s","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":160,"time":165},"total":{"ticks":2110,"time":2120,"value":2110},"user":{"ticks":1950,"time":1955}},"info":{"ephemeral_id":"e95515d3-b8af-49b4-ad76-51ba9186dc4d","uptime":{"ms":60009}},"memstats":{"gc_next":41192944,"memory_alloc":27027264,"memory_total":531861152,"rss":528384}},"http":{"unmatched_responses":63},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"system":{"load":{"1":0.12,"15":0.12,"5":0.1,"norm":{"1":0.0075,"15":0.0075,"5":0.0063}}},"tcp":{"dropped_because_of_gaps":9198},"xpack":{"monitoring":{"pipeline":{"events":{"published":3,"total":3},"queue":{"acked":3}}}}}}}
{"level":"info","timestamp":"2018-10-10T17:43:06.202Z","logger":"monitoring","caller":"log/log.go:124","message":"Non-zero metrics in the last 30s","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":210,"time":217},"total":{"ticks":3350,"time":3358,"value":3350},"user":{"ticks":3140,"time":3141}},"info":{"ephemeral_id":"e95515d3-b8af-49b4-ad76-51ba9186dc4d","uptime":{"ms":90009}},"memstats":{"gc_next":40469584,"memory_alloc":30402272,"memory_total":887802872}},"http":{"unmatched_responses":2},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"system":{"load":{"1":0.18,"15":0.13,"5":0.12,"norm":{"1":0.0113,"15":0.0081,"5":0.0075}}},"tcp":{"dropped_because_of_gaps":11916},"xpack":{"monitoring":{"pipeline":{"events":{"published":3,"total":3},"queue":{"acked":3}}}}}}}
{"level":"info","timestamp":"2018-10-10T17:43:36.202Z","logger":"monitoring","caller":"log/log.go:124","message":"Non-zero metrics in the last 30s","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":270,"time":272},"total":{"ticks":4330,"time":4336,"value":4330},"user":{"ticks":4060,"time":4064}},"info":{"ephemeral_id":"e95515d3-b8af-49b4-ad76-51ba9186dc4d","uptime":{"ms":120009}},"memstats":{"gc_next":41075264,"memory_alloc":20936872,"memory_total":1153167968,"rss":720896}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"system":{"load":{"1":0.25,"15":0.14,"5":0.14,"norm":{"1":0.0156,"15":0.0088,"5":0.0088}}},"tcp":{"dropped_because_of_gaps":8478},"xpack":{"monitoring":{"pipeline":{"events":{"published":3,"total":3},"queue":{"acked":3}}}}}}}

What could be the cause of this?


(Andrew Kroh) #4

Can you try adding back your custom BPF filter to the config?
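
That would look something like this, combined with the earlier af_packet settings (eth0 is still an assumption for your interface name):

packetbeat.interfaces.device: eth0
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.snaplen: 1514
packetbeat.interfaces.buffer_size_mb: 100
packetbeat.interfaces.bpf_filter: tcp port 9200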


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.