When starting packetbeat on the host servers, a new packetbeat index is not getting created.
This is what I see in the host logs:
INFO Connected to Elasticsearch version 5.5.0
INFO Trying to load template for client: http://x.x.x.x:9200
INFO Connected to Elasticsearch version 5.5.0
INFO Template already exists and will not be overwritten.
I ran curl against "_cat/indices?v&pretty" to list the indices, and I don't see a packetbeat index there.
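For reference, this is roughly the command I ran (x.x.x.x stands for one of the Elasticsearch hosts):
curl 'http://x.x.x.x:9200/_cat/indices?v&pretty'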
On the client servers, which are going to send logs to ELK, this is what I see:
INFO packet decode failed with: TCP data offset greater than packet length
INFO No non-zero metrics in the last 30s
INFO No non-zero metrics in the last 30s
INFO No non-zero metrics in the last 30s
INFO packet decode failed with: TCP data offset greater than packet length
ELK is configured as a cluster, and the packetbeat configuration has the output config below:
output:
  elasticsearch:
    hosts: ["1.1.1.1:9200", "2.2.2.2:9200", "3.3.3.3:9200"]
Can you share the complete packetbeat config and logs (with debug mode enabled)?
Please use the </> button (editor toolbar) to format configurations and logs so they are properly formatted and readable.
The "TCP data offset greater than packet length" message indicates that the TCP stream reconstruction is complaining about an invalid TCP packet length or an insufficient packet capture length (snaplen).
Here is my packetbeat configuration (including just the lines I have changed from the default config of packetbeat 5.5):
output:
  # Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200).
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["kibana0.xyz:9200", "kibana1.xyz:9200", "kibana2.xyz:9200"]

logging.level: debug
Here is the log output I got upon restart (debug enabled):
2017-07-13T15:34:30Z INFO Home path: [/usr/share/packetbeat] Config path: [/etc/packetbeat] Data path: [/var/lib/packetbeat] Logs path: [/var/log/packetbeat]
2017-07-13T15:34:30Z INFO Setup Beat: packetbeat; Version: 5.4.2
2017-07-13T15:34:30Z DBG Processors:
2017-07-13T15:34:30Z DBG Initializing output plugins
2017-07-13T15:34:30Z INFO Loading template enabled. Reading template file: /etc/packetbeat/packetbeat.template.json
2017-07-13T15:34:30Z INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/packetbeat/packetbeat.template-es2x.json
2017-07-13T15:34:30Z INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /etc/packetbeat/packetbeat.template-es6x.json
2017-07-13T15:34:30Z INFO Elasticsearch url: http://kibana0.xyz:9200
2017-07-13T15:34:30Z INFO Elasticsearch url: http://kibana1.xyz:9200
2017-07-13T15:34:30Z INFO Elasticsearch url: http://kibana2.xyz:9200
2017-07-13T15:34:30Z DBG configure maxattempts: 4
2017-07-13T15:34:30Z DBG load balancer: start client loop
2017-07-13T15:34:30Z DBG load balancer: start client loop
2017-07-13T15:34:30Z DBG ES Ping(url=http://kibana2.xyz:9200, timeout=1m30s)
2017-07-13T15:34:30Z DBG load balancer: start client loop
2017-07-13T15:34:30Z DBG ES Ping(url=http://kibana0.xyz:9200, timeout=1m30s)
2017-07-13T15:34:30Z DBG ES Ping(url=http://kibana1.xyz:9200, timeout=1m30s)
2017-07-13T15:34:30Z INFO Activated elasticsearch as output plugin.
2017-07-13T15:34:30Z DBG Create output worker
2017-07-13T15:34:30Z DBG No output is defined to store the topology. The server fields might not be filled.
2017-07-13T15:34:30Z INFO Publisher name: hostname
2017-07-13T15:34:30Z INFO Flush Interval set to: 1s
2017-07-13T15:34:30Z INFO Max Bulk Size set to: 50
2017-07-13T15:34:30Z DBG create bulk processing worker (interval=1s, bulk size=50)
2017-07-13T15:34:30Z INFO Process matching disabled
2017-07-13T15:34:30Z DBG Initializing protocol plugins
2017-07-13T15:34:30Z INFO registered protocol plugin: amqp
2017-07-13T15:34:30Z INFO registered protocol plugin: cassandra
2017-07-13T15:34:30Z INFO registered protocol plugin: dns
2017-07-13T15:34:30Z INFO registered protocol plugin: memcache
2017-07-13T15:34:30Z INFO registered protocol plugin: mysql
2017-07-13T15:34:30Z INFO registered protocol plugin: nfs
2017-07-13T15:34:30Z INFO registered protocol plugin: redis
2017-07-13T15:34:30Z INFO registered protocol plugin: http
2017-07-13T15:34:30Z INFO registered protocol plugin: mongodb
2017-07-13T15:34:30Z INFO registered protocol plugin: pgsql
2017-07-13T15:34:30Z INFO registered protocol plugin: thrift
2017-07-13T15:34:30Z DBG Initializing sniffer
2017-07-13T15:34:30Z DBG BPF filter: ''
2017-07-13T15:34:30Z DBG Sniffer type: pcap device: any
2017-07-13T15:34:30Z DBG Ping status code: 200
2017-07-13T15:34:30Z INFO Connected to Elasticsearch version 5.5.0
2017-07-13T15:34:30Z INFO Trying to load template for client: http://kibana0.xyz:9200
2017-07-13T15:34:30Z DBG HEAD http://kibana0.xyz:9200/_template/packetbeat <nil>
2017-07-13T15:34:30Z DBG Ping status code: 200
2017-07-13T15:34:30Z INFO Connected to Elasticsearch version 5.5.0
2017-07-13T15:34:30Z DBG Ping status code: 200
2017-07-13T15:34:30Z INFO Connected to Elasticsearch version 5.5.0
2017-07-13T15:34:30Z INFO Template already exists and will not be overwritten.
2017-07-13T15:34:30Z INFO Trying to load template for client: http://kibana1.xyz:9200
2017-07-13T15:34:30Z DBG HEAD http://kibana1.xyz:9200/_template/packetbeat <nil>
2017-07-13T15:34:30Z INFO Template already exists and will not be overwritten.
2017-07-13T15:34:30Z INFO Trying to load template for client: http://kibana2.xyz:9200
2017-07-13T15:34:30Z DBG HEAD http://kibana2.xyz:9200/_template/packetbeat <nil>
2017-07-13T15:34:30Z DBG tcp%!(EXTRA string=Port map: %v, map[uint16]protos.Protocol=map[])
2017-07-13T15:34:30Z DBG Port map: map[]
2017-07-13T15:34:30Z DBG Layer type: Linux SLL
2017-07-13T15:34:30Z INFO packetbeat start running.
2017-07-13T15:34:30Z DBG Waiting for the sniffer to finish
2017-07-13T15:34:30Z DBG Packet number: 1
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Packet number: 2
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
2017-07-13T15:34:30Z DBG Packet number: 3
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
2017-07-13T15:34:30Z DBG Packet number: 4
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z INFO Template already exists and will not be overwritten.
2017-07-13T15:34:30Z DBG Packet number: 5
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Packet number: 6
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
2017-07-13T15:34:30Z DBG Packet number: 7
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
2017-07-13T15:34:30Z DBG Packet number: 8
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
2017-07-13T15:34:30Z DBG Packet number: 9
2017-07-13T15:34:30Z DBG decode packet data
2017-07-13T15:34:30Z DBG IPv4 packet
2017-07-13T15:34:30Z DBG TCP packet
2017-07-13T15:34:30Z DBG Ignore empty non-FIN packet
Consider changing the sniffer type from pcap to af_packet and adjusting the sniffer's snaplen to 9004 bytes.
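A minimal sketch of how that might look in packetbeat.yml for Packetbeat 5.x (the interface name eth0 is just a placeholder, use whichever interface you actually capture on):

# hypothetical example; replace eth0 with the real capture interface
packetbeat.interfaces.device: eth0
packetbeat.interfaces.type: af_packet
packetbeat.interfaces.snaplen: 9004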
You can also create a raw network trace via tcpdump -w test.pcap -i .... This trace can be replayed with packetbeat by starting it with -I test.pcap. With a capture at hand, we may have a chance to see what's going on.
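For example (interface name and packet count are only illustrative):

# capture some traffic with full packets (snaplen 0 = unlimited)
tcpdump -i eth0 -s 0 -c 1000 -w test.pcap

# replay the capture through packetbeat using the same configuration
packetbeat -e -I test.pcap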