Packetbeat crashes

Hi All,

I have an issue with Packetbeat. I installed Packetbeat to monitor my MySQL traffic, but the moment it recorded a transaction, it crashed and the service stopped. No transactions made it through.
So for the moment I moved on and installed Packetbeat on a web server. The same thing happens there, with the same main error: panic: runtime error: index out of range

The rest of the output (also the same):
goroutine 34 [running]:
github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP.(*GeoIP).lookupByIPNum(0xc821156990, 0xc80ace0379, 0xc82137fa78)
/go/src/github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP/libgeo.go:295 +0x29e
github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP.(*GeoIP).GetLocationByIPNum(0xc821156990, 0xace0379, 0xc80ace0379)
/go/src/github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP/libgeo.go:211 +0x36
github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP.(*GeoIP).GetLocationByIP(0xc821156990, 0xc82114ecd0, 0xc, 0xc82137fa68)
/go/src/github.com/elastic/beats/vendor/github.com/nranchev/go-libGeoIP/libgeo.go:205 +0x41
github.com/elastic/beats/libbeat/publisher.updateEventAddresses(0xef9660, 0xc8212846f0, 0x9f5630)
/go/src/github.com/elastic/beats/libbeat/publisher/preprocess.go:174 +0xa26
github.com/elastic/beats/libbeat/publisher.(*preprocessor).onMessage(0xc821158ca0, 0xc820010000, 0x0, 0x0, 0xc8212846f0, 0x0, 0x0, 0x0)
/go/src/github.com/elastic/beats/libbeat/publisher/preprocess.go:49 +0xb60
github.com/elastic/beats/libbeat/publisher.(*messageWorker).run(0xc820016d80)
/go/src/github.com/elastic/beats/libbeat/publisher/worker.go:57 +0x220
created by github.com/elastic/beats/libbeat/publisher.(*messageWorker).init
/go/src/github.com/elastic/beats/libbeat/publisher/worker.go:47 +0xdf

goroutine 1 [chan receive]:
main.(*Packetbeat).Run(0xc8204a4480, 0xc821280000, 0x0, 0x0)
/go/src/github.com/elastic/beats/packetbeat/packetbeat.go:227 +0xbe
github.com/elastic/beats/libbeat/beat.(*Beat).Run(0xc821280000)
/go/src/github.com/elastic/beats/libbeat/beat/beat.go:136 +0x31c
main.main()
/go/src/github.com/elastic/beats/packetbeat/main.go:40 +0x3b1

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1

goroutine 5 [syscall]:
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
/usr/local/go/src/os/signal/signal_unix.go:28 +0x37

goroutine 35 [select]:
github.com/elastic/beats/libbeat/publisher.(*messageWorker).run(0xc821156ba0)
/go/src/github.com/elastic/beats/libbeat/publisher/worker.go:53 +0x245
created by github.com/elastic/beats/libbeat/publisher.(*messageWorker).init
/go/src/github.com/elastic/beats/libbeat/publisher/worker.go:47 +0xdf

goroutine 36 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc820056d40, 0xc820056d00)
/go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x102
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
/go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x91

goroutine 37 [select]:
github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor.func1(0xc820056dc0, 0xc820056d80)
/go/src/github.com/elastic/beats/libbeat/common/cache.go:219 +0x102
created by github.com/elastic/beats/libbeat/common.(*Cache).StartJanitor
/go/src/github.com/elastic/beats/libbeat/common/cache.go:227 +0x91
...

Any idea?

Which version/package did you install? Which OS are you on?

Sorry... indeed, essential info:
On the VMs:
Distribution: CentOS 6.7 (on all VMs)
Beats (on all VMs):
filebeat-1.0.1-1.x86_64
topbeat-1.0.1-1.x86_64
packetbeat-1.0.1-1.x86_64

On the Elasticsearch servers (there are 2 of them, 8 GB RAM / 4 CPUs each):
elasticsearch-2.1.1-1.noarch

My logstash hosts (3):
logstash-2.1.1-1.noarch

Kibana host:
kibana-4.3.1-1.x86_64

Packetbeat outputs to Elasticsearch.

Packetbeat config (for the webserver):
############################# Sniffer #########################################

# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
  device: any
############################# Protocols #######################################
protocols:
  http:
    ports:
      - 80
      - 8080
    hide_keywords:
      - pass
      - password
      - passwd
    send_headers:
      - User-Agent
      - Cookie
      - Set-Cookie
    split_cookie: true
    real_ip_header: X-Forwarded-For

############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["elasticsearch01dev:9200","elasticsearch02dev:9200","kibana01dev:9200"]

    # Number of workers per Elasticsearch host.
    worker: 1

    # Optional index name. The default is "packetbeat" and generates
    # [packetbeat-]YYYY.MM.DD keys.

    # Optional HTTP Path

    # The number of times a particular Elasticsearch index operation is attempted. If
    # the indexing operation doesn't succeed after this many retries, the events are
    # dropped. The default is 3.
    max_retries: 3

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    # The default is 50.
    bulk_max_size: 50

    # Configure http request timeout before failing a request to Elasticsearch.
    #timeout: 90

    # The number of seconds to wait for new events between two bulk API index requests.
    # If `bulk_max_size` is reached before this interval expires, additional bulk index
    # requests are made.
    #flush_interval: 1

    # Boolean that sets if the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    save_topology: true


    # Optional TLS. By default is off.
    tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      certificate: "/etc/pki/filebeat/filebeat.pem"

      # Client Certificate Key
      certificate_key: "/etc/pki/filebeat/filebeat.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/packetbeat"

    # Name of the generated files. The default is `packetbeat` and it generates files: `packetbeat`, `packetbeat.1`, `packetbeat.2`, etc.
    #filename: packetbeat

    # Maximum size in kilobytes of each file. When this size is reached, the files are
    # rotated. The default value is 10 MB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7

  ### Console output
  console:
    # Pretty print json event
    #pretty: false

############################# Shipper #########################################

shipper:
  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  tags: [""]

  # Configure local GeoIP database support.
  # If no paths are configured, geoip is disabled.
  geoip:
    paths:
      - /usr/share/GeoIP/GeoLiteCity.dat

It looks like Packetbeat crashed while reading your GeoIP data file. I recommend downloading a clean copy to ensure there is no corruption of that file and then retesting.
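
In case it helps with the retest, here is a minimal standalone sketch for checking the .dat file outside of Packetbeat. It uses the same github.com/nranchev/go-libGeoIP package that appears in your stack trace; GetLocationByIP is visible in the trace, while the Load constructor, the import alias, and the test IP are my assumptions/placeholders. Since the panic in your trace happens during a lookup rather than at load time, the sketch performs one lookup as well:

package main

import (
    "fmt"
    "log"

    libgeo "github.com/nranchev/go-libGeoIP" // library Packetbeat vendors for GeoIP lookups
)

func main() {
    // Same path as in the geoip.paths setting above.
    db, err := libgeo.Load("/usr/share/GeoIP/GeoLiteCity.dat")
    if err != nil {
        log.Fatalf("could not load GeoIP database: %v", err)
    }

    // Any routable IP works here; 8.8.8.8 is only an example.
    loc := db.GetLocationByIP("8.8.8.8")
    if loc == nil {
        fmt.Println("lookup returned no location for the test IP")
        return
    }
    fmt.Printf("lookup OK: %+v\n", loc)
}

If this little program panics with the same "index out of range" error, the .dat file itself is the problem rather than anything in your Packetbeat configuration.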

Thanks. The GeoIP data file was indeed corrupt. I had already tried changing the permissions of the file, but that wasn't it... Who would have thought that the file from the GeoIP package in our local repos was corrupt? You, apparently... :-)

Thanks!!

SOLVED