Packetbeat index not created and no info from Kibana

Hi there guys,
I'm new to the ELK stack. I was able to install it, I can see some dashboards (Topbeat, for example), and I can use Discover on the Filebeat index, but I've had no luck with Packetbeat. This is what I've got:

```
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open %{[@metadata][logstash]}-2016.10.03 5 1 341 0 357.9kb 357.9kb
yellow open filebeat-2016.04.16 5 1 5858 0 2.2mb 2.2mb
yellow open %{[@metadata][logstash]}-2016.10.02 5 1 202 0 267.9kb 267.9kb
yellow open %{[@metadata][logstash]}-2016.10.04 5 1 50744 0 25mb 25mb
yellow open topbeat-2016.09.23 5 1 611555 0 165.6mb 165.6mb
yellow open %{[@metadata][sensu]}-2016.10.04 5 1 35780 0 14.8mb 14.8mb
yellow open filebeat-2016.09.21 5 1 111446 0 42.5mb 42.5mb
yellow open %{[@metadata][beat]}-2016.10.03 5 1 56 0 152.8kb 152.8kb
yellow open filebeat-2016.09.22 5 1 121336 0 46.1mb 46.1mb
yellow open filebeat-2016.09.20 5 1 93355 0 33.7mb 33.7mb
yellow open filebeat-2016.10.02 5 1 202 0 172.8kb 172.8kb
yellow open filebeat-2016.10.04 5 1 12527 0 9.3mb 9.3mb
yellow open topbeat-2016.10.04 5 1 38219 0 13.8mb 13.8mb
yellow open filebeat-2016.10.03 5 1 341 0 250.3kb 250.3kb
yellow open %{[@metadata][beat]}-2016.09.16 5 1 22 0 181.7kb 181.7kb
yellow open filebeat-2016.09.11 5 1 200 0 274.5kb 274.5kb
yellow open filebeat-2016.09.12 5 1 2302 0 1.8mb 1.8mb
yellow open filebeat-2016.09.13 5 1 3349 0 2.7mb 2.7mb
yellow open .kibana 1 1 106 0 91.5kb 91.5kb
yellow open filebeat-2016.09.18 5 1 68771 0 23.3mb 23.3mb
yellow open filebeat-2016.09.19 5 1 74057 0 26.1mb 26.1mb
yellow open filebeat-2016.08.11 5 1 13 0 75.4kb 75.4kb
yellow open filebeat-2016.09.14 5 1 3568 0 3.1mb 3.1mb
yellow open filebeat-2016.09.15 5 1 948288 0 168mb 168mb
yellow open filebeat-2016.09.16 5 1 50915 0 15.2mb 15.2mb
yellow open filebeat-2016.09.17 5 1 63702 0 20.1mb 20.1mb
```

Packetbeat client configuration:
```
grep -v '#' packetbeat.yml

interfaces:
  device: any

protocols:
  http:
    ports: [80]

procs:
  enabled: false
  monitored:
    - process: sshd
      cmdline_grep: sshd

output:
  logstash:
    hosts: ["server:5044"]
    index: packetbeat
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  file:
    path: "/var/log/packetbeat"
    filename: packetbeat.log
    number_of_files: 60

shipper:
  tags: ["server-test"]
  ignore_outgoing: true
  refresh_topology_freq: 60
  topology_expire: 120
  queue_size: 1000

  geoip:
    paths:
      - "/usr/share/GeoIP/GeoLiteCity.dat"

logging:
  to_syslog: true
  to_files: true
  files:
    path: /var/log/packetbeat
    name: packetbeat.log
    keepfiles: 60
  level: debug
```

NOTE: the cert is working; I'm using it with Filebeat and Topbeat.
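
(One thing that may be worth checking, since this configuration only sniffs HTTP on port 80: whether matching traffic actually reaches the interface at all. A quick generic check with tcpdump, for example:)

```
tcpdump -i any -n 'tcp port 80'
```

If nothing shows up here, Packetbeat has nothing to publish and no index would be created.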

In Kibana, when I select the index, I get this message:

Mapping conflict! A field is defined as several types (string, integer, etc) across the indices that match this pattern. You may still be able to use these conflict fields in parts of Kibana, but they will be unavailable for functions that require Kibana to know their type. Correcting this issue will require reindexing your data.
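
(One way to track down such a conflict is to compare the field's mapping across the indices the pattern matches, e.g. with the field mapping API. The `filebeat-*` pattern and the field name `port` below are just placeholders for whatever pattern and field conflict in your setup:)

```
curl 'localhost:9200/filebeat-*/_mapping/field/port?pretty'
```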

On the server side:

```
curl -XGET 'http://localhost:9200/packetbeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
```
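
(A `"total" : 0` under `_shards` means the search matched no indices at all, i.e. no packetbeat-* index exists. This can be double-checked with, for example:)

```
curl 'localhost:9200/_cat/indices/packetbeat-*?v'
```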

Software versions:
kibana-4.4.2-1.x86_64
elasticsearch-2.4.0-1.noarch
packetbeat-1.3.1-1.x86_64
Red Hat Enterprise Linux Server release 6.7 (Santiago)

Any help appreciated
Best regards

Do you run it with sudo?
Also try debug mode.

I moved your question to #beats

Hi
No, I do not run it with sudo. Where do I enable debug mode: Elasticsearch, Kibana, or Logstash? What extra info do you need?

Thank you very much
Regards

Reformatted, this is the configuration I've got:

Output:
```
output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```

```
output {
  # For debugging; remove later.
  stdout { codec => rubydebug { metadata => true } }

  # If you need a conditional on the output you could use a tag. Don't use
  # type, because it will be set to dns or http.
  if "packetbeat" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
```
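
(Note that the `if "packetbeat" in [tags]` conditional only matches if the Beat actually ships that tag. The packetbeat.yml above sets `tags: ["server-test"]`, so the tag would have to be added there too; a minimal sketch, keeping the existing tag as well:)

```
shipper:
  tags: ["packetbeat", "server-test"]
```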

Filter:

```
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
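
(For illustration, a hypothetical line such as the following would be split by that grok pattern into syslog_timestamp, syslog_hostname, syslog_program, syslog_pid, and syslog_message:)

```
Oct  4 20:09:33 server01 sshd[1234]: Accepted publickey for root from 10.0.0.5
```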

Sensu filter:
```
filter {
  if [type] == "sensu" {
    date {
      match => [ "[check][issued]", "UNIX" ]
    }
    mutate {
      remove_field => [ "host", "[client][handlers]", "[check][handlers]", "[check][history]", "[client][keepalive][handler]", "[client][keepalive][refresh]", "[client][keepalive][thresholds][critical]", "[client][keepalive][thresholds][warning]", "[client][subscriptions]", "[client][address]" ]
    }
  }
}
```

```
filter {
  mutate {
    add_field => { "event_id" => "%{[client][name]}%{[check][name]}%{[check][status]}" }
  }

  throttle {
    after_count => 1
    period => 86400
    key => "%{event_id}"
    add_tag => "throttled"
  }
}
```

```
filter {
  grok {
    match => { "message" => "%{DATA:metric} %{DATA:value} %{INT:unixtime}" }
  }
}
```

Sensu input:

```
input {
  tcp {
    port => 5514
    codec => "json"
    type => "sensu-logs"
  }
}
```

Beats input:

```
input {
  beats {
    port => 5044
    congestion_threshold => "60"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```

Thanks in advance
Regards

I still cannot see any index related to Packetbeat.

```
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open filebeat-2016.10.04 5 1 2597 0 1.7mb 1.7mb
yellow open topbeat-2016.10.04 5 1 13764 0 4.3mb 4.3mb
yellow open .kibana 5 1 103 0 130kb 130kb
```

It seems that this output comes from Topbeat (which is monitoring the packetbeat process) and not from Packetbeat itself:

"@timestamp" => "2016-10-04T20:09:33.075Z",
"type" => "process",
"count" => 1,
"proc" => {
"cmdline" => "/usr/bin/packetbeat -c /etc/packetbeat/packetbeat.yml",
"cpu" => {
"user" => 390,
"user_p" => 0.0013,
"system" => 120,
"total" => 510,
"start_time" => "15:05"
},
"mem" => {
"size" => 504987648,
"rss" => 66396160,
"rss_p" => 0,
"share" => 9719808
},
"name" => "packetbeat",
"pid" => 29837,
"ppid" => 29836,
"state" => "sleeping",
"username" => "root"

Any help on this?

Thanks a lot!

To make sure Packetbeat works as intended, I recommend starting it with the options -e -d "*" to check whether the expected output is created.
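
(Assuming the default config path, the full command would look something like:)

```
packetbeat -e -d "*" -c /etc/packetbeat/packetbeat.yml
```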

Looking at the outputs above, it seems to me that several things got mixed together because you tried several things at once. To identify the issues, I strongly recommend doing one thing at a time.

Can you please format the code parts in all the posts above with three backticks before and after, to make them readable?

Hi, ok, I started packetbeat as you suggested:

```
2016/10/05 18:39:26.259284 client.go:100: DBG connect
2016/10/05 18:39:26.454283 outputs.go:126: INFO Activated logstash as output plugin.
2016/10/05 18:39:26.454296 publish.go:232: DBG Create output worker
2016/10/05 18:39:26.454326 publish.go:232: DBG Create output worker
2016/10/05 18:39:26.454359 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
2016/10/05 18:39:26.454397 publish.go:288: INFO Publisher name: z77stpuppetd01
2016/10/05 18:39:26.454544 async.go:78: INFO Flush Interval set to: -1ms
2016/10/05 18:39:26.454553 async.go:84: INFO Max Bulk Size set to: -1
2016/10/05 18:39:26.454558 async.go:78: INFO Flush Interval set to: 1s
2016/10/05 18:39:26.454562 async.go:84: INFO Max Bulk Size set to: 2048
2016/10/05 18:39:26.454567 async.go:92: DBG create bulk processing worker (interval=1s, bulk size=2048)
2016/10/05 18:39:26.454607 beat.go:168: INFO Init Beat: packetbeat; Version: 1.3.1
2016/10/05 18:39:26.455153 procs.go:88: INFO Process matching enabled
2016/10/05 18:39:26.455275 packetbeat.go:166: DBG Initializing protocol plugins
2016/10/05 18:39:26.455294 procs.go:147: DBG In RefreshPids
2016/10/05 18:39:26.455308 procs.go:147: DBG In RefreshPids
2016/10/05 18:39:26.455354 mongodb.go:73: DBG Init a MongoDB protocol parser
2016/10/05 18:39:26.455385 memcache.go:105: DBG init memcache plugin
2016/10/05 18:39:26.455393 memcache.go:158: DBG maxValues = 0
2016/10/05 18:39:26.455398 memcache.go:159: DBG maxBytesPerValue = 2147483647
2016/10/05 18:39:26.455503 icmp.go:69: DBG Local IP addresses: [127.0.0.1 10.77.1.146 ::1 fe80::250:56ff:fe88:32d6]
2016/10/05 18:39:26.455544 tcp.go:293: DBG tcp%!(EXTRA string=Port map: %v, map[uint16]protos.Protocol=map[80:http])
2016/10/05 18:39:26.455553 udp.go:93: DBG Port map: map[]
2016/10/05 18:39:26.455560 packetbeat.go:212: DBG Initializing sniffer
2016/10/05 18:39:26.455570 sniffer.go:251: DBG BPF filter: tcp port 80
2016/10/05 18:39:26.455577 sniffer.go:130: DBG Sniffer type: pcap device: any
2016/10/05 18:39:26.466334 decoder.go:63: DBG Layer type: Linux SLL
2016/10/05 18:39:26.466412 beat.go:194: INFO packetbeat sucessfully setup. Start running.
2016/10/05 18:39:26.466434 packetbeat.go:244: DBG Waiting for the sniffer to finish
2016/10/05 18:39:26.966980 sniffer.go:297: DBG Interrupted
2016/10/05 18:39:27.455347 procs.go:149: DBG In RefreshPids tick
2016/10/05 18:39:27.455351 procs.go:149: DBG In RefreshPids tick
2016/10/05 18:39:27.467119 sniffer.go:297: DBG Interrupted
2016/10/05 18:39:27.514184 procs.go:155: DBG RefreshPids found pids [%!s(int=6656) %!s(int=13549) %!s(int=13552)] for process sshd
2016/10/05 18:39:27.514308 procs.go:155: DBG RefreshPids found pids [%!s(int=6814) %!s(int=6820) %!s(int=6821) %!s(int=6822) %!s(int=6823) %!s(int=6824)] for process zabbix_agentd
```

I see nothing else on the screen, but the data does seem to be sent to the server.
What other info do you need? Do you want me to paste the Logstash configuration (input/output/filter)?

As I mentioned before, Filebeat and Topbeat are working.

Thanks for your time and support
Regards

These are the indices:
```
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 5 1 104 0 123.7kb 123.7kb
yellow open filebeat-2016.09.30 5 1 4551 0 2.9mb 2.9mb
yellow open filebeat-2016.10.05 5 1 2175421 0 607.6mb 607.6mb
yellow open filebeat-2016.09.29 5 1 4963 0 3.3mb 3.3mb
yellow open filebeat-2016.04.16 5 1 2356 0 1mb 1mb
yellow open filebeat-2016.10.02 5 1 7725 0 4.5mb 4.5mb
yellow open filebeat-2016.09.27 5 1 1174 0 996.8kb 996.8kb
yellow open filebeat-2016.09.28 5 1 3234 0 2.3mb 2.3mb
yellow open filebeat-2016.10.01 5 1 4538 0 3mb 3mb
yellow open topbeat-2016.10.05 5 1 77087 0 47.9mb 47.9mb
yellow open filebeat-2016.10.04 5 1 7715 0 4.6mb 4.6mb
yellow open filebeat-2016.10.03 5 1 7933 0 4.7mb 4.7mb
```

Can you please format your posts as requested above? Code as plain text is very hard to read.

I recommend pointing Packetbeat at Elasticsearch directly first and seeing what happens.
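
(A minimal sketch of that test in packetbeat.yml, assuming Elasticsearch is reachable from the Packetbeat host; "server" is a placeholder matching the hostname used for the Logstash output above. Comment out the logstash section while testing:)

```
output:
  elasticsearch:
    hosts: ["server:9200"]
```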

OK, I'll have to reconfigure it first, because I've got Elasticsearch listening on localhost:9200 only.

At last! Pointing it at Elasticsearch directly created the index!

```
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open filebeat-2016.09.30 5 1 4551 0 2.9mb 2.9mb
yellow open filebeat-2016.04.16 5 1 2356 0 1mb 1mb
yellow open packetbeat-2016.10.05 5 1 6 0 34.8kb 34.8kb
yellow open .kibana 5 1 104 0 123.7kb 123.7kb
```

I can also see the data on the dashboard.

I formatted some info as you suggested. Since pointing it directly at ES works, maybe it's a Logstash configuration issue? Do you want me to collect any particular log?

Thanks for your time and support
Regards

Hi, can you help me review the Logstash filter/input/output, please?

Thanks for your time and support
Regards

For Logstash questions, please post in the Logstash forum. There, too, it will be appreciated if you format your posts properly.