Help reviewing filter/input/output: Packetbeat not working with LS

Hi there guys,

I've been having this issue (Packetbeat index not created and no info in Kibana): I cannot ingest the data sent by Packetbeat into Elasticsearch through Logstash. If I configure the output to connect directly to ES it works, so I need your assistance.

Apart from that, maybe I'm hitting my head against the wall unnecessarily: is there any advantage to configuring the output to connect directly to ES instead of going through LS?

I've configured Filebeat and Topbeat and they are working through LS. This is my configuration:

00-log.conf (for Sensu; was trying to configure metrics, still no luck)

input {
  tcp {
    port => 5514
    codec => "json"
    type => "sensu-logs"
  }
}

01-beats-input.conf (for the Beats)

input {
  beats {
    port => 5044
    congestion_threshold => "60"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
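For reference, here is how that grok pattern decomposes a typical syslog line (the sample line below is hypothetical, not from my logs):

Oct 20 10:36:55 myhost sshd[1234]: Accepted password for admin

  syslog_timestamp => "Oct 20 10:36:55"
  syslog_hostname  => "myhost"
  syslog_program   => "sshd"
  syslog_pid       => "1234"
  syslog_message   => "Accepted password for admin"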

11-sensu-filter.conf

filter {
  if [type] == "sensu" {
    date {
      match => [ "[check][issued]", "UNIX" ]
    }
    mutate {
      remove_field => [ "host", "[client][handlers]", "[check][handlers]", "[check][history]", "[client][keepalive][handler]", "[client][keepalive][refresh]", "[client][keepalive][thresholds][critical]", "[client][keepalive][thresholds][warning]", "[client][subscriptions]", "[client][address]" ]
    }
  }
}

filter {
  mutate {
    add_field => { "event_id" => "%{[client][name]}%{[check][name]}%{[check][status]}" }
  }

  throttle {
    after_count => 1
    period => 86400
    key => "%{event_id}"
    add_tag => "throttled"
  }
}
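With after_count => 1 and period => 86400, the first event for a given event_id passes through untagged and every repeat with the same key inside 24 hours gets the "throttled" tag. A sketch of how that tag could then be consumed to drop the repeats (this conditional is not part of my config, just an illustration):

filter {
  # Hypothetical follow-up: drop the events the throttle filter tagged.
  if "throttled" in [tags] {
    drop { }
  }
}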

filter {
  grok {
    match => { "message" => "%{DATA:metric} %{DATA:value} %{INT:unixtime}" }
  }
}
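As a worked example, a hypothetical line such as "cpu.load 0.95 1476975181" would produce:

  metric   => "cpu.load"
  value    => "0.95"
  unixtime => "1476975181"

(all strings; grok does not cast to numbers unless the pattern asks for it, e.g. %{INT:unixtime:int}).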

20-packetbeat-output.conf

output {
  # For debugging, remove later.
  stdout { codec => rubydebug { metadata => true } }

  # If you need a conditional on the output you could use a tag. Don't use
  # type because it will be set to dns or http.
  if "packetbeat" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
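Note that the tag conditional above only matches if Packetbeat actually adds that tag to its events. A minimal sketch of the Packetbeat side (the tag name "packetbeat" is my assumption and has to match the conditional):

shipper:
  tags: ["packetbeat"]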

30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => "localhost:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
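One caveat with manage_template => false: Logstash will not install the Packetbeat index template, so it has to be loaded into ES manually, for example like this (assuming the template file is in the default package location):

curl -XPUT 'http://localhost:9200/_template/packetbeat' \
  -d@/etc/packetbeat/packetbeat.template.json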

Version numbers:
logstash-2.2.4-1.noarch
elasticsearch-2.4.0-1.noarch

Any help/advice really appreciated!!

Thanks for your time and support
Best regards

Guys, any help/advice on this?

Thank you very much for your time and support
Regards

Are Packetbeat events being fed to stdout (which should be the case given the stdout output)? What does an example event look like?

Hi, this is the output I get when configured to connect directly to ES.

timestamp                                 October 13th 2016, 13:33:01.071
_id                                       AVe_UaV5XF1rq7trfaeB
_index                                    packetbeat-2016.10.13
_score
_type                                     http
beat.hostname                             servername
beat.name                                 servername
bytes_in                                  815
bytes_out                                 208
client_ip                                 x.x.x.x
client_port                               42,125
client_proc
client_server
count                                     1
direction                                 in
http.code                                 200
http.content_length                       72
http.phrase                               OK
http.request_headers.accept               application/json, text/plain, */*
http.request_headers.accept-language      es-AR,es;q=0.8,en-US;q=0.5,en;q=0.3
http.request_headers.connection           Keep-Alive
http.request_headers.cookie               uchiwa_theme=uchiwa-default; BIGipServerpool_servername01_80=2177738944.20480.0000; uchiwa_toastrSettings=%7B%22positionClass%22%3A%22toast-bottom-right%22%2C%22preventOpenDuplicates%22%3Atrue%2C%22timeOut%22%3A7500%7D; hideSilenced=false; hideClientsSilenced=false; hideOccurrences=false
http.request_headers.host                 x.x.x.x
http.request_headers.referer              http://servername/
http.request_headers.user-agent           Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0
http.request_headers.via                  1.1 servername
http.request_headers.x-forwarded-for      x.x.x.x,x.x.x.x,x.x.x
http.request_headers.x-forwarded-host     servername, servername
http.request_headers.x-forwarded-server   servername, servername
http.response_headers.connection          close
http.response_headers.content-length      72
http.response_headers.content-type        text/plain; charset=utf-8
http.response_headers.date                Thu, 13 Oct 2016 18:33:01 GMT
ip                                        x.x.x.x
method                                    GET
params
path                                      /health
port                                      80
proc
query                                     GET /health
real_ip                                   x.x.x.x
responsetime                              0
server
status                                    OK
tags                                      servername-tag-inconfig-file
type                                      http

Trying again:

  • Is there anything in the Packetbeat log that indicates any problems connecting to Logstash?
  • Are Packetbeat events being fed to Logstash's stdout (which should be the case given the stdout output)?
  • If yes, what does an example event produced by the stdout output look like?
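If you want to isolate the problem, a minimal pipeline like this (a sketch reusing the port and certificate paths from your configuration) prints every event Logstash receives without involving ES:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  stdout { codec => rubydebug { metadata => true } }
}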

Hi Magnus, sorry for the delay, I had some issues with the VM and had to start over; this is a fresh new installation.

Logstash stuff:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["z77s-daem04.zebra.lan"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Packetbeat configuration:

interfaces:
  device: eth0
protocols:
  http:
    ports: [80]
    send_all_headers: true
    split_cookie: true
    real_ip_header: "X-Forwarded-For"
output:
  logstash:
    hosts: ["server:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  file:
    path: "/var/log/packetbeat"
    filename: packetbeat.log
    number_of_files: 7
shipper:
  name: servername
  ignore_outgoing: true
  refresh_topology_freq: 60
  topology_expire: 120
  queue_size: 1000
  geoip:
    paths:
      - "/usr/share/GeoIP/GeoLiteCity.dat"
logging:
  to_syslog: true
  to_files: true
  files:
    path: /var/log/packetbeat
    name: packetbeat.log
    keepfiles: 7
  selectors: ["*"]
  level: debug
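To rule out TLS or connectivity problems between Packetbeat and Logstash, one quick check from the Packetbeat host would be something like this ("server" is the placeholder hostname from the config above):

openssl s_client -connect server:5044 \
  -CAfile /etc/pki/tls/certs/logstash-forwarder.crt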

No index is created:

yellow open   filebeat-2016.10.16   5   1      10688            0      4.7mb          4.7mb
yellow open   filebeat-2016.10.17   5   1      16288            0      6.5mb          6.5mb
yellow open   filebeat-2016.10.18   5   1      16255            0      6.6mb          6.6mb
yellow open   filebeat-2016.10.19   5   1      12616            0      5.4mb          5.4mb
yellow open   .kibana               1   1        103            0     88.2kb         88.2kb
yellow open   filebeat-2016.10.14   5   1        696            0    833.7kb        833.7kb
yellow open   filebeat-2016.10.15   5   1       1647            0      1.6mb          1.6mb
yellow open   filebeat-2016.10.20   5   1     156814            0     39.3mb         39.3mb
yellow open   topbeat-2016.10.20    5   1       9312            0      3.4mb          3.4mb

From the logs:

2016-10-20T10:36:55-05:00 DBG  Init a MongoDB protocol parser
2016-10-20T10:36:55-05:00 DBG  Local IP addresses: [127.0.0.1 x.x.x.x]
2016-10-20T10:36:55-05:00 DBG  tcp%!(EXTRA string=Port map: %v, map[uint16]protos.Protocol=map[80:http])

2016-10-20T10:36:55-05:00 DBG  Initializing sniffer
2016-10-20T10:36:55-05:00 DBG  BPF filter: tcp port 80
2016-10-20T10:36:55-05:00 DBG  Sniffer type: pcap device: eth0
2016-10-20T10:36:55-05:00 DBG  Layer type: Ethernet
2016-10-20T10:36:55-05:00 INFO packetbeat sucessfully setup. Start running
2016-10-20T10:53:43-05:00 DBG  Interrupted
2016-10-20T10:53:43-05:00 DBG  Interrupted
2016-10-20T10:53:44-05:00 DBG  Interrupted
2016-10-20T10:53:44-05:00 DBG  Interrupted
2016-10-20T10:53:45-05:00 DBG  Interrupted
2016-10-20T10:53:45-05:00 DBG  Interrupted
2016-10-20T10:53:46-05:00 DBG  Interrupted
2016-10-20T10:53:46-05:00 DBG  Interrupted

Can you please shed some light on this?
Thank you very much for your time and support
Regards

I do not want to waste your time, but after reading https://www.elastic.co/guide/en/beats/packetbeat/5.0/faq.html, does it make any difference to point the configuration directly at ES? Doing it that way, it works.

Thanks
Regards

Please edit your post and make sure the configuration is formatted as preformatted text (there's a toolbar button for that) and that the indentation looks exactly like in your actual configuration. Indentation is important in YAML files and if you don't post your file exactly like it is we might be unable to spot errors.

Hi Magnus, I preformatted the text! As I said before, I configured it to connect directly to ES (Filebeat and Topbeat are configured to connect to Logstash), but Packetbeat still doesn't work through LS.

Thanks for your time and support
Regards