Metricbeat data gets into logstash but cannot index logs

Hi there,

I'm a newbie to the Elastic Stack and I'm trying to set up Topbeat to ship data to Logstash. I have already installed Filebeat on my client and it ships logs to Logstash without issues.

Topbeat logs from client -

2017-08-14T13:04:05-04:00 DBG  output worker: publish 262 events
2017-08-14T13:04:05-04:00 DBG  Try to publish 262 events to logstash with window size 265
2017-08-14T13:04:05-04:00 DBG  262 events out of 262 events sent to logstash. Continue sending ...
2017-08-14T13:04:05-04:00 DBG  send completed

Logstash logs from ELK host -

#<LogStash::Event:0x9fa54f4 @metadata_accessors=#<LogStash::Util::Accessors:0x65add25c @store={"type"=>"filesystem", "beat"=>"topbeat"}, @lut={"[type]"=>[{"type"=>"filesystem", "beat"=> topbeat"}, "type"], "[beat]"=>[{"type"=>"filesystem", "beat"=>"topbeat"}, "beat"]}>, @cancelled=false, @data={"fs"=>{"device_name"=>"tmpfs", "total"=>414363648, "used"=>0, "used_p"=>0, "free"=>414363648, "avail"=>414363648, "files"=>505813, "free_files"=>505809, "mount_point"=>"/run/user/1000"}, "count"=>1, "beat"=>{"hostname"=>"devlog", "name"=>"devlog"}, "@timestamp"=>"2017-08-14T16:26:00.395Z", "type"=>"filesystem", "@version"=>"1", "host"=>"devlog", "tags"=>["beats_input_raw_event"]}, @metadata={"type"=>"filesystem", "beat"=>"topbeat"}, @accessors=#<LogStash::Util::Accessors:0x71c47fb4 @store={"fs"=>{"device_name"=>"tmpfs", "total"=>414363648, "used"=>0, "used_p"=>0, "free"=>414363648, "avail"=>414363648, "files"=>505813, "free_files"=>505809, "mount_point"=>"/run/user/1000"}, "count"=>1, "beat"=>{"hostname"=>"devlog", "name"=>"devlog"}, "@timestamp"=>"2017-08-14T16:26:00.395Z", "type"=>"filesystem", "@version"=>"1", "host"=>"devlog", "tags"=>["beats_input_raw_event"]}, @lut={"[beat][hostname]"=>[{"hostname"=>"devlog", "name"=>"devlog"}, "hostname"], "host"=>[{"fs"=>{"device_name"=>"tmpfs", "total"=>414363648, "used"=>0, "used_p"=>0, "free"=>414363648, "avail"=>414363648, "files"=>505813, "free_files"=>505809, "mount_point"=>"/run/user/1000"}, "count"=>1, "beat"=>{"hostname"=>"devlog", "name"=>"devlog"}, "@timestamp"=>"2017-08-14T16:26:00.395Z", "type"=>"filesystem", "@version"=>"1", "host"=>"devlog", "tags"=>["beats_input_raw_event"]}, "host"], "tags"=>[{"fs"=>{"device_name"=>"tmpfs", "total"=>414363648, "used"=>0, "used_p"=>0, "free"=>414363648, "avail"=>414363648, "files"=>505813, "free_files"=>505809, "mount_point"=>"/run/user/1000"}, "count"=>1, "beat"=>{"hostname"=>"devlog", "name"=>"devlog"}, "@timestamp"=>"2017-08-14T16:26:00.395Z", "type"=>"filesystem", "@version"=>"1", "host"=>"devlog", "tags"=>["beats_input_raw_event"]}, "tags"], "[type]"=>[{"fs"=>{"device_name"=>"tmpfs", "total"=>414363648, "used"=>0, "used_p"=>0, "free"=>414363648, "avail"=>414363648, "files"=>505813, "free_files"=>505809, "mount_point"=>"/run/user/1000"}, "count"=>1, "beat"=>{"hostname"=>"devlog", "name"=>"devlog"}, "@timestamp"=>"2017-08-14T16:26:00.395Z", "type"=>"filesystem", "@version"=>"1", "host"=>"devlog", "tags"=>["beats_input_raw_event"]}, "type"]}>>], :response=>{"index"=>{"_index"=>"topbeat-2017.08.14", "_type"=>"filesystem", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"topbeat-2017.08.14", "index"=>"topbeat-2017.08.14"}}}, :level=>:warn}

With curl on the ELK host it returns 0 hits:

root@devlog:/home# curl -XGET 'http://host_name:9200/topbeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
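
The "total" : 0 under _shards means the topbeat-* wildcard matched no indices at all, which lines up with the index_not_found_exception in the Logstash log above. Listing which indices actually do exist (same host and port) should confirm that:

# list all indices on the cluster to see whether any topbeat-* index was ever created
curl -XGET 'http://host_name:9200/_cat/indices?v'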

Not sure what the issue is. I have SSL configured correctly on both hosts.

* To some extent I can relate my issue to the existing topic "Trouble getting topbeat data into elasticsearch via logstash".

How about using Metricbeat instead of Topbeat? Topbeat has been replaced by Metricbeat.

Thanks @andrewkroh for your quick response.

So I installed Metricbeat on my client server and it ships data to Logstash, but I still can't get it into Elasticsearch and then into Kibana. I removed Topbeat from the client server, yet I can still see some Topbeat-related entries in the logs.

Logstash logs from ELK server (metricbeat).

LogStash::Util::Accessors:0x72e84d70 @store={"type"=>"process", "beat"=>"topbeat"}, @lut={"[type]"=>[{"type"=>"process", "beat"=>"topbeat"}, "type"], "[beat]"=>[{"type"=>"process", "beat"=>"topbeat"}, "beat"]}>, @cancelled=false, @data={"@timestamp"=>"2017-08-14T18:33:44.250Z", "type"=>"process", "count"=>1, "proc"=>{"cmdline"=>"tail -f metricbeat", "cpu"=>{"user"=>30, "user_p"=>0, "system"=>600, "total"=>630, "start_time"=>"14:31"}, "mem"=>{"size"=>7499776, "rss"=>802816, "rss_p"=>0, "share"=>729088}, "name"=>"tail", "pid"=>5773, "ppid"=>28294, "state"=>"sleeping", "username"=>"root"}, "beat"=>{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "@version"=>"1", "host"=>"db1-postgres", "tags"=>["beats_input_raw_event"]}, @metadata={"type"=>"process", "beat"=>"topbeat"}, @accessors=#<LogStash::Util::Accessors:0x7c150a78 @store={"@timestamp"=>"2017-08-14T18:33:44.250Z", "type"=>"process", "count"=>1, "proc"=>{"cmdline"=>"tail -f metricbeat", "cpu"=>{"user"=>30, "user_p"=>0, "system"=>600, "total"=>630, "start_time"=>"14:31"}, "mem"=>{"size"=>7499776, "rss"=>802816, "rss_p"=>0, "share"=>729088}, "name"=>"tail", "pid"=>5773, "ppid"=>28294, "state"=>"sleeping", "username"=>"root"}, "beat"=>{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "@version"=>"1", "host"=>"db1-postgres", "tags"=>["beats_input_raw_event"]}, @lut={"[beat][hostname]"=>[{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "hostname"], "host"=>[{"@timestamp"=>"2017-08-14T18:33:44.250Z", "type"=>"process", "count"=>1, "proc"=>{"cmdline"=>"tail -f metricbeat", "cpu"=>{"user"=>30, "user_p"=>0, "system"=>600, "total"=>630, "start_time"=>"14:31"}, "mem"=>{"size"=>7499776, "rss"=>802816, "rss_p"=>0, "share"=>729088}, "name"=>"tail", "pid"=>5773, "ppid"=>28294, "state"=>"sleeping", "username"=>"root"}, "beat"=>{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "@version"=>"1", "host"=>"db1-postgres", "tags"=>["beats_input_raw_event"]}, "host"], "tags"=>[{"@timestamp"=>"2017-08-14T18:33:44.250Z", "type"=>"process", "count"=>1, "proc"=>{"cmdline"=>"tail -f metricbeat", "cpu"=>{"user"=>30, "user_p"=>0, "system"=>600, "total"=>630, "start_time"=>"14:31"}, "mem"=>{"size"=>7499776, "rss"=>802816, "rss_p"=>0, "share"=>729088}, "name"=>"tail", "pid"=>5773, "ppid"=>28294, "state"=>"sleeping", "username"=>"root"}, "beat"=>{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "@version"=>"1", "host"=>"db1-postgres", "tags"=>["beats_input_raw_event"]}, "tags"], "[type]"=>[{"@timestamp"=>"2017-08-14T18:33:44.250Z", "type"=>"process", "count"=>1, "proc"=>{"cmdline"=>"tail -f metricbeat", "cpu"=>{"user"=>30, "user_p"=>0, "system"=>600, "total"=>630, "start_time"=>"14:31"}, "mem"=>{"size"=>7499776, "rss"=>802816, "rss_p"=>0, "share"=>729088}, "name"=>"tail", "pid"=>5773, "ppid"=>28294, "state"=>"sleeping", "username"=>"root"}, "beat"=>{"hostname"=>"db1-postgres", "name"=>"db1-postgres"}, "@version"=>"1", "host"=>"db1-postgres", "tags"=>["beats_input_raw_event"]}, "type"]}>>], :response=>{"index"=>{"_index"=>"topbeat-2017.08.14", "_type"=>"process", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"topbeat-2017.08.14", "index"=>"topbeat-2017.08.14"}}}, :level=>:warn}
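
I notice those events still carry "beat"=>"topbeat", so either the old Topbeat process is still running on the client or Logstash is still draining queued Topbeat events. A quick check on the client (standard commands, nothing specific to this setup):

# confirm whether a topbeat process is still alive on the client
ps aux | grep -i [t]opbeat
sudo service topbeat status    # or: systemctl status topbeat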

When I curl the metricbeat-* indices from my ELK server, it returns 0 hits:

curl -XGET 'http://host_name:9200/metricbeat-*/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}

Please share the configuration you are using with Metricbeat and Logstash. Are there any error messages in the Metricbeat logs?
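
A quick way to check is to grep the Metricbeat log on the client; assuming the default /var/log/metricbeat path, something like:

# surface any errors Metricbeat has logged on the client
sudo grep -i err /var/log/metricbeat/metricbeat*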

@andrewkroh I was able to get rid of those errors by upgrading the entire ELK stack to the latest version (not sure what I was running before). I've since run into other errors, please see below.

Error -

  "error": "error making http request: Get http://host_name/server-status?auto=: dial tcp host_name:80: getsockopt: connection refused",
      "cmdline": "grep --color=auto error",
        "errors": 0,
        "errors": 0,
        "errors": 0,
        "errors": 0,

Config file for metricbeat -

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================
metricbeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  metricsets:
    # CPU stats
    - cpu

    # System Load stats
    - load

    # Per CPU core stats
    - core

    # IO stats
    - diskio

    # Per filesystem stats
    - filesystem

    # File system summary stats
    - fsstat

    # Memory stats
    - memory

    # Network stats
    - network

    # Per process stats
    - process

    # Sockets (linux only)
    #- socket
  enabled: true
  period: 10s
  processes: ['.*']
  cpu_ticks: false
- module: apache
  metricsets: ["status"]
  enabled: true
  period: 10s
  hosts: ["http://10.152.58.25"]

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
fields:
  env: production

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logger.ottonet.local:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/metricbeat
  name: metricbeat.log
  keepfiles: 7

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

Logstash Config -

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["host_name:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => elastic
    password => elastic
  }
  stdout { codec => rubydebug }
}
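
Side note on the Logstash output above: with manage_template => false the Metricbeat index template is not loaded automatically, so it has to be pushed to Elasticsearch by hand. On 5.x something like this should do it (the template path below is the usual package location and may differ on your install):

# load the Metricbeat index template manually, since Logstash is not managing it
curl -XPUT 'http://host_name:9200/_template/metricbeat' \
     -H 'Content-Type: application/json' \
     -d@/etc/metricbeat/metricbeat.template.json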

Looking at the error, does it mean the Apache server is not reachable?

Thanks in advance!

Yeah, it's requesting the content at http://host_name/server-status?auto= and the connection is being refused. Possibly the host is down or the service is down? Can you curl that URL?
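
Something along these lines, substituting whatever host the apache module actually points at (10.152.58.25 in the config above), should confirm it:

# hit the mod_status endpoint the apache module scrapes
curl -v 'http://10.152.58.25/server-status?auto='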

@andrewkroh, that was silly of me. Apache was listening on another port instead of the default 80. But I'm still receiving the error/INFO below -

ERR Connecting error publishing events (retrying): read tcp X.X.X.X:55842->X.X.X.X:5044: read: connection reset by peer

INFO Non-zero metrics in the last 30s: fetches.system-process.events=1 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=161 libbeat.outputs.messages_dropped=1 libbeat.publisher.messages_in_worker_queues=1 libbeat.publisher.published_events=1

I'm receiving the same error for all beats - Filebeat/Metricbeat/Packetbeat/Heartbeat.

Logstash - v5.5.1, Beats - v5.5.1 (for all)

That's an error with the connection to Logstash. It can happen under normal circumstances if the connection is idle for a period of time. If no events are ever being published to LS then maybe it's some kind of SSL issue in establishing the connection (try setting output.logstash.ssl.verification_mode: none and see if it starts working).
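
For example, a minimal sketch of that change in metricbeat.yml (verification_mode: none skips certificate verification, so it's only for narrowing the problem down, not for production):

output.logstash:
  hosts: ["logger.ottonet.local:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  # testing only: skip server certificate verification
  ssl.verification_mode: none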

Thanks @andrewkroh. I was out of options, so I started rebuilding the ELK stack from scratch. I'll update if I run into any issues later.
