New Filebeat/ELK stack user: trouble with Raspberry Pi

Hello all.

I have an ELK stack running on a CentOS server. I also have logging satellites running CentOS, and those send their logs with Filebeat. That all works fine.

I'm also trying to set up my Raspberry Pis with Filebeat, and while Filebeat is running and appears to be working, nothing shows up in Elasticsearch.

Here is a session:

gauntlet:~# /opt/filebeat/filebeat-linux-arm -c /etc/filebeat/filebeat.yml -e -d output
2017/01/03 13:24:12.305137 beat.go:267: INFO Home path: [/opt/filebeat] Config path: [/opt/filebeat] Data path: [/opt/filebeat/data] Logs path: [/opt/filebeat/logs]
2017/01/03 13:24:12.305446 beat.go:177: INFO Setup Beat: filebeat; Version: 6.0.0-alpha1-git1744740
2017/01/03 13:24:12.305505 logp.go:219: INFO Metrics logging every 30s
2017/01/03 13:24:12.306409 output.go:167: INFO Loading template enabled. Reading template file: /opt/filebeat/filebeat.template.json
2017/01/03 13:24:12.334737 output.go:178: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /opt/filebeat/filebeat.template-es2x.json
2017/01/03 13:24:12.353874 client.go:120: INFO Elasticsearch url: http://elasticsearch-dev:9200
2017/01/03 13:24:12.354158 outputs.go:106: INFO Activated elasticsearch as output plugin.
2017/01/03 13:24:12.355446 publish.go:291: INFO Publisher name: gauntlet
2017/01/03 13:24:12.356439 async.go:63: INFO Flush Interval set to: 1s
2017/01/03 13:24:12.356553 async.go:64: INFO Max Bulk Size set to: 50
2017/01/03 13:24:12.357595 beat.go:207: INFO filebeat start running.
2017/01/03 13:24:12.357981 registrar.go:85: INFO Registry file set to: /opt/filebeat/data/registry
2017/01/03 13:24:12.358282 registrar.go:106: INFO Loading registrar data from /opt/filebeat/data/registry
2017/01/03 13:24:12.361308 registrar.go:131: INFO States Loaded from registrar: 6
2017/01/03 13:24:12.361561 crawler.go:34: INFO Loading Prospectors: 5
2017/01/03 13:24:12.362314 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/03 13:24:12.363741 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/03 13:24:12.364536 registrar.go:230: INFO Starting Registrar
2017/01/03 13:24:12.364536 sync.go:41: INFO Start sending events to output
2017/01/03 13:24:12.366173 prospector_log.go:57: INFO Prospector with previous states loaded: 2
2017/01/03 13:24:12.368378 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/03 13:24:12.368885 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/01/03 13:24:12.370058 prospector_log.go:57: INFO Prospector with previous states loaded: 1
2017/01/03 13:24:12.371198 crawler.go:46: INFO Loading Prospectors completed. Number of prospectors: 5
2017/01/03 13:24:12.371672 crawler.go:61: INFO All prospectors are initialised and running with 6 states to persist
2017/01/03 13:24:12.371706 prospector.go:111: INFO Starting prospector of type: log
2017/01/03 13:24:12.371740 prospector.go:111: INFO Starting prospector of type: log
2017/01/03 13:24:12.371806 prospector.go:111: INFO Starting prospector of type: log
2017/01/03 13:24:12.371804 prospector.go:111: INFO Starting prospector of type: log
2017/01/03 13:24:12.371866 prospector.go:111: INFO Starting prospector of type: log
2017/01/03 13:24:12.379202 log.go:84: INFO Harvester started for file: /var/log/auth.log
2017/01/03 13:24:12.379883 log.go:84: INFO Harvester started for file: /var/log/syslog
2017/01/03 13:24:12.383826 log.go:84: INFO Harvester started for file: /var/log/mail.log
2017/01/03 13:24:17.438123 client.go:652: INFO Connected to Elasticsearch version 5.1.1
2017/01/03 13:24:17.438358 output.go:214: INFO Trying to load template for client: http://elasticsearch-dev:9200
2017/01/03 13:24:17.440716 output.go:235: INFO Template already exists and will not be overwritten.
2017/01/03 13:24:17.612737 single.go:150: DBG  send completed
2017/01/03 13:24:17.759963 single.go:150: DBG  send completed
2017/01/03 13:24:27.503592 single.go:150: DBG  send completed
2017/01/03 13:24:37.521485 single.go:150: DBG  send completed
2017/01/03 13:24:42.307568 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=4 libbeat.es.publish.read_bytes=2107 registar.states.current=6 filebeat.harvester.running=3 libbeat.es.publish.write_bytes=39767 libbeat.publisher.published_events=85 libbeat.es.published_and_acked_events=85 publish.events=94 filebeat.harvester.open_files=3 registrar.writes=3 registrar.states.update=94 filebeat.harvester.started=3
2017/01/03 13:24:42.498645 single.go:150: DBG  send completed
2017/01/03 13:24:47.458720 single.go:150: DBG  send completed

Here is the YAML:

gauntlet:~# cat /etc/filebeat/filebeat.yml 
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
      input_type: log
      document_type: auth
      scan_frequency: 1s

    -
      paths:
        - /var/log/apache2/access.log
      input_type: log
      document_type: apache_access
      scan_frequency: 1s

    -
      paths:
        - /var/log/apache2/error.log
      input_type: log
      document_type: apache_error
      scan_frequency: 1s

    -
      paths:
        - /var/log/mail.log
      input_type: log
      document_type: mail
      scan_frequency: 1s

    - paths:
        - /var/log/syslog
      input_type: log
      document_type: syslog
      scan_frequency: 5s

output:
  elasticsearch:
    hosts: ["elasticsearch-dev:9200"]
    index: "filebeat"

#  logstash:
#    # The Logstash hosts
#    hosts: ["elasticsearch-dev:5044"]
#    index: "filebeat"

What might be wrong in my Raspberry Pi setup? Like I said, the CentOS machines work fine.

Hi @jarif, the log looks good to me. Can you please share the output of the following command:
curl http://elasticsearch-dev:9200/filebeat/_stats/docs?pretty=true

gauntlet:~# curl http://elasticsearch-dev:9200/filebeat/_stats/docs?pretty=true
{
  "_shards" : {
    "total" : 10,
    "successful" : 5,
    "failed" : 0
  },
  "_all" : {
    "primaries" : {
      "docs" : {
        "count" : 236329,
        "deleted" : 0
      }
    },
    "total" : {
      "docs" : {
        "count" : 236329,
        "deleted" : 0
      }
    }
  },
  "indices" : {
    "filebeat" : {
      "primaries" : {
        "docs" : {
          "count" : 236329,
          "deleted" : 0
        }
      },
      "total" : {
        "docs" : {
          "count" : 236329,
          "deleted" : 0
        }
      }
    }
  }
}

Looks like there are entries in the index. How did you determine that Elasticsearch is not getting events from your Raspberry Pi?

I don't see anything on the Kibana screen except a constant flow of messages from my CentOS machines. Nothing from "gauntlet"...

Do you use different indices, or do you write into the same index? Maybe you have some filter in Kibana for your CentOS machines?
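
A quick way to check which indices actually exist is the _cat API (hostname taken from your config above):

curl http://elasticsearch-dev:9200/_cat/indices?v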

Not that I am aware of. I just created the ELK stack using https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7 and then installed the beat on my Raspberry Pi using http://ict.renevdmark.nl/2016/07/05/elastic-beats-on-raspberry-pi/

I have not done anything in Kibana besides that. The only index I am aware of is "filebeat".

Can you post a JSON example from Kibana?

That means I have to learn to use the dev tools now. ;) All right.

If you open a message in Kibana, there is a JSON tab which shows the raw data.

Ahm, yes.

{
  "_index": "filebeat-2017.01.03",
  "_type": "syslog",
  "_id": "AVlk-Gn3wm5zYvOseB0f",
  "_score": null,
  "_source": {
    "@timestamp": "2017-01-03T15:35:15.129Z",
    "beat": {
      "hostname": "sunderland.fredriksson.dy.fi",
      "name": "sunderland.fredriksson.dy.fi"
    },
    "count": 1,
    "fields": null,
    "input_type": "log",
    "message": "Jan  3 17:35:09 sunderland guardrail_agent: [03 Jan 2017 05:35:09pm EET][DEBUG]   []: Polling secure for tasks",
    "offset": 1670136,
    "source": "/var/log/messages",
    "type": "syslog"
  },
  "fields": {
    "@timestamp": [
      1483457715129
    ]
  },
  "sort": [
    1483457715129
  ]
}

Hmm, this works in the dev tools!

GET /_search
{
    "query": {
        "query_string" : {
            "default_field" : "message",
            "query" : "gauntlet"
        }
    }
}

It returns:

{
  "took": 9,
  "timed_out": false,
  "_shards": {
    "total": 16,
    "successful": 16,
    "failed": 0
  },
  "hits": {
    "total": 121856,
    "max_score": 0.82289654,
    "hits": [
      {
        "_index": "filebeat",
        "_type": "mail",
        "_id": "AVlkUfdNwm5zYvOsdYWp",
        "_score": 0.82289654,
        "_source": {
          "@timestamp": "2017-01-03T12:33:25.722Z",
          "beat": {
            "hostname": "gauntlet",
            "name": "gauntlet",
            "version": "6.0.0-alpha1-git1744740"
          },
          "input_type": "log",
          "message": "Jan  3 14:33:22 gauntlet dovecot: imap(jarif): Warning: Fixed a duplicate: /home/jarif/Maildir/.Confirmed-SPAM/cur/1483321241.M110582P25566.gauntlet,S=5633,W=5752:2,Sd -> 1483446802.M72959P14737.gauntlet,S=5633,W=5752",
          "offset": 8573164,
          "source": "/var/log/mail.log",
          "type": "mail"
        }
      },

But I can't put together a working query in the Kibana console (or whatever that UI is called).

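For reference, a search scoped to the index and host from your JSON also works in the console (a minimal sketch; match is used so it works whether beat.hostname is mapped as keyword or text):

GET /filebeat/_search
{
  "query": {
    "match": { "beat.hostname": "gauntlet" }
  }
}

That said, the root cause is elsewhere:
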
You configured the index to be filebeat. You probably wanted filebeat-%{+yyyy.MM.dd} to get daily indices like filebeat-2017.01.03. Note how the JSON you posted from Kibana shows _index: filebeat-2017.01.03 for your CentOS host, while the gauntlet hit shows a plain _index: filebeat. The filebeat-* index pattern from that tutorial matches the daily indices but not a bare index named filebeat, which is why your Raspberry Pi events never show up in Kibana.

Just remove the index: "filebeat" option.
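
With it removed, the output section would look like this (Filebeat then falls back to its default daily index, filebeat-YYYY.MM.DD):

output:
  elasticsearch:
    hosts: ["elasticsearch-dev:9200"]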

Thanks! It works now. Oddly, the CentOS configs do have that index: filebeat setting, and they still work.

But I'm happy now.

Probably you are using different Filebeat versions. Older versions did not support a configurable date, so index: filebeat implicitly meant filebeat-yyyy.MM.dd.
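
If you ever want to keep an explicit index option on the newer version, the date pattern has to be spelled out (a sketch of the equivalent config):

output:
  elasticsearch:
    hosts: ["elasticsearch-dev:9200"]
    index: "filebeat-%{+yyyy.MM.dd}"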


👍 good to know 🙂
