Filebeat dashboard: No results found

Hello,
I have two servers (one for ELK, the other for Filebeat), both running CentOS 7.
I installed ELK 5.4 (tar installation) on the first server and Filebeat 5.4 (tar installation) on the second. I can see the log index in Kibana Discover, but when I switch to the Visualize or Dashboard tab it shows "No results found".
I have also installed the X-Pack plugin for Elasticsearch and Kibana, but set xpack.security.enabled: false for both.

***my logstash (input/filter/output) file "/usr/local/logstash/logstash.conf"
###################### <! INPUT !> ###############################
input {
  beats {
    port => 5044
  }

  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog"
    # Wildcards work, here :)
    path => [ "/var/log/message", "/var/log/secure" ]
    start_position => "beginning"
  }

  file {
    type => "apache"
    path => [ "/usr/local/apache/logs/*/*.log", "/usr/local/apache/logs/*_log" ]
    start_position => "beginning"
  }
}
####################### <! FILTER!> ##############################
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  stdout { codec => rubydebug }

  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

***my filebeat conf file (/usr/local/filebeat/filebeat.yml)
#========================== Modules configuration ============================
#filebeat.modules:

#------------------------------- System Module -------------------------------
modules:
- name: mysql
- name: syslog

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/messages
    - /var/log/secure
    #- c:\programdata\elasticsearch\logs\*
- input_type: log
  paths:
    - /usr/local/apache/logs/*/*.log
    - /usr/local/apache/logs/*_log
  fields:
    apache: true
  fields_under_root: true

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["ELK_server_IP:5044"]

Could you please help me? I have spent three days searching for this issue but didn't find anything of value. :frowning:

  1. Please properly format logs and configuration files with the </> button. The post is pretty unreadable.

  2. Are you using Filebeat modules? Filebeat modules only operate in conjunction with Elasticsearch right now. With Filebeat sending to Logstash, you currently cannot take advantage of Filebeat modules.

Hello Steffens,
Thanks for your reply.
Yes, I tried to install the Filebeat modules, but they gave me an error because I enabled the Logstash output in the filebeat.yml file. Might this be the cause? If so, how can I remove them?

Regarding the log format, sorry for that... kindly find it below.

***my logstash (input/filter/output) file "/usr/local/logstash/logstash.conf"

################

input {
  beats {
    port => 5044
  }

  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog"
    path => [ "/var/log/message", "/var/log/secure" ]
    start_position => "beginning"
  }

  file {
    type => "apache"
    path => [ "/usr/local/apache/logs/*/*.log", "/usr/local/apache/logs/*_log" ]
    start_position => "beginning"
  }
}

#################

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  stdout { codec => rubydebug }

  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

***my filebeat conf file (/usr/local/filebeat/filebeat.yml)

#========================== Modules configuration ============================
filebeat.modules:

#------------------------------- System Module -------------------------------
modules:
- name: mysql
- name: syslog

#=========================== Filebeat prospectors =============================
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log
  paths:
    - /var/log/messages
    - /var/log/secure
- input_type: log
  paths:
    - /usr/local/apache/logs/*/*.log
    - /usr/local/apache/logs/*_log
  fields:
    apache: true
  fields_under_root: true

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["ELK_server_IP:5044"]

Any updates regarding my issue?

If you want to use Filebeat modules, you cannot use the Logstash output; you have to use the Elasticsearch output.

Your Logstash configuration is overly complex and seems to mix things that don't really fit together... e.g. the elasticsearch output configuration doesn't work nicely with the other inputs you've defined in Logstash.

From all these configs it's not clear to me what exactly you're planning to do, and why you insist on using Filebeat modules with Logstash.
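For reference, a minimal filebeat.yml for the modules-with-Elasticsearch setup could look like the sketch below (the module names and the Elasticsearch host are examples only, adapt them to your setup):

```yaml
#=========================== Modules =========================================
filebeat.modules:
- module: system
- module: mysql

#---------------------------- Elasticsearch output ---------------------------
output.elasticsearch:
  # Example host; point this at your Elasticsearch instance.
  hosts: ["localhost:9200"]
```

With this setup the module's ingest pipelines run inside Elasticsearch, so no Logstash filters are needed.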

Thanks, Steffens. I have removed the Filebeat modules.
Actually, I need to use Beats >> Logstash >> Elasticsearch >> Kibana.

Regarding the first Beat I am planning to use (Filebeat): if I remove my grok filter, how can I visualize my logs (messages, Apache, MySQL) in Kibana? I mean, how can I use the dashboards shipped with Beats (like the one shown below)?

These dashboards are made to work with the Filebeat modules. By default, the dashboards assume all data is written to the filebeat index. If you translate the grok filters from the Filebeat modules to Logstash and have Logstash write similar events to Elasticsearch, the dashboards should pick up your data.

The filebeat modules source code can be found at: https://github.com/elastic/beats/tree/master/filebeat/module

For example, see the mysql error log module; its ingest directory contains the ingest node pipeline definition.

As each module uses a different ingest pipeline, you might want to add some additional fields to the events (use the fields setting in Filebeat) for filtering in Logstash, e.g.:

filebeat.prospectors:
- type: log
  fields.logtype: "mysqlerror"
  paths:
    - /var/log/mysql/error.log*
    - /var/log/mysqld.log*
  exclude_files: [".gz$"]
- type: log
  fields.logtype: "mysqlslow"
  paths:
    - /var/log/mysql/mysql-slow.log*
    - /var/lib/mysql/{{.builtin.hostname}}-slow.log
  exclude_files: ['.gz$']
  multiline:
    pattern: '^# User@Host: '
    negate: true
    match: after
  exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*']   # Exclude the header
...

I derived this Filebeat configuration by expanding the Filebeat module template myself.

In Logstash, one could do the filtering/processing like:

input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][logtype] == "mysqlerr" {
    ... # translate pipeline from https://github.com/elastic/beats/blob/master/filebeat/module/mysql/error/ingest/pipeline.json
  }
  if [fields][logtype] == "mysqlslow" {
    ...  # translate pipeline from https://github.com/elastic/beats/blob/master/filebeat/module/mysql/slowlog/ingest/pipeline.json
  }
}

Translating pipelines can be non-trivial, though (e.g. a script filter using Painless must be replaced with Ruby filters and such). In the Logstash master branch I found a script doing some simple translation (not perfect, potentially incomplete): https://github.com/elastic/logstash/blob/master/bin/ingest-convert.sh

The current integration of Beats modules and Logstash is far from perfect. All in all, we strive for full integration of modules with Logstash, but we are just not there yet (I have no idea when we will be there).

Hi Steffens,

Thanks for your support and reply.
Actually, I tried to read the mentioned links to create a custom Logstash filter, but I can't understand them :frowning: .
So if I use Filebeat and its modules with Elasticsearch directly, without Logstash, the dashboards will pick up my data without any custom configuration, right?
If yes, could you please recommend the Beats, Elasticsearch, and Kibana versions I should use, and whether the tar or rpm installation is recommended?

Also, I noticed that Filebeat harvests only one log file. For example, with prospectors defined for /var/log/message, /var/log/secure, and /usr/local/apache/log/*/*.log, the Filebeat harvester opens only one file (e.g. messages), so in the filebeat index I can see the logs for only one file.
In the log file I can't see any errors except:
2017-06-18T00:02:30+01:00 ERR Failed to publish events caused by: write tcp ELK_IP:49410->ELK_IP:5044: write: connection reset by peer
although the client can reach the server via port 5044:
[root@s1 ~]# telnet ELK_IP 5044
Trying ELK_IP...
Connected to ELK_IP.
Escape character is '^]'.

filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure
    - /var/log/message
- input_type: log
  paths:
    - "/usr/local/apache/log/*_log"
    - "/usr/local/apache/log/*/*.log"
- input_type: log
  paths:
    - /var/log/mysqld.log
  include_lines: ['^ERR', '^WARN']

output.logstash:
  # The Logstash hosts
  hosts: ["88.208.206.80:5044"]

logstash.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

So if I use Filebeat and its modules with Elasticsearch directly, without Logstash, the dashboards will pick up my data without any custom configuration, right?

Yes, if the files are at the assumed locations and the log format has not been changed in the services' configuration.

If yes, could you please recommend the Beats, Elasticsearch, and Kibana versions I should use, and whether the tar or rpm installation is recommended?

Modules are still a very new feature (still in beta) and are still being improved upon. I'd use the most recent versions of the complete stack, currently version 5.4.1.
Personally, I'd prefer rpm over tar, so I can use the system's package management tools. Just for testing/playing with Beats, tar files are OK, though.

2017-06-18T00:02:30+01:00 ERR Failed to publish events caused by: write tcp ELK_IP:49410->ELK_IP:5044: write: connection reset by peer

Logstash is closing (supposedly) idle connections. That is, here it is Logstash closing the connection. Depending on the timing of when this occurs, it can be OK or bad (bad if it happens while Beats is waiting for an ACK). Filebeat will automatically reconnect and send again. Updating Logstash to the most recent version and increasing client_inactivity_timeout in the beats input normally helps. A bug in older Logstash versions did sometimes close connections while Filebeat was waiting for an ACK. With this fixed, the error is tolerable, as Filebeat will reconnect and resend the events (no data loss).

Actually, I tried to read the mentioned links to create a custom Logstash filter, but I can't understand them.

The link points to a script in the Logstash development branch. You will need a development environment with Java (and maybe NodeJS) to build and run this script. Still, the script cannot translate all filters in the pipeline configuration, and it's a non-trivial task. I'd recommend using the ingest node pipeline. If you really need to use Logstash, but want to use the ingest pipeline from ES as well (it's somewhat inefficient, as it duplicates some effort), here is another trick you can try:

filebeat.prospectors:
- type: log
  fields:
    logtype: "mysqlerror"
    pipeline: "mysqlerror"
  paths:
    - /var/log/mysql/error.log*
    - /var/log/mysqld.log*
  exclude_files: [".gz$"]
- type: log
  fields:
    logtype: "mysqlslow"
    pipeline: "mysqlslow"
  paths:
    - /var/log/mysql/mysql-slow.log*
    - /var/lib/mysql/{{.builtin.hostname}}-slow.log
  exclude_files: ['.gz$']
  multiline:
    pattern: '^# User@Host: '
    negate: true
    match: after
  exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*']   # Exclude the header

In Logstash:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    pipeline => "%{[fields][pipeline]}"
  }
}

Here we configure the fields.pipeline event field in Filebeat and use that field in the Logstash output to select the ingest pipeline configured in Filebeat. The pipeline itself you will have to install yourself, using curl on the module's pipeline definition (shipped with Filebeat). Look for the ingest directories in the module's files to find the JSON file defining the pipeline. With curl, the file can be installed into ES via the ingest API as is. Again, this is a not-so-nice workaround. Better to connect Filebeat directly to Elasticsearch if you want to use the modules and the dashboards.
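As a sketch of that manual installation step (the module path, pipeline id, and host below are examples, not taken from your setup):

```python
"""Sketch: install a Filebeat module's ingest pipeline into Elasticsearch by
hand. The module ships its pipeline as a JSON file (look for the module's
ingest/ directory); we load it and build the equivalent curl command for the
PUT _ingest/pipeline API."""
import json


def load_pipeline(path):
    # The pipeline.json body can be sent to Elasticsearch as is.
    with open(path) as f:
        return json.load(f)


def install_command(pipeline_id, path, es_host="localhost:9200"):
    # The id must match whatever the Logstash output's pipeline option
    # resolves to, e.g. the value set in fields.pipeline in filebeat.yml.
    return ("curl -XPUT 'http://%s/_ingest/pipeline/%s' "
            "-H 'Content-Type: application/json' -d @%s"
            % (es_host, pipeline_id, path))


# Example (path is an assumption, relative to the filebeat distribution):
# print(install_command("mysqlerror", "module/mysql/error/ingest/pipeline.json"))
```

This only prepares the request; you still run the resulting curl command against your own Elasticsearch host.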

Hi Steffens,
Really, thanks for your reply.
For the dashboard issue: I installed Elasticsearch, Filebeat + modules, and Kibana on a test machine, and some dashboards worked fine.

For the harvester issue (Filebeat harvests only one log file; for example, with prospectors defined for /var/log/message, /var/log/secure, and /usr/local/apache/log/*/*.log, the Filebeat harvester opens only one file, e.g. messages, so in the filebeat index I can see the logs for only one file), I actually tried two scenarios.

1st: increase the client_inactivity_timeout in the beats input.
logstash.conf

input {
  beats {
    port => 5044
    client_inactivity_timeout => 120
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

2nd: remove Logstash and redirect the Filebeat output to Elasticsearch directly.

Although I tried both, the issue still exists and I see only one log file in the filebeat index. How can I see all the log files that I defined in filebeat.yml?

Have you checked the source field?

Also check the registry file (it's JSON formatted) for your files being present, and check the file offsets. Maybe Filebeat thinks it has already processed these files? Deleting single entries from the registry file (or the complete registry file) should make Filebeat send the logs again.
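As a sketch, that registry check can be scripted (the registry path is an example; for a tar install it is typically the data/registry file next to the filebeat binary — check the registry_file setting and the logs for the actual location):

```python
"""Sketch: inspect Filebeat's registry file to see which files Filebeat
believes it has already processed, and up to which byte offset."""
import json


def registry_entries(path):
    # The registry is a JSON array of file states; return (source, offset) pairs.
    with open(path) as f:
        states = json.load(f)
    return [(state["source"], state["offset"]) for state in states]


# Example usage (path is an assumption):
# for source, offset in registry_entries("data/registry"):
#     print("%s: read up to byte %d" % (source, offset))
```

If a file you expect is missing here, Filebeat never opened it (check paths and permissions); if it is present with a large offset, Filebeat thinks those lines were already shipped. Stop Filebeat before deleting entries or the whole file, since the in-memory state is written back on shutdown.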

Have you checked the Filebeat logs? Filebeat normally logs a message when it starts reading a file, and when it closes a file due to missing updates.

Hello Steffens,
I hope all is well with you... and thanks for your support :slight_smile:

[ Have you checked the source field? ]
What do you mean by the source field? Do you mean the source files?

[ Also check the registry file (it's JSON formatted), for your files to be present and the file offsets. Maybe filebeat is thinking it did already process these files? Deleting single entries from registry file (or complete registry file), should make filebeat sending the logs again. ]

Where can I find the registry file, and how can I delete it?

[ Have you checked filebeat logs? Filebeat normally logs a message when it starts reading a file and when it closes a file, due to missing updates. ]

Yes, I have checked the logs. Below is part of them:
2017-06-22T10:31:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:31:20+01:00 INFO Harvester started for file: /var/log/secure
2017-06-22T10:31:39+01:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.es.call_count.PublishEvents=1 libbeat.es.publish.read_bytes=339 libbeat.es.publish.write_bytes=557 libbeat.es.published_and_acked_events=1 libbeat.publisher.published_events=1 publish.events=2 registrar.states.update=2 registrar.writes=1
2017-06-22T10:32:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:32:39+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:33:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:33:39+01:00 INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=1 libbeat.es.publish.read_bytes=383 libbeat.es.publish.write_bytes=4189 libbeat.es.published_and_acked_events=11 libbeat.publisher.published_events=11 publish.events=11 registrar.states.update=11 registrar.writes=1
2017-06-22T10:34:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:34:39+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:35:09+01:00 INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=1 libbeat.es.publish.read_bytes=366 libbeat.es.publish.write_bytes=2361 libbeat.es.published_and_acked_events=6 libbeat.publisher.published_events=6 publish.events=6 registrar.states.update=6 registrar.writes=1
2017-06-22T10:35:39+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:36:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:36:39+01:00 INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=2 libbeat.es.publish.read_bytes=727 libbeat.es.publish.write_bytes=4392 libbeat.es.published_and_acked_events=11 libbeat.publisher.published_events=11 publish.events=11 registrar.states.update=11 registrar.writes=2
2017-06-22T10:37:09+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:37:39+01:00 INFO No non-zero metrics in the last 30s
2017-06-22T10:38:09+01:00 INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=1 libbeat.es.publish.read_bytes=335 libbeat.es.publish.write_bytes=558 libbeat.es.published_and_acked_events=1 libbeat.pu

I just found the registry file and removed it. After that, it was recreated with only some of the paths, not all of them, so where are the remaining paths? :frowning:

registry file

[
  {"source": "/var/log/secure", "offset": 1081687, "FileStateOS": {"inode": 9047291, "device": 64768}, "timestamp": "2017-06-21T22:42:18.401330821+01:00", "ttl": -2},
  {"source": "/var/log/mysqld.log", "offset": 164974, "FileStateOS": {"inode": 9044404, "device": 64768}, "timestamp": "2017-06-21T22:42:18.401332698+01:00", "ttl": -2},
  {"source": "/var/log/secure", "offset": 9815004, "FileStateOS": {"inode": 9047512, "device": 64768}, "timestamp": "2017-06-22T11:48:19.388508856+01:00", "ttl": -2}
]

Sorry, I've kind of lost track of this discussion. To avoid mixing two issues into one thread (the dashboard one was quite interesting), can you open another discussion with your full, current Filebeat configuration, logs, and registry file? (Please use the </> button in the editor's toolbar.) In a separate discussion, other users/devs who know more about this problem might join in.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.