No non-zero metrics in the last 30s

Hi guys,

I am getting non-zero metrics:

filebeat_1 | 2018/04/30 08:54:50.988423 beat.go:177: INFO Setup Beat: filebeat; Version: 5.2.1
filebeat_1 | 2018/04/30 08:54:50.988521 logstash.go:90: INFO Max Retries set to: 3
filebeat_1 | 2018/04/30 08:54:50.988725 outputs.go:106: INFO Activated logstash as output plugin.
filebeat_1 | 2018/04/30 08:54:50.988759 outputs.go:106: INFO Activated console as output plugin.
filebeat_1 | 2018/04/30 08:54:50.988951 publish.go:291: INFO Publisher name: d96352b188ef
filebeat_1 | 2018/04/30 08:54:50.989464 async.go:63: INFO Flush Interval set to: 1s
filebeat_1 | 2018/04/30 08:54:50.989479 async.go:64: INFO Max Bulk Size set to: 2048
filebeat_1 | 2018/04/30 08:54:50.989535 async.go:63: INFO Flush Interval set to: 1s
filebeat_1 | 2018/04/30 08:54:50.989551 async.go:64: INFO Max Bulk Size set to: 2048
filebeat_1 | 2018/04/30 08:54:50.989709 beat.go:207: INFO filebeat start running.
filebeat_1 | 2018/04/30 08:54:50.989728 logp.go:219: INFO Metrics logging every 30s
filebeat_1 | 2018/04/30 08:54:50.989756 registrar.go:68: INFO No registry file found under: /data/registry. Creating a new registry file.
filebeat_1 | 2018/04/30 08:54:50.990014 registrar.go:106: INFO Loading registrar data from /data/registry
filebeat_1 | 2018/04/30 08:54:50.990054 registrar.go:123: INFO States Loaded from registrar: 0
filebeat_1 | 2018/04/30 08:54:50.990071 crawler.go:34: INFO Loading Prospectors: 1
filebeat_1 | 2018/04/30 08:54:50.990207 crawler.go:48: INFO Loading Prospectors completed. Number of prospectors: 1
filebeat_1 | 2018/04/30 08:54:50.990220 crawler.go:63: INFO All prospectors are initialised and running with 0 states to persist
filebeat_1 | 2018/04/30 08:54:50.990232 registrar.go:236: INFO Starting Registrar
filebeat_1 | 2018/04/30 08:54:50.990259 sync.go:41: INFO Start sending events to output
filebeat_1 | 2018/04/30 08:54:50.990290 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
filebeat_1 | 2018/04/30 08:54:50.990312 prospector.go:112: INFO Starting prospector of type: stdin
filebeat_1 | 2018/04/30 08:54:50.990358 log.go:84: INFO Harvester started for file: -
filebeat_1 | Processing test.1.31tk5935ub2ryb4fpvwi62651 ...
filebeat_1 | Processing resources_elk_1 ...
filebeat_1 | Processing myfilebeat_filebeat_1 ...
filebeat_1 | 2018/04/30 08:55:20.990199 logp.go:230: INFO Non-zero metrics in the last 30s: filebeat.harvester.running=1 registrar.writes=1 filebeat.harvester.started=1
filebeat_1 | 2018/04/30 08:55:50.990080 logp.go:232: INFO No non-zero metrics in the last 30s

Please help me!

Hi,

This is an INFO-level monitoring message; I'm not able to see any error here.

Could you please check whether your Logstash instance is working or not, share its logs, and give some more input about your use case for further investigation?

Logs: /var/log/logstash
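A quick way to see whether events are reaching Logstash at all (a sketch, not from your config; the stdout output with the rubydebug codec prints every event it receives) is to temporarily add:

```conf
output {
  # Temporary debugging output: prints each event Logstash receives
  stdout { codec => rubydebug }
}
```

If nothing is printed while Filebeat is running, the events are not arriving at Logstash in the first place.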

Thanks,

Hi @Harsh Bajaj

Thanks for the reply. Here is my Logstash log:

elk_1 | ==> /var/log/logstash/logstash-plain.log <==
elk_1 | [2018-04-30T10:35:22,379][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/opt/logstash/modules/fb_apache/configuration"}
elk_1 | [2018-04-30T10:35:22,386][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/opt/logstash/modules/netflow/configuration"}
elk_1 | [2018-04-30T10:35:22,391][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
elk_1 | [2018-04-30T10:35:22,393][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
elk_1 | [2018-04-30T10:35:22,426][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"18d43a76-2da4-4776-b4f9-c90678ee5831", :path=>"/opt/logstash/data/uuid"}
elk_1 | [2018-04-30T10:35:23,309][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
elk_1 | [2018-04-30T10:35:23,310][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
elk_1 | [2018-04-30T10:35:23,488][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
elk_1 | [2018-04-30T10:35:23,489][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
elk_1 | [2018-04-30T10:35:23,649][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
elk_1 | [2018-04-30T10:35:24,284][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
elk_1 | [2018-04-30T10:35:24,341][INFO ][logstash.pipeline ] Pipeline main started
elk_1 | [2018-04-30T10:35:24,360][INFO ][org.logstash.beats.Server] Starting server on port: 5044
elk_1 | [2018-04-30T10:35:24,410][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

My Logstash config:

input {
  beats {
    port => 5044
    type => "filebeat-docker-logs"
    codec => json
  }
}

filter {
  if [docker][name] =~ "relaxed_goldwasser" or [docker][image] =~ "linux_tweet_app" {
    grok {
      break_on_match => false
      match => [ "message", "(?<linux_tweet_app>(?<=\s\sMS:\s)([\S]*))" ]
      tag_on_failure => []
    }
  }
}

filter {
  if [docker][name] =~ "elastic_wing" or [docker][name] =~ "alpine" {
    grok {
      break_on_match => false
      match => [ "message", "(?(?<=\s|\sAPI:\s)([\S]*))" ]
      tag_on_failure => []
    }
  }
}
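As a side note, the pattern in the second grok filter appears to be missing its capture-group name (it may have been swallowed by the forum's HTML rendering, since the first filter's pattern has the form `(?<linux_tweet_app>...)`). For illustration only, a complete named capture would look like this, where `api_field` is a hypothetical placeholder name, not from the original post:

```conf
grok {
  break_on_match => false
  # "api_field" is a placeholder capture name, not from the original config
  match => [ "message", "(?<api_field>(?<=\s|\sAPI:\s)([\S]*))" ]
  tag_on_failure => []
}
```

Without a capture name, Logstash will reject the pattern at pipeline startup.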

Hi,

Thanks for sharing this. Could you please explain a bit more what the issue is, as your logs show that Beats is working fine?

I'm not able to understand what you are trying to do and what the issue is.

Thanks,

Hi @Harsh Bajaj

The problem is that I am not getting the logs in Elasticsearch; I am getting something like this:

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana K-6kAvgpSZen1G-IJmf20g 1 1 1 0 3.2kb 3.2kb

I think this is the default Kibana index, right?

myoutput.conf:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
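Since this output builds the index name from `[@metadata]`, it can help to see what the metadata actually contains at output time; the default rubydebug output hides it. A temporary debugging output like this (a sketch, using the codec's documented `metadata` option) shows it:

```conf
output {
  # Print events including the normally hidden @metadata fields
  stdout { codec => rubydebug { metadata => true } }
}
```

If events were arriving, you would expect indices named like `filebeat-2018.04.30` in addition to `.kibana`.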

Hi,

Yes, this is the default index.

Could you please paste your filebeat.yml file here so that we can check the harvester and log file configuration?

Config file path: /etc/filebeat/filebeat.yml

Thanks,
Harsh Bajaj

Hi @Harsh Bajaj,
This is my filebeat.yml :

filebeat:
  prospectors:
    - paths:
        - /var/log/*.log
      input_type: "stdin"
      document_type: log
      multiline.pattern: '^[[:space:]]'
      multiline.negate: false
      multiline.match: after
  registry_file: ./data/registry

output:
  console:
    pretty: true
  logstash:
    hosts: ["${LOGSTASH_HOST:22.99.132.1}:${LOGSTASH_PORT:5044}"]
    index: filebeat
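One thing stands out in this config: the prospector sets `input_type: "stdin"` while also listing `paths`, which matches the `Harvester started for file: -` line in the Filebeat log above — only stdin is being harvested, and the `paths` entry is ignored. If the intent is to ship the files under `/var/log`, a sketch of a log-type prospector (assuming Filebeat 5.x syntax, not taken from the thread) would be:

```yaml
filebeat:
  prospectors:
    # "log" reads the listed files; "stdin" ignores paths entirely
    - input_type: log
      paths:
        - /var/log/*.log
      document_type: log
      multiline.pattern: '^[[:space:]]'
      multiline.negate: false
      multiline.match: after
```

With `input_type: log`, the registry would also start tracking file offsets, so `registrar.states` should become non-zero.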

docker-compose:

version: '3.1'

services:
  filebeat:
    image: filebeat:5
    environment:
      LOGSTASH_HOST: "23.55.34.7"
      LOGSTASH_PORT: "5044"
      STDIN_CONTAINER_LABEL: "all"
    networks:
      - mynet
    volumes:
      - "/root/myfilebeat/filebeat.yml:/etc/filebeat/filebeat.yml:rw"
      - "/var/run/docker.sock:/tmp/docker.sock:ro"
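For the `mynet` network reference to resolve, a compose file of version 3.x also needs a top-level `networks` section; it is not shown in the post, so the fragment below is an assumed sketch:

```yaml
# Assumed top-level definition; not part of the posted compose file
networks:
  mynet:
    driver: bridge
```

Without it, `docker-compose up` fails with an "undefined network" error before any containers start.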

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.