Logstash not consuming logs from filebeat

I have looked over all the threads and still have not been able to figure this out.

Here is the output from the Filebeat debug run.

On the Logstash server, when I run tcpdump I can see that the two are communicating.

The logs are working for 4 other servers. This server is a new configuration. The other servers run Filebeat in Docker containers; this server does not.

The logstash-plain.log file doesn't exist, so I don't know how to troubleshoot on the Logstash side.

I'm unsure why Logstash is not consuming the logs that are being sent.

Christine
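
If Logstash is running in the official Docker image, it writes its log to the container's stdout rather than to a logstash-plain.log file, so the container log is usually the place to look on the Logstash side. A minimal sketch, assuming Docker is in use and with a placeholder container name:

# Follow the Logstash container's own log stream (the equivalent of logstash-plain.log)
docker logs -f <logstash-container>

# If the container name is unknown, list the running containers first
docker ps --format '{{.Names}}\t{{.Image}}'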

Hi, the output from the filebeat debug did not come through. Would you mind posting it again?

And could you also post your entire filebeat.yml from the server that is not working, please?

I assume all 5 filebeat servers are sending data to the same Logstash instance?

filebeat.prospectors:

output.logstash:
  hosts: ["logstash:443"]
  ssl.certificate_authorities:
    - /etc/filebeat/certs/ca.crt
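
For reference, a minimal Filebeat 6.x configuration for this kind of setup would normally declare at least one log prospector as well as the Logstash output. The sketch below reuses the path and tags that appear in the debug output further down; everything else is an assumption, not the poster's actual file.

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/www/ncc-web/var/logs/nginx-balancer-access.log   # path taken from the published event below
  tags: ["nginx", "loadbalancer"]                           # tags taken from the published event below
  # the extra fields in the event (domain, platform, fqdn, ...) would come from a
  # fields: section here, omitted because the original file wasn't posted in full

output.logstash:
  hosts: ["logstash:443"]
  ssl.certificate_authorities:
    - /etc/filebeat/certs/ca.crt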
The debug goes by pretty fast:

2018-07-28T14:13:49.843Z INFO input/input.go:122 input ticker stopped
2018-07-28T14:13:49.843Z INFO input/input.go:139 Stopping Input: 2127937883533005066
2018-07-28T14:13:49.843Z DEBUG [publish] pipeline/processor.go:291 Publish event: {
"@timestamp": "2018-07-28T14:13:49.843Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.3.2"
},
"source": "/var/www/ncc-web/var/logs/nginx-balancer-access.log",
"fields": {
"domain": "centos",
"platform": "centos",
"platform_version": "7.3.1611",
"ipaddress": "",
"fqdn": "prod-lb-01.4zeo3zr051putbv31irsugf5ge.bx.internal.cloudapp.net",
"hostname": "prod-lb-01"
},
"prospector": {
"type": "log"
},
"host": {
"name": "prod-lb-01"
},
"offset": 1086210,
"message": "130.39.80.224 - - [28/Jul/2018:06:55:16 +0000] "POST /api/2.0/device/register-domain-device/ HTTP/1.1" 200 872 "-" "Mozilla/5.0"",
"tags": [
"nginx",
"loadbalancer"
],
"input": {
"type": "log"
},
"beat": {
"version": "6.3.2",
"name": "prod-lb-01",
"hostname": "prod-lb-01"
}
}
2018-07-28T14:13:49.843Z DEBUG [harvester] log/harvester.go:484 Stopping harvester for file: /var/www/ncc-web/var/logs/nginx-balancer-access.log
2018-07-28T14:13:49.843Z DEBUG [harvester] log/harvester.go:494 Closing file: /var/www/ncc-web/var/logs/nginx-balancer-access.log
2018-07-28T14:13:49.843Z DEBUG [harvester] log/harvester.go:364 Update state: /var/www/ncc-web/var/logs/nginx-balancer-access.log, offset: 1086339
2018-07-28T14:13:49.843Z DEBUG [harvester] log/harvester.go:505 harvester cleanup finished for file: /var/www/ncc-web/var/logs/nginx-balancer-access.log
2018-07-28T14:13:49.843Z DEBUG [publish] pipeline/client.go:131 client: closing acker
2018-07-28T14:13:49.843Z DEBUG [publish] pipeline/client.go:133 client: done closing acker
2018-07-28T14:13:49.843Z DEBUG [publish] pipeline/client.go:137 client: cancelled 0 events
2018-07-28T14:13:49.843Z INFO crawler/crawler.go:135 Crawler stopped
2018-07-28T14:13:49.843Z INFO registrar/registrar.go:339 Stopping Registrar
2018-07-28T14:13:49.843Z INFO registrar/registrar.go:265 Ending Registrar
2018-07-28T14:13:49.843Z DEBUG [registrar] registrar/registrar.go:383 Write registry file: /usr/share/filebeat/bin/data/registry
2018-07-28T14:13:49.855Z DEBUG [registrar] registrar/registrar.go:376 Registry file updated. 1 states written.
2018-07-28T14:13:49.857Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":{"ms":75}},"total":{"ticks":190,"time":{"ms":200},"value":190},"user":{"ticks":120,"time":{"ms":125}}},"info":{"ephemeral_id":"c0aa770d-c25d-4a32-9747-efd7ddaf4e77","uptime":{"ms":604}},"memstats":{"gc_next":6185088,"memory_alloc":3889264,"memory_total":17445664,"rss":15585280}},"filebeat":{"events":{"active":1776,"added":1778,"done":2},"harvester":{"closed":1,"open_files":0,"running":0,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":1776,"filtered":2,"published":1776,"total":1778}}},"registrar":{"states":{"current":1,"update":2},"writes":{"success":3,"total":3}},"system":{"cpu":{"cores":2},"load":{"1":0.1,"15":0.05,"5":0.04,"norm":{"1":0.05,"15":0.025,"5":0.02}}}}}}
2018-07-28T14:13:49.857Z INFO [monitoring] log/log.go:133 Uptime: 606.436828ms
2018-07-28T14:13:49.857Z INFO [monitoring] log/log.go:110 Stopping metrics logging.

Did you see my reply with the log output?

Hi,

Thanks for the log. I looked at it and nothing jumps out as odd.

Could you stop Filebeat on the "bad" server, then start it up and post the log output right after it starts up until the first "publish event" message in the log?

Thanks,

Shaunak

As it turns out, it is working, but only sporadically. There seems to be a pattern, but for the life of me I can't figure out why it isn't sending them out continuously. Any ideas where I need to look?

Oh, that's interesting. Any chance you have logs (either from Filebeat or LS or both) for the downtime periods? Maybe there's something in there that'll give us a clue.

I find it interesting too. But where do I look for the downtime? I've inherited this environment.

I looked at the other servers, all web servers. And clearly I made some change that is causing this downtime... but I don't know where to look... I recently made changes to RabbitMQ on all the web servers, but not on the load balancer so that can't be it.

Where would I look for whether or not I have "logs (either from Filebeat or LS or both) for the downtime periods"?

This is such a weird thing.

Christine

It looks like Filebeat is sending them to Logstash, but then I can't tell what Logstash is doing with them. On further inspection, it appears to be only the load balancer logs and not the web server logs.

As I stated before, I have no way to look at any logs on the Elasticsearch side since there isn't one. Here is the conf file.

[root@d85aea0a1d52 logstash]# more /usr/share/logstash/pipeline/logstash.conf
input {
  beats {
    port => 5400
    ssl => true
    ssl_key => '/usr/share/logstash/config/certs/service.key'
    ssl_certificate => '/usr/share/logstash/config/certs/service.crt'
  }
}

filter {
  if "nginx" in [tags] {
    grok {
      patterns_dir => [ "/opt/patterns" ]
      match => [ "message", "%{NGINX_LOG}" ]
    }
  }

  if "loadbalancer" in [tags] {
    grok {
      patterns_dir => [ "/opt/patterns" ]
      match => [ "message", "%{LB_NGINX_LOG}" ]
    }
  }

  if "webapp" in [tags] {
    grok {
      patterns_dir => [ "/opt/patterns" ]
      match => [ "message", "%{WEBAPP_LOG}" ]
    }
    date {
      match => [ "timestamp", "ISO8601", "YYYY-MM-dd HH:mm:ss,SSS" ]
      target => "@timestamp"
    }
  }

  if "smg" in [tags] {
    grok {
      patterns_dir => [ "/opt/patterns" ]
      match => [ "message", "%{SMG_LOG}" ]
    }
    date {
      match => [ "timestamp", "ISO8601", "YYYY-MM-dd HH:mm:ss,SSS" ]
      target => "@timestamp"
    }
  }

  mutate {
    convert => { "response_code" => "integer" }
    convert => { "request_time" => "float" }
    convert => { "request_length" => "integer" }
    convert => { "status" => "integer" }
    convert => { "bytes_sent" => "integer" }
    convert => { "body_bytes_sent" => "integer" }
    convert => { "upstream_status" => "integer" }
    convert => { "upstream_response_time" => "float" }
    convert => { "upstream_connect_time" => "float" }
    convert => { "upstream_header_time" => "float" }
  }
}

output {
  elasticsearch {
    hosts => [ "https://elasticsearch:443" ]
    cacert => '/usr/share/logstash/config/certs/ca.crt'
    user => 'elastic'
    password => ''
  }
}

Is there anything I can check on the logstash container or elasticsearch to figure out what is going on?
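
One generic way to see what Logstash does with the events it receives is to add a temporary stdout output next to the existing elasticsearch output and watch the container's stdout. This is a debugging sketch, not part of the pipeline above; the container name is a placeholder.

output {
  # Temporary debug output: prints every event that reaches the output stage,
  # with all of its fields, to the container's stdout.
  # Watch it with: docker logs -f <logstash-container>
  stdout {
    codec => rubydebug
  }

  # ...keep the existing elasticsearch output unchanged...
}

If events show up here but never appear in Elasticsearch, the problem is on the output side; if nothing shows up during the quiet periods, the events are not reaching Logstash at all.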

Here is a tail of the Filebeat log during the downtime, which is happening right now.

2018-07-31T15:39:36.842Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5210,"time":{"ms":15}},"total":{"ticks":22890,"time":{"ms":103},"value":22890},"user":{"ticks":17680,"time":{"ms":88}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6210013}},"memstats":{"gc_next":8073024,"memory_alloc":4044840,"memory_total":7480296048}},"filebeat":{"events":{"active":-11,"added":177,"done":188},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":188,"batches":21,"total":188},"read":{"bytes":735},"write":{"bytes":23520}},"pipeline":{"clients":3,"events":{"active":0,"published":177,"total":177},"queue":{"acked":188}}},"registrar":{"states":{"current":3,"update":188},"writes":{"success":21,"total":21}},"system":{"load":{"1":0.14,"15":0.06,"5":0.07,"norm":{"1":0.07,"15":0.03,"5":0.035}}}}}}
2018-07-31T15:40:06.843Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5230,"time":{"ms":29}},"total":{"ticks":22980,"time":{"ms":96},"value":22980},"user":{"ticks":17750,"time":{"ms":67}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6240014}},"memstats":{"gc_next":5062416,"memory_alloc":2537656,"memory_total":7511785656}},"filebeat":{"events":{"active":8,"added":179,"done":171},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":171,"batches":20,"total":171},"read":{"bytes":700},"write":{"bytes":23201}},"pipeline":{"clients":3,"events":{"active":8,"published":179,"total":179},"queue":{"acked":171}}},"registrar":{"states":{"current":3,"update":171},"writes":{"success":20,"total":20}},"system":{"load":{"1":0.08,"15":0.06,"5":0.06,"norm":{"1":0.04,"15":0.03,"5":0.03}}}}}}
2018-07-31T15:40:36.842Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5260,"time":{"ms":25}},"total":{"ticks":23080,"time":{"ms":98},"value":23080},"user":{"ticks":17820,"time":{"ms":73}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6270013}},"memstats":{"gc_next":7986656,"memory_alloc":5549456,"memory_total":7546075584}},"filebeat":{"events":{"active":-8,"added":127,"done":135},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":135,"batches":22,"total":135},"read":{"bytes":770},"write":{"bytes":22265}},"pipeline":{"clients":3,"events":{"active":0,"published":127,"total":127},"queue":{"acked":135}}},"registrar":{"states":{"current":3,"update":135},"writes":{"success":22,"total":22}},"system":{"load":{"1":0.05,"15":0.05,"5":0.06,"norm":{"1":0.025,"15":0.025,"5":0.03}}}}}}
2018-07-31T15:41:06.842Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5290,"time":{"ms":25}},"total":{"ticks":23180,"time":{"ms":96},"value":23180},"user":{"ticks":17890,"time":{"ms":71}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6300015}},"memstats":{"gc_next":7996160,"memory_alloc":5683056,"memory_total":7579114104}},"filebeat":{"events":{"active":9,"added":170,"done":161},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":161,"batches":21,"total":161},"read":{"bytes":735},"write":{"bytes":22013}},"pipeline":{"clients":3,"events":{"active":9,"published":170,"total":170},"queue":{"acked":161}}},"registrar":{"states":{"current":3,"update":161},"writes":{"success":21,"total":21}},"system":{"load":{"1":0.03,"15":0.05,"5":0.05,"norm":{"1":0.015,"15":0.025,"5":0.025}}}}}}
2018-07-31T15:41:36.842Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5320,"time":{"ms":31}},"total":{"ticks":23280,"time":{"ms":104},"value":23280},"user":{"ticks":17960,"time":{"ms":73}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6330014}},"memstats":{"gc_next":7995648,"memory_alloc":4088200,"memory_total":7613481224,"rss":-32768}},"filebeat":{"events":{"active":-9,"added":138,"done":147},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":147,"batches":22,"total":147},"read":{"bytes":770},"write":{"bytes":21299}},"pipeline":{"clients":3,"events":{"active":0,"published":138,"total":138},"queue":{"acked":147}}},"registrar":{"states":{"current":3,"update":147},"writes":{"success":22,"total":22}},"system":{"load":{"1":0.02,"15":0.05,"5":0.05,"norm":{"1":0.01,"15":0.025,"5":0.025}}}}}}
2018-07-31T15:42:06.842Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":5340,"time":{"ms":23}},"total":{"ticks":23390,"time":{"ms":104},"value":23390},"user":{"ticks":18050,"time":{"ms":81}}},"info":{"ephemeral_id":"7d3dab48-8ba4-4b49-b691-fcc616315213","uptime":{"ms":6360014}},"memstats":{"gc_next":8022560,"memory_alloc":4100696,"memory_total":7646426088}},"filebeat":{"events":{"active":3,"added":164,"done":161},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":161,"batches":21,"total":161},"read":{"bytes":735},"write":{"bytes":22228}},"pipeline":{"clients":3,"events":{"active":3,"published":164,"total":164},"queue":{"acked":161}}},"registrar":{"states":{"current":3,"update":161},"writes":{"success":21,"total":21}},"system":{"load":{"1":0.01,"15":0.05,"5":0.04,"norm":{"1":0.005,"15":0.025,"5":0.02}}}}}}

Do you have Logstash statistics from x-pack monitoring? https://www.elastic.co/guide/en/x-pack/5.2/monitoring-logstash.html

Those should let you see how many events are coming into, and out of, Logstash.
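
Even without x-pack monitoring set up, Logstash exposes similar counters through its monitoring API, which listens on port 9600 by default. Comparing the per-pipeline "in", "filtered", and "out" event counts over time shows whether events are arriving from Beats and whether they are making it out to Elasticsearch. A sketch, assuming the API is reachable and using a placeholder container name:

# Per-pipeline event counters from the Logstash node stats API
curl -s http://localhost:9600/_node/stats/pipelines?pretty

# Same call from the Docker host, assuming curl is available inside the container
docker exec <logstash-container> curl -s http://localhost:9600/_node/stats/pipelines?pretty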

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.