No logs appearing in Kibana

See title. There are no errors in any of the log files for filebeat/logstash/elasticsearch/kibana. My pipeline is starting up & running properly. I've used this pipeline before on a previous install so I can confirm it's not an issue with the pipeline. Where do I even begin to look if there are no errors in the logs?

I can provide more info if necessary.

Hey Jack,

I'd first check whether the data you expect has actually made it into Elasticsearch, by seeing what indices exist via the API: _cat/indices?v.

If you're not seeing any indices being created, you could enable debug logging on your Logstash Elasticsearch output (https://www.elastic.co/guide/en/logstash/current/logging.html), or even run a tcpdump from your Logstash server to monitor outbound traffic to Elasticsearch.
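If your Logstash is recent enough to have the monitoring API, you can also raise the log level for just the Elasticsearch output at runtime instead of editing log4j2.properties. A sketch, assuming the API is listening on its default localhost:9600:

```shell
# Bump the Elasticsearch output plugin to DEBUG at runtime
# (assumes the Logstash monitoring API on its default localhost:9600)
curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
  "logger.logstash.outputs.elasticsearch" : "DEBUG"
}
'
```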

You could do similar on your inputs to ensure Logstash is actually receiving any events. I'm not familiar enough with Filebeat to know what it logs by default, but perhaps there's some logging you can turn up there to confirm it's reading from files correctly.
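As a sketch, something like the below in filebeat.yml should turn the logging up — the selector names here are examples on my part, so check the Filebeat logging docs for your version:

```yaml
# filebeat.yml -- increase logging verbosity (a sketch; selectors are examples)
logging.level: debug
logging.selectors: ["harvester", "publish"]   # or ["*"] for everything
logging.to_files: true
logging.files:
  path: /var/log/filebeat
```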

This is the output from _cat/indices?v

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-kibana-6-2018.04.05 TJe553NXQwqdTBvTEWZ9ew   1   0       5288            0      1.3mb          1.3mb
green  open   .kibana                         6UH8RnZmTIiMqdXq1Xfg0Q   1   0        142            0    137.2kb        137.2kb
green  open   .monitoring-es-6-2018.04.05     ZPrIJG9aQgiahb7v8_Zb_w   1   0     174669          416     92.8mb         92.8mb
green  open   .monitoring-es-6-2018.03.29     6w4U_0pTTRuKLjDfaMJt1Q   1   0     115694          243     54.9mb         54.9mb
green  open   .monitoring-alerts-6            y0WB5uz9TXClOuk75SNU6w   1   0         14            4     46.1kb         46.1kb
green  open   .monitoring-es-6-2018.03.30     kpaRDaFcSuilCHJ3Zn74Vg   1   0     233352          392    122.9mb        122.9mb
green  open   .watches                        ppV_k_anRNOamYHmt8DXRw   1   0          6            0    109.7kb        109.7kb
       close  .watcher-history-7-2018.03.24   GgiJ3qwCS0i0wn6tUyVC6g                                                          
       close  .watcher-history-7-2018.03.23   FFtSVJHbSoKqdd6eCagMtg                                                          
green  open   .watcher-history-7-2018.04.03   t8gwacmmQ8S3-jx-V8R4HQ   1   0      11435            0     12.7mb         12.7mb
green  open   .monitoring-kibana-6-2018.04.03 PbxUImZJTHWzVDcCY0fhbw   1   0       8463            0      2.2mb          2.2mb
green  open   .monitoring-es-6-2018.04.01     kQUL0OArQeCiBwbd89yzeg   1   0     250672          270    136.9mb        136.9mb
green  open   .monitoring-kibana-6-2018.03.31 IhR02HifQ2KgP_3SKFGCcw   1   0       8639            0      1.9mb          1.9mb
green  open   .monitoring-es-6-2018.04.03     31_9Yi6wRC2wHeeVWVlrwQ   1   0     267040          480    140.6mb        140.6mb
green  open   .monitoring-kibana-6-2018.04.01 qZ44PmBGTUirr7QRAU4sxg   1   0       8638            0      1.9mb          1.9mb
green  open   .monitoring-kibana-6-2018.04.02 zd1FI-x7S8CCeHlUk3scLQ   1   0       8638            0      1.9mb          1.9mb
       close  .watcher-history-7-2018.03.22   X2EoXeHgRdSJWB_i56bCJA                                                          
green  open   .watcher-history-7-2018.03.30   MVGe0ItPSWepMQxO-Q4urg   1   0      11504            0     12.5mb         12.5mb
green  open   .watcher-history-7-2018.04.01   Ru3M06XdQB2VApP2UUdMAA   1   0      11505            0     12.3mb         12.3mb
green  open   .monitoring-kibana-6-2018.03.29 Rzwb0e-kQHmHcVcxQQwugw   1   0       4002            0      1.1mb          1.1mb
yellow open   filebeat-6.2.3-2018.03.26       nVBIvoCYTVS8E-BoQzdBsw   3   1     532672            0     57.3mb         57.3mb
       close  .watcher-history-7-2018.03.26   RClYfOgAStuzIA-asYwUOQ                                                          
green  open   .triggered_watches              vXW6fOeGTdmsqKXPiiVKoQ   1   0          0            0      1.2mb          1.2mb
green  open   .watcher-history-7-2018.04.02   oTmSM2EsRoGM_45uDGEw8w   1   0      11503            0     12.3mb         12.3mb
green  open   .security-6                     jodfynroTHKsb8RHYTIlGg   1   0          3            0      9.9kb          9.9kb
green  open   .watcher-history-7-2018.03.31   NjDo8k2STfCsgq-uv-6erA   1   0      11513            0     12.5mb         12.5mb
green  open   .monitoring-es-6-2018.04.04     vdRHquHSR4SP3jcr_qqiSQ   1   0     276636          334    141.9mb        141.9mb
green  open   .monitoring-es-6-2018.03.31     JTVwbyt9TDKZl_JBl7tPVQ   1   0     241977          319    130.5mb        130.5mb
       close  .watcher-history-7-2018.03.25   K0YNbrhlQSCLWYGVRXkVog                                                          
       close  .watcher-history-7-2018.03.29   aZRUUUu2QaOc2Y7lEuqUCA                                                          
       close  .watcher-history-7-2018.03.28   45UodV-gQw2eLd8HzHtjug                                                          
       close  .watcher-history-7-2018.03.27   4Xx1vmL-Sii9Pz-zzA-BPw                                                          
green  open   .monitoring-es-6-2018.04.02     JxLl7p8JQGSIl--Hb4ZMvA   1   0     259287          310    143.7mb        143.7mb
green  open   .watcher-history-7-2018.04.04   dockgC65Qi-JnP6Mb7jlXQ   1   0      11490            0     12.6mb         12.6mb
green  open   .monitoring-kibana-6-2018.03.30 yeiGuMpESqaIwUJKe0horA   1   0       8638            0      1.9mb          1.9mb
green  open   .watcher-history-7-2018.04.05   7Gaoo7nQQyuSAYumeJMaug   1   0       7036            0      7.9mb          7.9mb
green  open   .monitoring-kibana-6-2018.04.04 7EyxXYF9RGq0uHIGfv-bBg   1   0       8638            0      2.1mb          2.1mb

I don't know what any of that means.

Cool, those are all your indices (logical namespaces of data). The indices starting with a period are system indices, but it looks like you have some regular indices too, e.g. filebeat-6.2.3-2018.03.26

I'm guessing filebeat-6.2.3-2018.03.26 is the index your data is being sent to, but we can check that. In Kibana, if you go to the Management pane and then Index Patterns, what do you see?

It might be that you're querying one index pattern in Kibana while your data is in another index; or you possibly don't have any index patterns configured at all (an index pattern tells Kibana which indices to search).

There are no index patterns in Kibana other than "filebeat-*" - filebeat is the only place my data should be coming from, but there is nothing under that index. Why is the date on the filebeat index 2018.03.26? This is a fresh install that I implemented on 2018.04.03, so I would expect that to be the date.

Thanks for being the first person to actually attempt to help me.

Ahh, I didn't pay attention to the date on the index. You're right, you should have an index with today's date (assuming your Logstash is configured that way).
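For what it's worth, the date in a Logstash-written index name normally comes from each event's @timestamp rather than the day anything was installed, so replaying an old log file can create an index dated in the past. A typical Elasticsearch output block looks something like this (the hosts value and index pattern here are illustrative, not taken from your config):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{+YYYY.MM.dd} is rendered from the event's @timestamp,
    # so old log lines can land in old-dated indices
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```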

Would you mind sharing your Logstash configuration, and perhaps also running a tcpdump to see if any data is being sent to Elasticsearch from Logstash (e.g. tcpdump -i any dst port 9200)? This should help show whether the data is getting that far or not.

On the Filebeat side, perhaps you could enable additional logging on the daemon (https://www.elastic.co/guide/en/beats/filebeat/current/enable-filebeat-debugging.html) to see if it is correctly harvesting the logs.

Cheers,
Mike.

So the filebeat index that exists in Elasticsearch - I'm assuming that's a default index that comes packaged with the software? The date on that index is a week or two before I installed anything on this machine.

Logstash.yml below:

https://pastebin.com/6gnCtbSs

Logstash.conf below:

https://pastebin.com/7cpg2BQ5

Trying tcpdump gives me a command not found error.

What system are your Logstash nodes running on? You can install tcpdump via something like yum install -y tcpdump or apt-get install tcpdump

You could also check that your Logstash configuration is valid by running the below (although your Logstash binary might live elsewhere): /usr/share/logstash/bin/logstash --config.test_and_exit -f <the config file/folder>

Another option is to enable debug logging on your Logstash beats input, which should spit some logs into /var/log/logstash:

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
  "logger.logstash.inputs.beats" : "DEBUG"
}
'

You mean my OS? RedHat 5.9. The whole elastic stack is installed on this machine.

 $ /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash
    OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
    WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
    Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    [INFO ] 2018-04-05 17:08:06.034 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
    [INFO ] 2018-04-05 17:08:06.086 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
    [WARN ] 2018-04-05 17:08:06.744 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
    [FATAL] 2018-04-05 17:08:06.881 [LogStash::Runner] runner - The given configuration is invalid. Reason: Expected one of #, input, filter, output at line 6, column 1 (byte 132) after ## JVM configuration

    # Xms represents the initial size of total heap space
    # Xmx represents the maximum size of total heap space


    [ERROR] 2018-04-05 17:08:06.884 [LogStash::Runner] Logstash - java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit

If I specify path.settings I get this output instead:

 /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

I'm not sure how to set /etc/logstash as the default location. From what I understand it should be the default from the time I install; I've also tried setting it by editing the startup.options file and running the system-install script.

Just ran the tcpdump; this is some of the output (it all looks the same to me):

16:17:46.687340 IP elastic01.e4bh.internal.48727 > elastic01.e4bh.internal.wap-wsp: Flags [S], seq 4077433096, win 43690, options [mss 65495,sackOK,TS val 782129677 ecr 0,nop,wscale 7], length 0
16:17:46.687361 IP elastic01.e4bh.internal.48727 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 4288140784, win 342, options [nop,nop,TS val 782129677 ecr 782129677], length 0
16:17:46.687861 IP elastic01.e4bh.internal.48727 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 0:136, ack 1, win 342, options [nop,nop,TS val 782129677 ecr 782129677], length 136
16:17:46.688408 IP elastic01.e4bh.internal.48727 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 1013, win 358, options [nop,nop,TS val 782129678 ecr 782129678], length 0
16:17:46.890205 IP elastic01.e4bh.internal.46567 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 14770, win 1024, options [nop,nop,TS val 782129880 ecr 782128879], length 0
16:17:46.892187 IP elastic01.e4bh.internal.46559 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 9097, win 1024, options [nop,nop,TS val 782129882 ecr 782128880], length 0
16:17:47.567487 IP elastic01.e4bh.internal.48728 > elastic01.e4bh.internal.wap-wsp: Flags [S], seq 905487167, win 43690, options [mss 65495,sackOK,TS val 782130557 ecr 0,nop,wscale 7], length 0
16:17:47.567508 IP elastic01.e4bh.internal.48728 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 2283896919, win 342, options [nop,nop,TS val 782130557 ecr 782130557], length 0
16:17:47.568460 IP elastic01.e4bh.internal.48728 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 0:136, ack 1, win 342, options [nop,nop,TS val 782130558 ecr 782130557], length 136
16:17:47.568998 IP elastic01.e4bh.internal.48728 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 1013, win 358, options [nop,nop,TS val 782130558 ecr 782130558], length 0
16:17:47.620812 IP elastic01.e4bh.internal.46566 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 5850:5981, ack 8783, win 1024, options [nop,nop,TS val 782130610 ecr 782129108], length 131
16:17:47.621029 IP elastic01.e4bh.internal.46566 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 8870, win 1024, options [nop,nop,TS val 782130610 ecr 782130610], length 0
16:17:47.621733 IP elastic01.e4bh.internal.46770 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 5587:5795, ack 8530, win 1024, options [nop,nop,TS val 782130611 ecr 782129108], length 208
16:17:47.622468 IP elastic01.e4bh.internal.46770 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 8742, win 1023, options [nop,nop,TS val 782130612 ecr 782130612], length 0
16:17:47.623082 IP elastic01.e4bh.internal.46409 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 5649:5827, ack 16416, win 1024, options [nop,nop,TS val 782130612 ecr 782129110], length 178
16:17:47.623681 IP elastic01.e4bh.internal.46409 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 16503, win 1024, options [nop,nop,TS val 782130613 ecr 782130613], length 0
16:17:47.624746 IP elastic01.e4bh.internal.47576 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 6757:6904, ack 19605, win 1024, options [nop,nop,TS val 782130614 ecr 782129112], length 147
16:17:47.625380 IP elastic01.e4bh.internal.47576 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 22349, win 1010, options [nop,nop,TS val 782130615 ecr 782130615], length 0
16:17:47.690191 IP elastic01.e4bh.internal.48727 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 1013, win 358, options [nop,nop,TS val 782130680 ecr 782129678], length 0
16:17:48.570196 IP elastic01.e4bh.internal.48728 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 1013, win 358, options [nop,nop,TS val 782131560 ecr 782130558], length 0
16:17:48.622200 IP elastic01.e4bh.internal.46566 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 8870, win 1024, options [nop,nop,TS val 782131612 ecr 782130610], length 0
16:17:48.624189 IP elastic01.e4bh.internal.46770 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 8742, win 1024, options [nop,nop,TS val 782131614 ecr 782130612], length 0
16:17:48.626188 IP elastic01.e4bh.internal.46409 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 16503, win 1024, options [nop,nop,TS val 782131616 ecr 782130613], length 0
16:17:48.626190 IP elastic01.e4bh.internal.47576 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 22349, win 1024, options [nop,nop,TS val 782131616 ecr 782130615], length 0
16:17:48.894592 IP elastic01.e4bh.internal.46567 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 7893:8470, ack 14770, win 1024, options [nop,nop,TS val 782131884 ecr 782129880], length 577
16:17:48.895271 IP elastic01.e4bh.internal.46567 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 14990, win 1023, options [nop,nop,TS val 782131885 ecr 782131885], length 0
16:17:48.895546 IP elastic01.e4bh.internal.46559 > elastic01.e4bh.internal.wap-wsp: Flags [P.], seq 7571:8158, ack 9097, win 1024, options [nop,nop,TS val 782131885 ecr 782129882], length 587
16:17:48.895828 IP elastic01.e4bh.internal.46559 > elastic01.e4bh.internal.wap-wsp: Flags [.], ack 9317, win 1023, options [nop,nop,TS val 782131885 ecr 782131885], length 0

So from the looks of things logstash isn't doing anything then?

If you run the tcpdump with the -A flag you should get more human-readable output... although it still may not be all that readable; I was more just intrigued by the volume (i.e. how many packets it captures in a certain window).

I believe /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash wasn't working for you because you're pointing it at your Logstash settings (JVM bits, log paths, etc.) rather than your Logstash config (your pipeline definitions). If you're able to share the configs you have, I can take a look.

Thinking about it though, it might be easier to hit the Logstash API to retrieve some info about your pipeline(s). For example, if you curl http://localhost:9600/_node/stats/pipelines (or whatever your host address is), you should get output that shows all your inputs/filters/outputs, with a count of events in/out.

The output of the curl command is as follows:

{
  "host" : "elastic01.e4bh.internal",
  "version" : "6.2.3",
  "http_address" : "127.0.0.1:9600",
  "id" : "768552a1-e20d-4747-9c16-ac74aae91449",
  "name" : "elastic01.e4bh.internal",
  "pipelines" : {
    "main2" : {
      "events" : { "duration_in_millis" : 943, "in" : 2, "out" : 2, "filtered" : 2, "queue_push_duration_in_millis" : 0 },
      "plugins" : {
        "inputs" : [ {
          "id" : "9ed109f0f33fe88bac8f290247f3ff134ea1a74b526111a2df1151ecafb52cf4",
          "events" : { "out" : 2, "queue_push_duration_in_millis" : 0 },
          "current_connections" : 0,
          "name" : "beats",
          "peak_connections" : 1
        } ],
        "filters" : [ ],
        "outputs" : [ {
          "id" : "1155115a9fd19fa23d2ee3f2edb6ced7f3315bae290a409639ab843d5720dbe0",
          "events" : { "duration_in_millis" : 938, "in" : 2, "out" : 2 },
          "name" : "elasticsearch"
        } ]
      },
      "reloads" : { "last_error" : null, "successes" : 0, "last_success_timestamp" : null, "last_failure_timestamp" : null, "failures" : 0 },
      "queue" : { "type" : "memory" }
    }
  }
}

Looks like things are moving in/out. Which config files did you want me to send over? My logstash.conf and logstash.yml are on pastebin, the links are in an earlier reply. If you want to take a look at any others just let me know which ones you need.

Only two events have been received for the duration of that Logstash process; I'm assuming that's fewer than you'd expect?

I'm only trying to get a single log to import at the minute, as a test. I've appended a few new lines to it once to try to get it to update, so that would explain there being only two events.

One thing I saw was to check filebeat is running by grepping /var/log/syslog - I'm on Red Hat so I don't have a syslog file, but grepping for filebeat in /var/log/messages or /var/log/secure doesn't return anything. No entries for logstash either in /var/log/messages, but there are logstash entries in /var/log/secure. So by the looks of things filebeat isn't running, even though service filebeat status returns the message filebeat-god (pid 29606) is running...

I'm not sure if that info helps at all?

I'd expect to see filebeat logs in somewhere like /var/log/filebeat/filebeat. You should also see lines for events being read/sent, like the below:

2018-04-06T16:09:50Z INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=6 libbeat.logstash.publish.read_bytes=36 libbeat.logstash.publish.write_bytes=17275 libbeat.logstash.published_and_acked_events=89 libbeat.publisher.published_events=89 publish.events=89 registrar.states.update=89 registrar.writes=6

Do you have any logs?

Yeah, here's an excerpt from Filebeat's log:

2018-04-09T08:41:19.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3121},"total":{"ticks":16210,"time":16218,"value":16210},"user":{"ticks":13090,"time":13097}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228540008}},"memstats":{"gc_next":4194304,"memory_alloc":1496128,"memory_total":1576235104}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0.01,"15":0.06,"5":0.05,"norm":{"1":0.01,"15":0.06,"5":0.05}}}}}}
2018-04-09T08:41:49.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3122},"total":{"ticks":16210,"time":16220,"value":16210},"user":{"ticks":13090,"time":13098}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228570008}},"memstats":{"gc_next":4194304,"memory_alloc":1701816,"memory_total":1576440792}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0.01,"15":0.05,"5":0.04,"norm":{"1":0.01,"15":0.05,"5":0.04}}}}}}
2018-04-09T08:42:19.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3122},"total":{"ticks":16220,"time":16222,"value":16220},"user":{"ticks":13100,"time":13100}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228600008}},"memstats":{"gc_next":4194304,"memory_alloc":1900928,"memory_total":1576639904}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0.01,"15":0.05,"5":0.04,"norm":{"1":0.01,"15":0.05,"5":0.04}}}}}}
2018-04-09T08:42:49.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3122},"total":{"ticks":16220,"time":16225,"value":16220},"user":{"ticks":13100,"time":13103}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228630008}},"memstats":{"gc_next":4194304,"memory_alloc":1284568,"memory_total":1576848448}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0,"15":0.05,"5":0.04,"norm":{"1":0,"15":0.05,"5":0.04}}}}}}
2018-04-09T08:43:19.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3122},"total":{"ticks":16220,"time":16226,"value":16220},"user":{"ticks":13100,"time":13104}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228660007}},"memstats":{"gc_next":4194304,"memory_alloc":1488112,"memory_total":1577051992}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0.21,"15":0.06,"5":0.08,"norm":{"1":0.21,"15":0.06,"5":0.08}}}}}}
2018-04-09T08:43:49.350+0100    INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3120,"time":3122},"total":{"ticks":16220,"time":16228,"value":16220},"user":{"ticks":13100,"time":13106}},"info":{"ephemeral_id":"9e5033a3-0537-49a9-833c-9fcf8833812a","uptime":{"ms":228690008}},"memstats":{"gc_next":4194304,"memory_alloc":1696728,"memory_total":1577260608}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":11}},"system":{"load":{"1":0.13,"15":0.06,"5":0.07,"norm":{"1":0.13,"15":0.06,"5":0.07}}}}}}

So it looks like things are moving into logstash at least?

"harvester":{"open_files":0,"running":0}

That would suggest to me that Filebeat is not currently harvesting any files.

Perhaps you could share your Filebeat config, or run something like /usr/share/filebeat/bin/filebeat -v -e -d "config" to get an idea of Filebeat's loaded config at runtime.
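One thing worth checking while you're at it: in Filebeat 6.x the packaged example config ships with the log prospector disabled (enabled: false), which would give exactly this harvester count of zero. A minimal working config looks something like the below - the log path and Logstash host are placeholders, not taken from your setup:

```yaml
filebeat.prospectors:
- type: log
  enabled: true              # the packaged example config ships with this false
  paths:
    - /path/to/your/*.log    # placeholder -- point at the file you're testing with
output.logstash:
  hosts: ["localhost:5044"]  # assumes the default beats input port
```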

Pastebin link to my filebeat config below:

https://pastebin.com/xfXcKgdt

Also, running that command as suggested returns the error:

Exiting: error loading config file: stat filebeat.yml: no such file or directory

I assume "config" should be set to /etc/filebeat/filebeat.yml, but that gives me the same error.

"config" should be taken literally - it's a debug selector passed to -d that enables extra logging for the config component.

You might need to set -path.config to point at wherever your configuration lives, e.g. something like /usr/share/filebeat/bin/filebeat -e -d "config" -path.config /etc/filebeat.