NFS Mount for Solaris Server

I have loaded up an Ubuntu server and NFS-mounted a directory that holds the logs of our GlassFish application servers, which run on Solaris. I then installed Filebeat and pointed it at the location of the logs.

#=========================== Filebeat inputs =============================


# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /chill/local/gfuser3/glassfish3/glassfish/nodes/localhost-beta/portal/logs/*.log

I start Filebeat and everything seems to run fine. There are no errors, but I am not seeing any of these logs in Kibana.

I just see the same thing over and over again. What am I doing wrong?

metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":410,"time":{"ms":4}},"total":{"ticks":890,"time":{"ms":12},"value":890},"user":{"ticks":480,"time":{"ms":8}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":10},"info":{"ephemeral_id":"3620bef7-e42a-4b70-9f81-d15fb4a0bd26","uptime":{"ms":3240033}},"memstats":{"gc_next":9812128,"memory_alloc":5006416,"memory_total":47892056},"runtime":{"goroutines":24}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":3}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}

Hey @kawalec,

How are you running filebeat? Does it have permissions to read these log files?

I have the NFS mount set to read-only. I am not sure what you mean by how I am running Filebeat. I sudo and then run ./filebeat -e

Ok, if you are running as root, I guess this is not a problem with permissions. Could you run Filebeat with debug logging to see if it shows some more information? It would be `sudo filebeat run -e -E logging.level=debug`.

Networked filesystems are tricky: sometimes Filebeat is able to find the files, but it cannot detect when they are being written, or the files change their internal identifiers and Filebeat loses track of them.
Look at this issue for example.
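If the share itself is the problem, a common mitigation is to make the log input less dependent on file-change notifications and file identity, which behave unreliably over NFS. The option names below are real Filebeat log-input settings, but the values are illustrative, not a guaranteed fix:

```yaml
filebeat.inputs:
- type: log
  paths:
    - /chill/local/gfuser3/glassfish3/glassfish/nodes/localhost-beta/portal/logs/*.log
  # Poll the directory more often than the 10s default, since
  # change notifications may not propagate over NFS.
  scan_frequency: 5s
  # Keep harvesters open longer before deciding a file is idle;
  # size/mtime updates can lag on network mounts.
  close_inactive: 10m
  # Don't close or forget files that briefly "disappear" from
  # the mount.
  close_removed: false
  clean_removed: false
```

Tuning these trades resource usage (open handles, registry size) for robustness, so it is worth testing against your actual mount before relying on it.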

I did get results in Kibana, so that is good. Most of our logs live on Solaris or other servers that Filebeat does not support. I read that this was a possible solution, this or running rsync. We are in the infancy of using Elastic and have had no luck setting up Logstash with any of our servers yet. This was my attempt at a workaround. Anyway, here is a log entry from Kibana. It looks correct, right? And thanks for the help.

"_index": "filebeat-7.6.0-2020.05.07-000001",
"_type": "_doc",
"_id": "N97JA3IBWFzWa2l1zf3U",
"_version": 1,
"_score": null,
"_source": {
  "@timestamp": "2020-05-11T12:50:52.081Z",
  "message": "|#]",
  "input": {
    "type": "log"
  },
  "agent": {
    "type": "filebeat",
    "ephemeral_id": "843a5340-a02f-4867-bde4-dc911e55a7eb",
    "hostname": "operations",
    "id": "a2e9707f-9149-4bc4-bdf9-58cfc4120f19",
    "version": "7.6.0"
  },
  "ecs": {
    "version": "1.4.0"
  },
  "host": {
    "containerized": false,
    "hostname": "operations",
    "architecture": "x86_64",
    "name": "operations",
    "os": {
      "version": "20.04 LTS (Focal Fossa)",
      "family": "debian",
      "name": "Ubuntu",
      "kernel": "5.4.0-29-generic",
      "codename": "focal",
      "platform": "ubuntu"
    },
    "id": "42a852d7b1404d9aaf765593c9b634b3"
  },
  "log": {
    "offset": 824474,
    "file": {
      "path": "/chill/local/gfuser3/glassfish3/glassfish/nodes/localhost-beta/portal/logs/server.log"
    }
  }
},
"fields": {
  "suricata.eve.timestamp": [
  "@timestamp": [
"sort": [

Yes, this event looks fine. So the problem is that it collects some lines but not others? This is the typical kind of issue found when using remote filesystems.

If a network filesystem doesn't work in this case, could it be an option for you to use syslog? Filebeat can receive logs using the syslog input.
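A minimal sketch of the syslog input, assuming the Solaris hosts can be configured to forward their logs over the network (the port number here is an arbitrary choice, not a Filebeat default):

```yaml
filebeat.inputs:
- type: syslog
  # Listen for syslog messages forwarded by the Solaris hosts.
  protocol.udp:
    host: "0.0.0.0:9001"
```

On the Solaris side, the system's syslog daemon would then need a forwarding rule pointing at the Ubuntu host and this port; the details depend on which syslog implementation the servers run. Note this captures whatever the application sends to syslog, which may differ from what GlassFish writes to server.log.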

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.