Filebeat --> Logstash works but directly to Elasticsearch = nothing

Hi. Filebeat can grab logs via its prospectors and send them to Logstash and on to Elasticsearch with no problem, but when I change filebeat.yml to send directly to Elasticsearch I don't get any output.

Can anybody shed some light on what else I can look at to get this working? I hate to send logs through Logstash for nothing when they can just go directly to Elasticsearch.

Thanks.

This is how I have my Filebeat output set up:

> #-------------------------- Elasticsearch output ------------------------------
> output.elasticsearch:
>   # Array of hosts to connect to.
>   hosts: ["kib01:9200","kib02:9200"]
> 
>   # Optional protocol and basic auth credentials.
>   #protocol: "https"
>   #username: "elastic"
>   #password: "changeme"

And this is what the logs show (I have enabled some debugging to assist). From /var/log/filebeat/filebeat:

2018-03-11T10:37:25.579+1100    INFO    instance/beat.go:468    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-11T10:37:25.580+1100    DEBUG   [beat]  instance/beat.go:495    Beat metadata path: /var/lib/filebeat/meta.json
2018-03-11T10:37:25.580+1100    INFO    instance/beat.go:475    Beat UUID: b297f577-de30-410e-93dd-14ad8e15230e
2018-03-11T10:37:25.580+1100    INFO    instance/beat.go:213    Setup Beat: filebeat; Version: 6.2.2
2018-03-11T10:37:25.580+1100    DEBUG   [beat]  instance/beat.go:230    Initializing output plugins
2018-03-11T10:37:25.580+1100    DEBUG   [processors]    processors/processor.go:49      Processors: 
2018-03-11T10:37:25.580+1100    INFO    elasticsearch/client.go:145     Elasticsearch url: http://kib01:9200
2018-03-11T10:37:25.580+1100    INFO    elasticsearch/client.go:145     Elasticsearch url: http://kib02:9200
2018-03-11T10:37:25.580+1100    INFO    pipeline/module.go:76   Beat name: els01
2018-03-11T10:37:25.580+1100    INFO    instance/beat.go:301    filebeat start running.
2018-03-11T10:37:25.580+1100    INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-03-11T10:37:25.580+1100    DEBUG   [registrar]     registrar/registrar.go:88       Registry file set to: /var/lib/filebeat/registry
2018-03-11T10:37:25.580+1100    INFO    registrar/registrar.go:108      Loading registrar data from /var/lib/filebeat/registry
2018-03-11T10:37:25.580+1100    INFO    registrar/registrar.go:119      States Loaded from registrar: 31
2018-03-11T10:37:25.580+1100    INFO    crawler/crawler.go:48   Loading Prospectors: 1
2018-03-11T10:37:25.581+1100    DEBUG   [processors]    processors/processor.go:49      Processors: 
2018-03-11T10:37:25.581+1100    DEBUG   [prospector]    log/config.go:178       recursive glob enabled
2018-03-11T10:37:25.581+1100    DEBUG   [prospector]    log/prospector.go:120   exclude_files: []. Number of stats: 31
2018-03-11T10:37:25.581+1100    DEBUG   [prospector]    file/state.go:82        New state added for /var/log/vmware-install.log
2018-03-11T10:37:25.581+1100    DEBUG   [registrar]     registrar/registrar.go:150      Starting Registrar
2018-03-11T10:37:25.581+1100    DEBUG   [registrar]     registrar/registrar.go:200      Processing 1 events
2018-03-11T10:37:25.581+1100    DEBUG   [registrar]     registrar/registrar.go:193      Registrar states cleaned up. Before: 31, After: 31
2018-03-11T10:37:25.581+1100    DEBUG   [registrar]     registrar/registrar.go:228      Write registry file: /var/lib/filebeat/registry
2018-03-11T10:37:25.581+1100    DEBUG   [prospector]    file/state.go:82        New state added for /var/log/fontconfig.log

Hi @jamesl,

There are a few things you can check:

  • I don't see any errors in the log output; do you get any errors after it runs for a while?
  • Did you check the Elasticsearch logs?
  • Are new lines being written to the log files? Filebeat won't send anything if it doesn't see new lines.
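
Also, assuming you are on Filebeat 6.x (your logs show 6.2.2), the built-in test subcommands are a quick way to rule out config and connectivity problems. The config path below is just the default package location, so adjust it if yours differs:

> # Check that filebeat.yml parses cleanly
> filebeat test config -c /etc/filebeat/filebeat.yml
>
> # Check that Filebeat can reach the configured Elasticsearch output
> filebeat test output -c /etc/filebeat/filebeat.yml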

Best regards

Hi Exekias.
I have limited Filebeat to only look at /var/log/auth.log and then restarted it. Below is what Filebeat logs... I will enable some debugging on Elasticsearch; what level do you recommend?

2018-03-13T07:14:02.981+1100	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":16},"total":{"ticks":10,"time":24,"value":10},"user":{"ticks":0,"time":8}},"info":{"ephemeral_id":"cce21048-8df6-4611-b1e6-ba9810bc4f83","uptime":{"ms":30038}},"memstats":{"gc_next":4473924,"memory_alloc":3050040,"memory_total":3050040,"rss":21831680}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":74,"update":1},"writes":1},"system":{"cpu":{"cores":2},"load":{"1":0.01,"15":0.01,"5":0.04,"norm":{"1":0.005,"15":0.005,"5":0.02}}}}}}
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	prospector/prospector.go:124	Run prospector
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	log/prospector.go:147	Start next scan
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	log/prospector.go:361	Check file for harvesting: /var/log/auth.log
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	log/prospector.go:447	Update existing file for harvesting: /var/log/auth.log, offset: 44658
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	log/prospector.go:456	Resuming harvesting of file: /var/log/auth.log, offset: 44658, new size: 45001
2018-03-13T07:14:02.983+1100	DEBUG	[harvester]	log/harvester.go:442	Set previous offset for file: /var/log/auth.log. Offset: 44658 
2018-03-13T07:14:02.983+1100	DEBUG	[harvester]	log/harvester.go:433	Setting offset for file: /var/log/auth.log. Offset: 44658 
2018-03-13T07:14:02.983+1100	DEBUG	[harvester]	log/harvester.go:348	Update state: /var/log/auth.log, offset: 44658
2018-03-13T07:14:02.983+1100	DEBUG	[prospector]	log/prospector.go:168	Prospector states cleaned up. Before: 1, After: 1
2018-03-13T07:14:02.984+1100	INFO	log/harvester.go:216	Harvester started for file: /var/log/auth.log
2018-03-13T07:14:02.984+1100	DEBUG	[registrar]	registrar/registrar.go:200	Processing 1 events
2018-03-13T07:14:02.984+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-12T20:14:02.984Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.2"
  },
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.2"
  },
  "source": "/var/log/auth.log",
  "offset": 44769,
  "message": "Mar 13 07:14:00 els01 sshd[11449]: Received disconnect from 192.168.10.101 port 59316:11: disconnected by user"
}
2018-03-13T07:14:02.984+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-12T20:14:02.984Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.2"
  },
  "source": "/var/log/auth.log",
  "offset": 44848,
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.2"
  },
  "message": "Mar 13 07:14:00 els01 sshd[11449]: Disconnected from 192.168.10.101 port 59316"
}
2018-03-13T07:14:02.984+1100	DEBUG	[registrar]	registrar/registrar.go:193	Registrar states cleaned up. Before: 74, After: 74
2018-03-13T07:14:02.984+1100	DEBUG	[registrar]	registrar/registrar.go:228	Write registry file: /var/lib/filebeat/registry
2018-03-13T07:14:02.984+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-12T20:14:02.984Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.2"
  },
  "offset": 44936,
  "message": "Mar 13 07:14:00 els01 sshd[11449]: pam_unix(sshd:session): session closed for user root",
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.2"
  },
  "source": "/var/log/auth.log"
}
2018-03-13T07:14:02.984+1100	DEBUG	[publish]	pipeline/processor.go:275	Publish event: {
  "@timestamp": "2018-03-12T20:14:02.984Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.2.2"
  },
  "source": "/var/log/auth.log",
  "offset": 45001,
  "message": "Mar 13 07:14:00 els01 systemd-logind[1125]: Removed session 626.",
  "prospector": {
    "type": "log"
  },
  "beat": {
    "name": "els01",
    "hostname": "els01",
    "version": "6.2.2"
  }
}
2018-03-13T07:14:02.984+1100	DEBUG	[harvester]	log/log.go:85	End of file reached: /var/log/auth.log; Backoff now.
2018-03-13T07:14:02.985+1100	DEBUG	[registrar]	registrar/registrar.go:253	Registry file updated. 74 states written.
2018-03-13T07:14:03.984+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:666	ES Ping(url=http://kib02:9200)
2018-03-13T07:14:03.984+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:666	ES Ping(url=http://kib01:9200)
2018-03-13T07:14:03.984+1100	DEBUG	[harvester]	log/log.go:85	End of file reached: /var/log/auth.log; Backoff now.
2018-03-13T07:14:03.986+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:689	Ping status code: 200
2018-03-13T07:14:03.986+1100	INFO	elasticsearch/client.go:690	Connected to Elasticsearch version 6.2.2
2018-03-13T07:14:03.986+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:708	HEAD http://kib01:9200/_template/filebeat-6.2.2  <nil>
2018-03-13T07:14:03.987+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:689	Ping status code: 200
2018-03-13T07:14:03.987+1100	INFO	elasticsearch/client.go:690	Connected to Elasticsearch version 6.2.2
2018-03-13T07:14:03.988+1100	INFO	template/load.go:73	Template already exists and will not be overwritten.
2018-03-13T07:14:03.989+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:708	HEAD http://kib02:9200/_template/filebeat-6.2.2  <nil>
2018-03-13T07:14:03.990+1100	INFO	template/load.go:73	Template already exists and will not be overwritten.
2018-03-13T07:14:03.993+1100	DEBUG	[elasticsearch]	elasticsearch/client.go:303	PublishEvents: 4 events have been  published to elasticsearch in 4.040964ms.

From the log I would say it's working correctly. Perhaps you are checking the wrong index in Kibana? Filebeat stores logs in the filebeat-* index.
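
If you want to check directly against Elasticsearch rather than through Kibana, something like the following should show whether any documents are arriving at all (hostnames taken from your config, index pattern is the Filebeat default):

> # List any Filebeat indices and their document counts
> curl 'http://kib01:9200/_cat/indices/filebeat-*?v'
>
> # Or count matching documents directly
> curl 'http://kib01:9200/filebeat-*/_count?pretty'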

Yes, it sure looks like it, but I get nothing when I point Filebeat at Elasticsearch directly.

If I point Filebeat at Logstash, the filebeat* index is created as soon as interesting data is ingested.

🙁

I have been playing with this stack for almost two years now, and this is an 'unusual' one...
