How to read json file using filebeat and send it to elasticsearch

You can follow the Filebeat getting started guide to get Filebeat shipping the logs to Elasticsearch. The only special thing you need to do is add the json configuration options to the prospector config so that Filebeat parses the JSON before sending it.

filebeat.prospectors:
- paths:
  - /var/log/mylog.json
  json.keys_under_root: true
  json.add_error_key: true
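To make these two options concrete, here is a rough sketch in Python of what they control (illustrative only, not Filebeat's actual code): with keys_under_root disabled, the decoded object is nested under a json key in the event; with it enabled, the decoded keys are merged into the top level of the event.

```python
import json

# Illustrative sketch of Filebeat's json.keys_under_root behavior (not Filebeat code).
def decode_line(line, keys_under_root=False):
    event = {"source": "/var/log/mylog.json"}  # example metadata field
    decoded = json.loads(line)
    if keys_under_root:
        event.update(decoded)   # decoded keys land at the top level of the event
    else:
        event["json"] = decoded  # decoded keys nested under a "json" key
    return event
```

With json.add_error_key enabled, Filebeat additionally attaches an error field to the event when a line fails to decode, which makes parsing problems visible in Kibana.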

I am able to send the JSON file to Elasticsearch and visualize it in Kibana, but I am not getting the contents from the JSON file.

After adding the lines below, I am not able to start the Filebeat service.

  - /var/log/mylog.json
    json.keys_under_root: true
    json.add_error_key: true

I want to parse the contents of the JSON file and visualize them in Kibana.

Contents of the JSON file:

{"application":{"id":"d6a19b39-5e40-4cbb-9857-561df066be5b","securityResourceId":"a06e9992-be1f-4355-bb37-69331acf19be","name":"hello Application","description":"","created":1490177768938,"enforceCompleteSnapshots":false,"active":true,"tags":[],"deleted":false,"user":"admin"},"applicationProcess":{"id":"776c883b-7281-43ed-93eb-9dce669675a6","name":"hello App Process","description":"","active":true,"inventoryManagementType":"AUTOMATIC","offlineAgentHandling":"PRE_EXECUTION_CHECK","versionCount":2,"version":2,"commit":54,"path":"applications\/d6a19b39-5e40-4cbb-9857-561df066be5b\/processes\/776c883b-7281-43ed-93eb-9dce669675a6","deleted":false,"metadataType":"applicationProcess"},"environment":{"id":"6741e9d2-8e9a-4f80-b067-16eb96121149","securityResourceId":"13b02668-ad35-4031-aaf5-1dc82a4de72d","name":"helloDeploy","description":"","color":"#00B2EF","requireApprovals":false,"noSelfApprovals":false,"lockSnapshots":false,"calendarId":"d92cabe9-f96b-4637-9d62-d15bb26ec6c8","active":true,"deleted":false,"cleanupDaysToKeep":0,"cleanupCountToKeep":0,"enableProcessHistoryCleanup":false,"useSystemDefaultDays":true,"historyCleanupDaysToKeep":365,"conditions":[]},"id":"0ba25d74-8b9d-4fe9-8d80-f11feeec7ddf","submittedTime":1490246254582,"traceId":"bab5a5bc-4c87-4b9a-9481-fdffb9478384","userName":"admin","onlyChanged":true,"description":"","startTime":1490246255525,"result":"FAULTED","state":"CLOSED","paused":false,"endTime":1490246267119,"duration":11594},
{"application":{"id":"d6a19b39-5e40-4cbb-9857-561df066be5b","securityResourceId":"a06e9992-be1f-4355-bb37-69331acf19be","name":"hello Application","description":"","created":1490177768938,"enforceCompleteSnapshots":false,"active":true,"tags":[],"deleted":false,"user":"admin"},"applicationProcess":{"id":"776c883b-7281-43ed-93eb-9dce669675a6","name":"hello App Process","description":"","active":true,"inventoryManagementType":"AUTOMATIC","offlineAgentHandling":"PRE_EXECUTION_CHECK","versionCount":2,"version":2,"commit":54,"path":"applications\/d6a19b39-5e40-4cbb-9857-561df066be5b\/processes\/776c883b-7281-43ed-93eb-9dce669675a6","deleted":false,"metadataType":"applicationProcess"},"environment":{"id":"6741e9d2-8e9a-4f80-b067-16eb96121149","securityResourceId":"13b02668-ad35-4031-aaf5-1dc82a4de72d","name":"helloDeploy","description":"","color":"#00B2EF","requireApprovals":false,"noSelfApprovals":false,"lockSnapshots":false,"calendarId":"d92cabe9-f96b-4637-9d62-d15bb26ec6c8","active":true,"deleted":false,"cleanupDaysToKeep":0,"cleanupCountToKeep":0,"enableProcessHistoryCleanup":false,"useSystemDefaultDays":true,"historyCleanupDaysToKeep":365,"conditions":[]},"id":"97aa4b10-9422-42f7-849a-853c86e8a0d8","submittedTime":1490247166486,"traceId":"960cc7c8-fa7b-4077-b173-d3e97e632abe","userName":"admin","onlyChanged":true,"description":"","startTime":1490247167163,"result":"FAULTED","state":"CLOSED","paused":false,"endTime":1490247178590,"duration":11427},
{"application":{"id":"d6a19b39-5e40-4cbb-9857-561df066be5b","securityResourceId":"a06e9992-be1f-4355-bb37-69331acf19be","name":"hello Application","description":"","created":1490177768938,"enforceCompleteSnapshots":false,"active":true,"tags":[],"deleted":false,"user":"admin"},"applicationProcess":{"id":"776c883b-7281-43ed-93eb-9dce669675a6","name":"hello App Process","description":"","active":true,"inventoryManagementType":"AUTOMATIC","offlineAgentHandling":"PRE_EXECUTION_CHECK","versionCount":2,"version":2,"commit":54,"path":"applications\/d6a19b39-5e40-4cbb-9857-561df066be5b\/processes\/776c883b-7281-43ed-93eb-9dce669675a6","deleted":false,"metadataType":"applicationProcess"},"environment":{"id":"6741e9d2-8e9a-4f80-b067-16eb96121149","securityResourceId":"13b02668-ad35-4031-aaf5-1dc82a4de72d","name":"helloDeploy","description":"","color":"#00B2EF","requireApprovals":false,"noSelfApprovals":false,"lockSnapshots":false,"calendarId":"d92cabe9-f96b-4637-9d62-d15bb26ec6c8","active":true,"deleted":false,"cleanupDaysToKeep":0,"cleanupCountToKeep":0,"enableProcessHistoryCleanup":false,"useSystemDefaultDays":true,"historyCleanupDaysToKeep":365,"conditions":[]},"id":"b49ac86b-531a-48db-8cb5-2744a0a72126","submittedTime":1490247308405,"traceId":"6a9cb4b0-b385-4a14-b5f4-33778650bd3c","userName":"admin","onlyChanged":true,"description":"","startTime":1490247309105,"result":"SUCCEEDED","state":"CLOSED","paused":false,"endTime":1490247319644,"duration":10539},
{"application":{"id":"d6a19b39-5e40-4cbb-9857-561df066be5b","securityResourceId":"a06e9992-be1f-4355-bb37-69331acf19be","name":"hello Application","description":"","created":1490177768938,"enforceCompleteSnapshots":false,"active":true,"tags":[],"deleted":false,"user":"admin"},"applicationProcess":{"id":"776c883b-7281-43ed-93eb-9dce669675a6","name":"hello App Process","description":"","active":true,"inventoryManagementType":"AUTOMATIC","offlineAgentHandling":"PRE_EXECUTION_CHECK","versionCount":2,"version":2,"commit":54,"path":"applications\/d6a19b39-5e40-4cbb-9857-561df066be5b\/processes\/776c883b-7281-43ed-93eb-9dce669675a6","deleted":false,"metadataType":"applicationProcess"},"environment":{"id":"6741e9d2-8e9a-4f80-b067-16eb96121149","securityResourceId":"13b02668-ad35-4031-aaf5-1dc82a4de72d","name":"helloDeploy","description":"","color":"#00B2EF","requireApprovals":false,"noSelfApprovals":false,"lockSnapshots":false,"calendarId":"d92cabe9-f96b-4637-9d62-d15bb26ec6c8","active":true,"deleted":false,"cleanupDaysToKeep":0,"cleanupCountToKeep":0,"enableProcessHistoryCleanup":false,"useSystemDefaultDays":true,"historyCleanupDaysToKeep":365,"conditions":[]},"id":"915d0f0b-b7ac-4932-b541-2c07b1467997","submittedTime":1490247735457,"traceId":"fbe35578-5185-4370-b975-71e9f29ff45d","userName":"admin","onlyChanged":true,"description":"","startTime":1490247735871,"result":"SUCCEEDED","state":"CLOSED","paused":false,"endTime":1490247736008,"duration":137},
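For Filebeat's json decoding to work, each entry above must be a complete, standalone JSON object on a single line. A quick way to sanity-check a file before shipping it is a small helper like this (illustrative Python; pass whatever path your log file actually uses):

```python
import json

def validate_ndjson(path):
    """Return (line_number, error) pairs for lines that are not standalone JSON objects."""
    errors = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # ignore blank lines
            try:
                json.loads(line)
            except ValueError as exc:
                errors.append((lineno, str(exc)))
    return errors
```

An empty result means every line parsed cleanly. Note that a trailing comma after an object (a line ending in "},") makes that line invalid JSON on its own.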

That sounds like a config issue. Please share your complete config file, along with the result of running sudo filebeat.sh -e -configtest.

Please find the result of the config test below:

2017/07/05 15:45:52.194188 beat.go:285: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017/07/05 15:45:52.194211 beat.go:186: INFO Setup Beat: filebeat; Version: 5.4.3
2017/07/05 15:45:52.194291 logstash.go:90: INFO Max Retries set to: 3
2017/07/05 15:45:52.194344 outputs.go:108: INFO Activated logstash as output plugin.
2017/07/05 15:45:52.194383 metrics.go:23: INFO Metrics logging every 30s
2017/07/05 15:45:52.194399 publish.go:295: INFO Publisher name: ip-192-168-1-61.ec2.internal
2017/07/05 15:45:52.194536 async.go:63: INFO Flush Interval set to: 1s
2017/07/05 15:45:52.194547 async.go:64: INFO Max Bulk Size set to: 1024
Config OK

Also, here is my configuration file:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.json
      #  - /var/log/messages
      #  - /var/log/*.log

      input_type: log

      document_type: syslog

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["192.168.1.61:5044"]
    bulk_max_size: 1024

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

Please suggest.

Is that the config where the service starts normally? That config does not include the JSON config options.

When the Filebeat service fails to start, what's in the log file?

Did you migrate from Filebeat 1.x? The config looks like it's from 1.x. Some of the options have been renamed; most importantly, tls was changed to ssl to be consistent with other Elastic projects. Try using the config below as your filebeat.yml and see if it works.

filebeat.prospectors:
- paths:
  - /var/log/*.json
  document_type: syslog
  json.keys_under_root: true
  json.add_error_key: true

output.logstash:
  hosts: ["192.168.1.61:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Hi Andrew,

If I edit the filebeat.yml file with the given configuration, it gives an error when I start the Filebeat service. Here is the status after starting the service:
● filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2017-07-05 23:36:43 EDT; 11s ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 22754 ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat (code=exited, status=1/FAILURE)
Main PID: 22754 (code=exited, status=1/FAILURE)

Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service failed.
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service holdoff time over, scheduling restart.
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: start request repeated too quickly for filebeat.service
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: Failed to start filebeat.
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 05 23:36:43 ip-192-168-1-61.ec2.internal systemd[1]:

The current version of Filebeat is:
[root@ip-192-168-1-61 bin]# ./filebeat -version
filebeat version 5.4.3 (amd64), libbeat 5.4.3

We can skip TLS/SSL for now because I have installed Filebeat on the same server where the ELK stack is set up.

The log file should be located in /var/log/filebeat/. Can you post what's in it?

Hi,

Please find the attached file.

Thanks & Regards,
Aditya Balhara

Last few lines of the Filebeat log:
2017-07-06T12:54:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:54:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:54:47-04:00 INFO File is inactive: /var/log/Deployment_summary.json. Closing because close_inactive of 5m0s reached.
2017-07-06T12:54:52-04:00 INFO Harvester started for file: /var/log/Deployment_summary.json
2017-07-06T12:55:01-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=1 filebeat.harvester.started=1 publish.events=2 registrar.states.update=2 registrar.writes=2
2017-07-06T12:55:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:56:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:56:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:57:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:57:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:58:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:58:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:59:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:59:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T12:59:57-04:00 INFO File is inactive: /var/log/Deployment_summary.json. Closing because close_inactive of 5m0s reached.
2017-07-06T13:00:01-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=1 filebeat.harvester.open_files=-1 filebeat.harvester.running=-1 publish.events=1 registrar.states.update=1 registrar.writes=1
2017-07-06T13:00:02-04:00 INFO Harvester started for file: /var/log/Deployment_summary.json
2017-07-06T13:00:31-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 publish.events=1 registrar.states.update=1 registrar.writes=1
2017-07-06T13:01:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:01:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:02:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:02:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:03:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:03:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:04:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:04:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:05:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:05:07-04:00 INFO File is inactive: /var/log/Deployment_summary.json. Closing because close_inactive of 5m0s reached.
2017-07-06T13:05:12-04:00 INFO Harvester started for file: /var/log/Deployment_summary.json
2017-07-06T13:05:31-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=1 filebeat.harvester.started=1 publish.events=2 registrar.states.update=2 registrar.writes=2
2017-07-06T13:06:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:06:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:07:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:07:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:08:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:08:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:09:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:09:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:10:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:10:18-04:00 INFO File is inactive: /var/log/Deployment_summary.json. Closing because close_inactive of 5m0s reached.
2017-07-06T13:10:23-04:00 INFO Harvester started for file: /var/log/Deployment_summary.json
2017-07-06T13:10:31-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=1 filebeat.harvester.started=1 publish.events=2 registrar.states.update=2 registrar.writes=2
2017-07-06T13:11:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:11:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:12:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:12:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:13:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:13:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:14:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:14:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:15:01-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:15:28-04:00 INFO File is inactive: /var/log/Deployment_summary.json. Closing because close_inactive of 5m0s reached.
2017-07-06T13:15:31-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.closed=1 filebeat.harvester.open_files=-1 filebeat.harvester.running=-1 publish.events=1 registrar.states.update=1 registrar.writes=1
2017-07-06T13:15:33-04:00 INFO Harvester started for file: /var/log/Deployment_summary.json
2017-07-06T13:16:01-04:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 publish.events=1 registrar.states.update=1 registrar.writes=1
2017-07-06T13:16:31-04:00 INFO No non-zero metrics in the last 30s
2017-07-06T13:16:44-04:00 INFO Stopping filebeat
2017-07-06T13:16:44-04:00 INFO Prospector channel stopped because beat is stopping.
2017-07-06T13:16:44-04:00 INFO Stopping Crawler
2017-07-06T13:16:44-04:00 INFO Stopping 1 prospectors
2017-07-06T13:16:44-04:00 INFO Prospector ticker stopped
2017-07-06T13:16:44-04:00 INFO Stopping Prospector: 806989121561951862
2017-07-06T13:16:44-04:00 INFO Reader was closed: /var/log/Deployment_summary.json. Closing.
2017-07-06T13:16:44-04:00 INFO Crawler stopped
2017-07-06T13:16:44-04:00 INFO Stopping spooler
2017-07-06T13:16:44-04:00 INFO Stopping Registrar
2017-07-06T13:16:44-04:00 INFO Ending Registrar
2017-07-06T13:16:44-04:00 INFO Total non-zero values: filebeat.harvester.closed=142 filebeat.harvester.started=142 publish.events=287 registrar.states.current=11 registrar.states.update=287 registrar.writes=284
2017-07-06T13:16:44-04:00 INFO Uptime: 12h9m42.415732288s
2017-07-06T13:16:44-04:00 INFO filebeat stopped.

The log file indicates that Filebeat ran for 12 hours and stopped normally, which seems strange if the service is really not starting. I'm not sure what's going on. How about running through the commands below and sharing the output? Maybe one of us can spot an issue (you might need to use https://pastebin.com to share the output since it will be long).

uname -a
sudo filebeat.sh -e -d "*" -configtest
sudo cat /etc/filebeat/filebeat.yml
ps -ef | grep filebeat
sudo systemctl status filebeat.service
sudo systemctl stop filebeat.service
# Clear the registry so that it resends all log lines from the beginning.
sudo rm /var/lib/filebeat/registry
sudo filebeat.sh -e
# Let it run for about a minute. Then Ctrl+C.
sudo systemctl start filebeat.service
sudo systemctl status filebeat.service
tail -100 /var/log/filebeat/filebeat

Please find the configuration test results:
[root@ip-192-168-1-61 bin]# uname -a
Linux ip-192-168-1-61.ec2.internal 3.10.0-514.21.2.el7.x86_64 #1 SMP Sun May 28 17:08:21 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@ip-192-168-1-61 bin]# sudo filebeat.sh -e -d "*" -configtest
filebeat2017/07/09 10:26:04.378843 beat.go:339: CRIT Exiting: error loading config file: yaml: line 11: found a tab character that violate indentation
Exiting: error loading config file: yaml: line 11: found a tab character that violate indentation
[root@ip-192-168-1-61 bin]# sudo cat /etc/filebeat/filebeat.yml
filebeat:
prospectors:
-
paths:
- /var/log/*.json
# - /var/log/messages
# - /var/log/*.log

  input_type: log

  document_type: syslog
    json.keys_under_root: true
    json.add_error_key: true

registry_file: /var/lib/filebeat/registry

output:
logstash:
hosts: ["192.168.1.61:5044"]
bulk_max_size: 1024

tls:
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
files:
rotateeverybytes: 10485760 # = 10MB
[root@ip-192-168-1-61 bin]# ps -ef | grep filebeat
root 20029 18810 0 06:26 pts/0 00:00:00 grep --color=auto filebeat
[root@ip-192-168-1-61 bin]# sudo systemctl status filebeat.service
● filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sun 2017-07-09 06:25:47 EDT; 1min 2s ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 19867 ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat (code=exited, status=1/FAILURE)
Main PID: 19867 (code=exited, status=1/FAILURE)

Jul 09 06:25:46 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Jul 09 06:25:46 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 09 06:25:46 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service failed.
Jul 09 06:25:47 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service holdoff time over, scheduling restart.
Jul 09 06:25:47 ip-192-168-1-61.ec2.internal systemd[1]: start request repeated too quickly for filebeat.service
Jul 09 06:25:47 ip-192-168-1-61.ec2.internal systemd[1]: Failed to start filebeat.
Jul 09 06:25:47 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 09 06:25:47 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service failed.
[root@ip-192-168-1-61 bin]# sudo systemctl stop filebeat.service
[root@ip-192-168-1-61 bin]# sudo rm /var/lib/filebeat/registry
[root@ip-192-168-1-61 bin]# sudo filebeat.sh -e
filebeat2017/07/09 10:27:27.196040 beat.go:339: CRIT Exiting: error loading config file: yaml: line 11: found a tab character that violate indentation
Exiting: error loading config file: yaml: line 11: found a tab character that violate indentation
[root@ip-192-168-1-61 bin]# sudo systemctl start filebeat.service
[root@ip-192-168-1-61 bin]# sudo systemctl status filebeat.service
● filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Sun 2017-07-09 06:27:39 EDT; 4s ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 20220 ExecStart=/usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat (code=exited, status=1/FAILURE)
Main PID: 20220 (code=exited, status=1/FAILURE)

Jul 09 06:27:38 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Jul 09 06:27:38 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 09 06:27:38 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service failed.
Jul 09 06:27:39 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service holdoff time over, scheduling restart.
Jul 09 06:27:39 ip-192-168-1-61.ec2.internal systemd[1]: start request repeated too quickly for filebeat.service
Jul 09 06:27:39 ip-192-168-1-61.ec2.internal systemd[1]: Failed to start filebeat.
Jul 09 06:27:39 ip-192-168-1-61.ec2.internal systemd[1]: Unit filebeat.service entered failed state.
Jul 09 06:27:39 ip-192-168-1-61.ec2.internal systemd[1]: filebeat.service failed.
[root@ip-192-168-1-61 bin]# tail -100 /var/log/filebeat/filebeat
2017-07-09T06:22:58-04:00 INFO Setup Beat: filebeat; Version: 5.4.3
2017-07-09T06:22:58-04:00 INFO Max Retries set to: 3
2017-07-09T06:22:58-04:00 INFO Activated logstash as output plugin.
2017-07-09T06:22:58-04:00 INFO Publisher name: ip-192-168-1-61.ec2.internal
2017-07-09T06:22:58-04:00 INFO Flush Interval set to: 1s
2017-07-09T06:22:58-04:00 INFO Max Bulk Size set to: 1024
[root@ip-192-168-1-61 bin]#

Hi,

I have also put the results at the link below:
https://pastebin.com/mnQqpf03

Thanks in advance!

Hi Andrew,

Any update on this?

The logs you posted say

Exiting: error loading config file: yaml: line 11: found a tab character that violate indentation

You need to remove all tab characters from the config file. YAML does not allow tabs. http://yaml.org/faq.html
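A quick way to locate the stray tabs is a small helper like this (illustrative Python, nothing Filebeat-specific):

```python
def find_tabs(path):
    """Return (line_number, line) pairs for lines that contain a tab character."""
    with open(path) as f:
        return [(n, line.rstrip("\n"))
                for n, line in enumerate(f, start=1)
                if "\t" in line]
```

Running find_tabs("/etc/filebeat/filebeat.yml") should return an empty list once the file is clean; the error message above points at line 11, so that line is a good place to start.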

When I start the Filebeat service without the json tags below, it works fine. But if I include them in my filebeat.yml, the Filebeat service doesn't start.

json.keys_under_root: true
json.add_error_key: true

I am not using the tags now. I have tried, and the tags are not the issue.

The problem is tabs characters in your config file, not tags. See my previous post.

Sorry, that was a typo; I meant tab instead of tag.

The Filebeat service starts even when I am using tabs, as long as I do not include the lines below.
json.keys_under_root: true
json.add_error_key: true

Hi Andrew,

Any update on this?

I have also removed the tabs from the configuration file, but it is still not working. Please suggest asap.

If it is still not working, what are the errors now? Please re-run these commands and share the output.

sudo filebeat.sh -e -d "*" -configtest
sudo cat /etc/filebeat/filebeat.yml
sudo filebeat.sh -e
# Let it run for about a minute. Then Ctrl+C.
tail -100 /var/log/filebeat/filebeat