Use Logstash or Filebeat for sending Azure JSON logs?

Hello, sorry for the silly question, but I have spent about a week trying to adjust my Logstash 5.2 config to ship Azure JSON logs to Elasticsearch 5.2, and all my attempts have failed. Logstash throws many different exceptions and errors saying the config is wrong and the JSON files cannot be parsed. I have the logstash-input-azureblob plugin installed, and it can fetch the files from the blob container, but for some unclear reason it cannot parse them and send them on to Elasticsearch.
Yesterday an article about Filebeat caught my eye, but I am not sure that Filebeat 5.2 can parse a batch of JSON log files and ship them to Elasticsearch/Kibana properly without any conversion, since unfortunately I only see the *.log extension mentioned in filebeat.yml.
So the question is: which product should I choose to make this work with minimal changes to the config files? I have no more time for this and not enough relevant knowledge to figure it out. Please advise.

Have a look at this blog post about JSON logs, https://www.elastic.co/blog/structured-logging-filebeat, and the Filebeat docs: https://www.elastic.co/guide/en/beats/filebeat/5.2/configuration-filebeat-options.html#config-json
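
For example, a JSON prospector in filebeat.yml could look roughly like this. This is only a minimal sketch: the path is a placeholder, and it assumes each line of your Azure files is one standalone JSON object.

filebeat.prospectors:
- input_type: log
  paths:
    # The pattern must match files, not a bare directory; *.json is an assumed name
    - C:\azurelogs\*.json
  # Decode each line as JSON and lift the keys to the top level of the event
  json.keys_under_root: true
  # Flag decoding failures by adding an error key to the event
  json.add_error_key: true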

2017-02-10T12:15:37Z INFO Home path: [C:\filebeat-5.2.0-windows-x86_64] Config path: [C:\filebeat-5.2.0-windows-x86_64] Data path: [C:\ProgramData\filebeat] Logs path: [C:\filebeat-5.2.0-windows-x86_64\logs]
2017-02-10T12:15:37Z INFO Setup Beat: filebeat; Version: 5.2.0
2017-02-10T12:15:37Z INFO Loading template enabled. Reading template file: C:\filebeat-5.2.0-windows-x86_64\filebeat.template.json
2017-02-10T12:15:37Z INFO Loading template enabled for Elasticsearch 2.x. Reading template file: C:\filebeat-5.2.0-windows-x86_64\filebeat.template-es2x.json
2017-02-10T12:15:37Z INFO Elasticsearch url: http://localhost:9200
2017-02-10T12:15:37Z INFO Activated elasticsearch as output plugin.
2017-02-10T12:15:37Z INFO Publisher name: data-vm0
2017-02-10T12:15:37Z INFO Flush Interval set to: 1s
2017-02-10T12:15:37Z INFO Max Bulk Size set to: 50
2017-02-10T12:15:37Z INFO filebeat start running.
2017-02-10T12:15:37Z INFO Registry file set to: C:\ProgramData\filebeat\registry
2017-02-10T12:15:37Z INFO Loading registrar data from C:\ProgramData\filebeat\registry
2017-02-10T12:15:37Z INFO States Loaded from registrar: 0
2017-02-10T12:15:37Z INFO Loading Prospectors: 1
2017-02-10T12:15:37Z INFO Prospector with previous states loaded: 0
2017-02-10T12:15:37Z INFO Loading Prospectors completed. Number of prospectors: 1
2017-02-10T12:15:37Z INFO All prospectors are initialised and running with 0 states to persist
2017-02-10T12:15:37Z INFO Starting Registrar
2017-02-10T12:15:37Z INFO Start sending events to output
2017-02-10T12:15:37Z INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-02-10T12:15:37Z INFO Starting prospector of type: log
2017-02-10T12:16:07Z INFO No non-zero metrics in the last 30s

I have adjusted the filebeat.yml and filebeat.full.yml files with settings for JSON files, but nothing is happening.
I still cannot see any data in Kibana. http://rgho.st/6W2FwRtMg

Please advise: why does Filebeat skip the directory with the JSON files inside? It should process them, not skip them.

PS C:\filebeat-5.2.0-windows-x86_64> .\filebeat.exe -c filebeat.yml -e -v -d "*"
2017/02/10 13:46:17.053776 beat.go:267: INFO Home path: [C:\filebeat-5.2.0-windows-x86_64] Config path: [C:\filebeat-5.2.0-windows-x86_64] Data path: [C:\filebeat-5.2.0-windows-x86_64\data] Logs path: [C:\filebeat-5.2.0-windows-x86_64\logs]
2017/02/10 13:46:17.053776 logp.go:219: INFO Metrics logging every 30s
2017/02/10 13:46:17.053776 beat.go:177: INFO Setup Beat: filebeat; Version: 5.2.0
2017/02/10 13:46:17.053776 processor.go:43: DBG Processors:
2017/02/10 13:46:17.053776 beat.go:183: DBG Initializing output plugins
2017/02/10 13:46:17.053776 output.go:167: INFO Loading template enabled. Reading template file: C:\filebeat-5.2.0-windows-x86_64\filebeat.template.json
2017/02/10 13:46:17.054776 output.go:178: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: C:\filebeat-5.2.0-windows-x86_64\filebeat.template-es2x.json
2017/02/10 13:46:17.054776 client.go:120: INFO Elasticsearch url: http://localhost:9200
2017/02/10 13:46:17.054776 outputs.go:106: INFO Activated elasticsearch as output plugin.
2017/02/10 13:46:17.054776 publish.go:234: DBG Create output worker
2017/02/10 13:46:17.054776 publish.go:276: DBG No output is defined to store the topology. The server fields might not be filled.
2017/02/10 13:46:17.055775 publish.go:291: INFO Publisher name: data-vm0
2017/02/10 13:46:17.057801 async.go:63: INFO Flush Interval set to: 1s
2017/02/10 13:46:17.057801 async.go:64: INFO Max Bulk Size set to: 50
2017/02/10 13:46:17.057801 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=50)
2017/02/10 13:46:17.057801 beat.go:207: INFO filebeat start running.
2017/02/10 13:46:17.057801 service_windows.go:51: DBG Windows is interactive: true
2017/02/10 13:46:17.057801 registrar.go:85: INFO Registry file set to: C:\filebeat-5.2.0-windows-x86_64\data\registry
2017/02/10 13:46:17.058777 registrar.go:106: INFO Loading registrar data from C:\filebeat-5.2.0-windows-x86_64\data\registry
2017/02/10 13:46:17.058777 registrar.go:123: INFO States Loaded from registrar: 0
2017/02/10 13:46:17.058777 registrar.go:236: INFO Starting Registrar
2017/02/10 13:46:17.058777 crawler.go:34: INFO Loading Prospectors: 1
2017/02/10 13:46:17.058777 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017/02/10 13:46:17.058777 sync.go:41: INFO Start sending events to output
2017/02/10 13:46:17.058777 prospector_log.go:41: DBG exclude_files: []
2017/02/10 13:46:17.058777 prospector_log.go:57: INFO Prospector with previous states loaded: 0
2017/02/10 13:46:17.058777 prospector.go:70: DBG File Configs: [G:\azurelogs\name=default\resourceId\y=2017\m=02\d=07\h=14\m=00]
2017/02/10 13:46:17.059776 crawler.go:48: INFO Loading Prospectors completed. Number of prospectors: 1
2017/02/10 13:46:17.059776 crawler.go:63: INFO All prospectors are initialised and running with 0 states to persist
2017/02/10 13:46:17.059776 crawler.go:58: DBG Starting prospector 0
2017/02/10 13:46:17.059776 prospector.go:112: INFO Starting prospector of type: log
2017/02/10 13:46:17.059776 prospector_log.go:62: DBG Start next scan
2017/02/10 13:46:17.059776 prospector_log.go:140: DBG Skipping directory: G:\azurelogs\name=default\resourceId\y=2017\m=02\d=07\h=14\m=00
2017/02/10 13:46:17.059776 prospector_log.go:83: DBG Prospector states cleaned up. Before: 0, After: 0
2017/02/10 13:46:20.183487 service.go:32: DBG Received sigterm/sigint, stopping
2017/02/10 13:46:20.183487 filebeat.go:168: INFO Stopping filebeat
2017/02/10 13:46:20.183487 crawler.go:69: INFO Stopping Crawler
2017/02/10 13:46:20.183487 crawler.go:75: INFO Stopping 1 prospectors
2017/02/10 13:46:20.183487 prospector.go:187: INFO Stopping Prospector
2017/02/10 13:46:20.183487 prospector.go:129: INFO Prospector channel stopped
2017/02/10 13:46:20.183487 service.go:38: DBG Received svc stop/shutdown request
2017/02/10 13:46:20.183487 prospector.go:153: INFO Prospector ticker stopped
2017/02/10 13:46:20.184487 crawler.go:56: DBG Prospector 0 stopped
2017/02/10 13:46:20.184487 crawler.go:82: INFO Crawler stopped
2017/02/10 13:46:20.184487 spooler.go:101: INFO Stopping spooler
2017/02/10 13:46:20.184487 spooler.go:109: DBG Spooler has stopped
2017/02/10 13:46:20.184487 sync.go:47: DBG Shutting down sync publisher
2017/02/10 13:46:20.184487 registrar.go:291: INFO Stopping Registrar
2017/02/10 13:46:20.184487 registrar.go:248: INFO Ending Registrar
2017/02/10 13:46:20.184487 registrar.go:298: DBG Write registry file: C:\filebeat-5.2.0-windows-x86_64\data\registry
2017/02/10 13:46:20.185488 registrar.go:323: DBG Registry file updated. 0 states written.
2017/02/10 13:46:20.185488 logp.go:245: INFO Total non-zero values: registrar.writes=1
2017/02/10 13:46:20.185488 logp.go:246: INFO Uptime: 3.1346887s
2017/02/10 13:46:20.185488 beat.go:211: INFO filebeat stopped.
PS C:\filebeat-5.2.0-windows-x86_64>

Looks like I have found the issue.
Filebeat can only read an absolute path, not a relative one with * symbols.
And as it turned out, wildcards do not match directories on Windows.
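
For illustration, the last path segment has to match files, so a prospector entry along these lines works (a sketch; the *.json suffix stands in for the real blob file names):

filebeat.prospectors:
- input_type: log
  paths:
    # Ends in a file pattern rather than a bare directory
    - G:\azurelogs\name=default\resourceId\y=2017\m=02\d=07\h=14\m=00\*.json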

I have added the path to the file manually and got an error while processing it:

2017/02/10 14:05:29.170341 client.go:432: WARN Can not index event (status=404): {"type":"type_missing_exception","reason":"type[log] missing","index_uuid":"F4-Ca_YEQDGkjPZOUFdfnQ","index":"filebeat-2017.02.10","caused_by":{"type":"illegal_state_exception","reason":"trying to auto create mapping, dynamic mapping is disabled"}}

Hello, do you have any solution for this?

It seems something got messed up with your index while you were playing around with the different options. I would recommend dropping the index on the Elasticsearch side and restarting Filebeat.
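
Something like this from PowerShell will drop it (assuming Elasticsearch on localhost as in your logs, and the index name from the error message):

PS> Invoke-RestMethod -Method Delete -Uri "http://localhost:9200/filebeat-2017.02.10"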

But Elasticsearch creates these indices automatically, and as far as I know there are only indices for the ELK stack itself, not for Filebeat or Logstash.

What does your ES config look like? The reason I ask is that the error says dynamic mapping is disabled.
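
You can check whether dynamic mapping is switched off for that index by reading its settings via the standard settings API (the index name is the one from your error message):

PS> Invoke-RestMethod -Uri "http://localhost:9200/filebeat-2017.02.10/_settings"

If something like index.mapper.dynamic: false shows up there, or in an index template, that would explain the error.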

Logstash config:

input {
  azureblob {
    storage_account_name => "someaccount"
    storage_access_key => "somekey"
    container => "somecontainer"
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_system"
    password => "somepassword"
  }
}

ES config:

cluster.name: ELK

node.name: node3

network.host: ["_local_", "_site_"]

discovery.zen.ping.unicast.hosts: ["someip", "someanotherip"]

path.data: ["e:/data", "f:/data"]

You posted your LS config above. Do you want to send the events through LS or directly to ES?

Is the above the full ES config?

Hello,
I have posted both config files.
The first one is Logstash, the second is Elasticsearch.

A short update.
I have removed X-Pack and it seems Logstash is now working.
But the question is how to set up (create properly) the indices, because right now the data is displayed as a mess.
http://rgho.st/6Lb7NjMsK (JSON parsing failure)

Are you still using Filebeat, or only Logstash now? If only Logstash, I suggest asking the question in the Logstash forum.
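
If you do stay with Logstash: when the blob lines arrive as raw JSON text in the message field, an explicit json filter usually fixes exactly this kind of parsing mess. A minimal sketch, assuming one JSON object per event:

filter {
  json {
    # Parse the JSON text held in "message" into fields on the event
    source => "message"
    # Drop the raw text once it has been parsed
    remove_field => ["message"]
  }
}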

Logstash only.
Thanks.

Looks like the Logstash forum has no answer to this question either. What a shame.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.