Hi, I have a Java application that uses Logback for its log configuration, and I want to parse the application's log files so they become more useful and then send them to ES. The log files are stored in a directory called logs, and this is the log entry format used with Logback:
I need to know the simplest way to do this (including the configuration file to use). I thought I could do it with Filebeat, but Filebeat cannot parse log entries.
You can use Filebeat to ship the data to Logstash where you can apply a grok filter to parse the log line. Then ship the data to Elasticsearch from Logstash. The Beats documentation has an example of how to configure Filebeat to send to Logstash.
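A minimal sketch of the Filebeat side of that pipeline, assuming Filebeat 1.x syntax and a Logstash instance listening on port 5044; the log path, the port, and the multiline pattern are assumptions you would adapt to your own layout (the multiline section folds Java stack-trace lines into the preceding event):

```yaml
# filebeat.yml -- Filebeat 1.x syntax; paths and host are placeholders
filebeat:
  prospectors:
    - paths:
        - /path/to/logs/*.log
      input_type: log
      # Lines that do not start with a date are continuation lines
      # (e.g. stack traces) and get appended to the previous event.
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after

output:
  logstash:
    hosts: ["localhost:5044"]
```

With this in place, Filebeat only ships raw lines; the actual parsing happens in the Logstash grok filter.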
Actually, I modified the existing configuration file /etc/filebeat/filebeat.yml and restarted the filebeat service. That is how to configure Filebeat, right?
* Restarting Sends log files to Logstash or directly to Elasticsearch. filebeat
2016/05/06 13:24:14.630402 beat.go:135: DBG Initializing output plugins
2016/05/06 13:24:14.630485 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/05/06 13:24:14.630603 client.go:265: DBG ES Ping(url=http://localhost:9200, timeout=1m30s)
2016/05/06 13:24:14.646024 client.go:274: DBG Ping status code: 200
2016/05/06 13:24:14.646119 outputs.go:119: INFO Activated elasticsearch as output plugin.
2016/05/06 13:24:14.646172 publish.go:232: DBG Create output worker
2016/05/06 13:24:14.646296 publish.go:274: DBG No output is defined to store the topology. The server fields might not be filled.
2016/05/06 13:24:14.646416 publish.go:288: INFO Publisher name: fathi-HP-Pavilion-g6-Notebook-PC
2016/05/06 13:24:14.647094 async.go:78: INFO Flush Interval set to: 1s
2016/05/06 13:24:14.647162 async.go:84: INFO Max Bulk Size set to: 50
2016/05/06 13:24:14.647213 async.go:92: DBG create bulk processing worker (interval=1s, bulk size=50)
2016/05/06 13:24:14.647400 beat.go:147: INFO Init Beat: filebeat; Version: 1.1.2
[ OK ]
Is there anything in your registry file at /var/lib/filebeat/registry? You should probably delete it before each test to ensure that Filebeat re-ships logs that it has already read.
I expect a lot more output in the log file if you are running with level: debug. Is the above output from the log?
I am a newbie to Filebeat and Elastic Stack.
I am also trying to do something similar to you, Jemli.
Where did you specify the grok format expression for parsing the Java log entries?
Likewise, don't we need to specify the transformation to JSON?!
I assume that we need something like:
output:
  elasticsearch:
    # Array of hosts to connect to.
    hosts: ["localhost:9200"]
    template.name: "filebeat"
    template.path: "filebeat.template.json"
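For what it's worth, the grok expression does not go in filebeat.yml at all; it lives in the Logstash pipeline, and Logstash already sends events to Elasticsearch as JSON, so no separate JSON transformation is needed. A sketch of such a pipeline, assuming the beats input on port 5044 and a hypothetical logback pattern along the lines of `%d{yyyy-MM-dd HH:mm:ss} %-5level %logger - %msg` (your real pattern will differ, and the grok expression must be adjusted to match it):

```
# logstash.conf -- sketch only; adapt the grok pattern to your logback layout
input {
  beats {
    port => 5044
  }
}

filter {
  # Split the raw line into structured fields.
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}\s+%{JAVACLASS:logger} - %{GREEDYDATA:msg}" }
  }
  # Use the timestamp from the log line as the event time.
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}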