Hi,
I changed the log level to debug, and this is what I get when I run Filebeat.
I don't understand why it's not sending the file contents to Elasticsearch.
2016-04-19T16:45:49+05:30 DBG Disable stderr logging
2016-04-19T16:45:49+05:30 DBG Initializing output plugins
2016-04-19T16:45:49+05:30 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-04-19T16:45:49+05:30 DBG ES Ping(url=http://localhost:9200, timeout=1m30s)
2016-04-19T16:45:50+05:30 DBG Ping status code: 200
2016-04-19T16:45:50+05:30 INFO Activated elasticsearch as output plugin.
2016-04-19T16:45:50+05:30 DBG Create output worker
2016-04-19T16:45:50+05:30 DBG No output is defined to store the topology. The server fields might not be filled.
2016-04-19T16:45:50+05:30 INFO Publisher name: hostName
2016-04-19T16:45:50+05:30 INFO Flush Interval set to: 1s
2016-04-19T16:45:50+05:30 INFO Max Bulk Size set to: 50
2016-04-19T16:45:50+05:30 DBG create bulk processing worker (interval=1s, bulk size=50)
2016-04-19T16:45:50+05:30 INFO Init Beat: filebeat; Version: 1.2.1
2016-04-19T16:45:50+05:30 INFO filebeat sucessfully setup. Start running.
2016-04-19T16:45:50+05:30 INFO Registry file set to: C:\ProgramData\filebeat\registry
2016-04-19T16:45:50+05:30 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2016-04-19T16:45:50+05:30 DBG Set idleTimeoutDuration to 5s
2016-04-19T16:45:50+05:30 DBG File Configs: [C:\var\log\test.log]
2016-04-19T16:45:50+05:30 INFO Set ignore_older duration to 0
2016-04-19T16:45:50+05:30 INFO Set close_older duration to 1h0m0s
2016-04-19T16:45:50+05:30 INFO Set scan_frequency duration to 10s
2016-04-19T16:45:50+05:30 INFO Input type set to: log
2016-04-19T16:45:50+05:30 INFO Set backoff duration to 1s
2016-04-19T16:45:50+05:30 INFO Set max_backoff duration to 10s
2016-04-19T16:45:50+05:30 INFO force_close_file is disabled
2016-04-19T16:45:50+05:30 DBG Waiting for 1 prospectors to initialise
2016-04-19T16:45:50+05:30 INFO Starting prospector of type: log
2016-04-19T16:45:50+05:30 DBG exclude_files: []
2016-04-19T16:45:50+05:30 DBG scan path C:\var\log\test.log
2016-04-19T16:45:50+05:30 DBG scan path C:\var\log\test.log
2016-04-19T16:45:50+05:30 DBG No pending prospectors. Finishing setup
2016-04-19T16:45:50+05:30 INFO All prospectors initialised with 0 states to persist
2016-04-19T16:45:50+05:30 INFO Starting Registrar
2016-04-19T16:45:50+05:30 INFO Start sending events to output
2016-04-19T16:45:50+05:30 DBG Windows is interactive: true
2016-04-19T16:45:50+05:30 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016-04-19T16:45:52+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2016-04-19T16:46:00+05:30 DBG Start next scan
2016-04-19T16:46:00+05:30 DBG scan path C:\var\log\test.log
2016-04-19T16:46:00+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2016-04-19T16:46:07+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
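For reference, the relevant part of the config is essentially the following (the path and the Elasticsearch host are the ones shown in the log; everything else is left at the Filebeat 1.2 defaults, so this is a rough sketch rather than the full file):

filebeat:
  prospectors:
    -
      # Single log prospector, matching "File Configs: [C:\var\log\test.log]" above
      paths:
        - C:\var\log\test.log
      input_type: log
output:
  elasticsearch:
    # Matches the "ES Ping(url=http://localhost:9200, ...)" line in the log
    hosts: ["localhost:9200"]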
Hi,
It was my mistake: the file name was wrong. Windows was hiding the file extension, which is how the mistake happened.
Anyway, Filebeat has started and is now sending logs to ES.
Now I have one question:
When I stop Filebeat and start it again, the harvester resumes from the same offset. That's a good feature, but I want to run some tests on the same log lines again and again. What should be done to reset the harvester position each time it starts?
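From the log it looks like the offsets are persisted in C:\ProgramData\filebeat\registry, so my guess is that something like the following would force a replay, but I'd like to confirm it's the right way:

filebeat:
  # Guess based on the "Registry file set to:" line in my log above:
  # Filebeat keeps the per-file offsets in this registry file, so deleting it
  # while Filebeat is stopped (e.g. del C:\ProgramData\filebeat\registry
  # before each test run) should make the harvester read the file from the
  # beginning on the next start.
  registry_file: "C:/ProgramData/filebeat/registry"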