Hi, I just ran filebeat with the typical filebeat start command from within the filebeat directory.
But when I open Kibana with my configured yml, I don't see anything from it; instead I see my previous default filebeat output in Kibana, which doesn't make a lot of sense.
I tried to format your config, but it seems to have no indentation at all?
Not sure what you mean by your last question. Do you mean just printing the output for debugging? Then you can use the -e -d "*" flags, and all output is printed to stdout.
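To make that concrete, a sketch of the full invocation (the config file name is an assumption; adjust it to your setup):

```shell
# -e  log to stderr instead of the configured log output
# -d "*"  enable debug output for all selectors
# -c  point at your config file (filebeat.yml assumed here)
filebeat -e -d "*" -c filebeat.yml
```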
Ahh okay, I'll use that command more frequently, as I did not know it existed.
And yeah, I will update you within a couple of hours: I will just re-download the .yml config from git, copy and paste the settings over, try to keep the indentation consistent, and see if that was the main culprit of my problems. Many thanks =]
Hmm, I just reused the default configuration file and it seems to be stuck after the start command. I also ran it with -e -d "*" and got these results:
Based on the above output, it looks like your log file did not get any updates during the 2 minutes covered by the output you posted. Were there any updates to the logs in this time?
Please don't use screenshots; paste the code itself, which is much easier to read.
In terms of updated logs, there weren't any, but I may have to reset the path, as it looks like filebeat may have been looking in a place that did not have the files it should have been watching. I will change the path to another directory, run it again, and see what happens.
Hi, I just adjusted the path for the logs, and it seems I may have to change the duration after which filebeat starts to ignore a log. Where in the yml can I do this? It seems it defaulted to 24 hours, as the output says:
INFO set ignore-older duration to 24h0m0s
and my file is currently at 526h11m17s.
Be aware that the behaviour changed between 1.1 and 1.2. By default in 1.2, ignore_older is set to infinity, partly to prevent cases like the one you have above. I strongly recommend you update to filebeat 1.2.2.
The ignore_older setting would also explain why the files were not shipped to Elasticsearch.
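As a minimal sketch of where the setting lives in a 1.x filebeat.yml (ignore_older is a per-prospector option; the path and value here are assumptions for illustration):

```yaml
filebeat:
  prospectors:
    - paths:
        - "C:/logs/*.log"   # hypothetical path, use your own
      # Raise the cutoff above the age of your file (526h in your case),
      # or upgrade to 1.2.x where the default is infinity.
      ignore_older: 1000h
```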
Oh okay, and thank you for that link. I think I got my beat to work, as it read the logs =].
So in order to get the messages indexed, would I then have to change the JSON format? Would upgrading to 1.2.2 change anything else, and is it compatible with my ELK stack?
I mean the filebeat.template.json, as my current filebeat is storing everything under properties -> message. What I will attempt now is to add more fields to message, such as program version, and I was wondering whether that is the right path to take.
Hey ruflin, I was also wondering, as I did change some of the message inputs: does filebeat basically parse the entire text document depending on what I put into quotations on the left side of the argument?
Such as---> "3DENGINE": "string"
Filebeat does not process the log messages. It just takes them line by line and forwards them to Logstash or Elasticsearch. If you need log line processing and extraction, that is what Logstash is for.
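For example, extraction like the one you describe would be done with a grok filter on the Logstash side. A sketch, assuming log lines shaped like the timestamped samples below (the field names `level` and `msg` are my own choices):

```
filter {
  grok {
    # Hypothetical pattern for lines such as:
    # "Fri Dec 04 10:51:24 EST 2015: ERROR something failed"
    match => {
      "message" => "%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}: %{LOGLEVEL:level} %{GREEDYDATA:msg}"
    }
  }
}
```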
The filebeat template has no effect on what filebeat itself does. It is for Elasticsearch to know the types of the fields.
The particular include is within filebeat.yml, and it was:
include_lines: ["^ERR", "^WARN"]
And the paths were the same as the one stated earlier, which was just the typical -C:\pwd*.log
As for my question, I could have phrased it better: within include_lines, is there an option that would automatically read past all the basic date and time information, like these:
Fri Dec 04 10:51:24 EST 2015:
Mon Dec 07 12:16:37 EST 2015:
May 01 17:15:16 EDT 2016:
And include_lines would read past the three dates, look for a word such as ERROR right afterwards, and push it out to Kibana or Logstash.
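One way to sketch this: include_lines takes regular expressions matched against the whole line, so the pattern can skip over the timestamp prefix before requiring ERROR. A hedged example (the date regex is my approximation of the samples above, where the weekday is sometimes missing, and the path is hypothetical):

```yaml
filebeat:
  prospectors:
    - paths:
        - "C:/logs/*.log"   # hypothetical path, use your own
      # Match an optional weekday, then "Mon DD HH:MM:SS TZ YYYY:",
      # then require ERROR immediately after the timestamp.
      include_lines: ['^([A-Z][a-z]{2} )?[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2} [A-Z]{3} \d{4}: ERROR']
```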