Hello,
I am running the ELK-stack on a server and want to parse the Bluecoat Proxy logs.
I have created a grok filter and it works when I test it in the Grok Debugger (http://grokdebug.herokuapp.com/), but when I add the filter to logstash.json and run it, it doesn't create the fields in Kibana for me.
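Simplified, the filter is wired into the config like this (the pattern and field names below are just placeholders, not my actual Bluecoat pattern):

filter {
  grok {
    # placeholder pattern; the real one is the pattern that works in the Grok Debugger
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{IPORHOST:client_ip} %{WORD:action}" }
  }
}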
Hi,
No, doesn't seem like it. This was the config I had in filebeat.yml:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
I switched that around, so output.logstash is now active (and output.elasticsearch is not), but now new files don't arrive in Kibana. Do I need to configure any special input in the Logstash config?
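For reference, after the switch the output section looks roughly like this (simplified, with the default ports):

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]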
My Logstash config looks like this at the moment:
input {
  beats {
    port => 5044
    type => "log"
  }
}
But the index setting in the output, index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}", does seem to have been applied when I look at events in Kibana; _index has the value "filebeat-2017.08.31", for example.
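That setting sits in the output section, which roughly looks like this (the hosts value here is assumed to be the local default):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}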
Logstash has no idea whether the files Filebeat should read are new or old, so if old data arrives in Logstash and Elasticsearch as expected but new files aren't picked up, then you have a Filebeat problem.
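One thing worth double-checking on the Filebeat side is that the prospector paths actually match the new files. A minimal sketch (the path below is just an example, not your actual log location):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/bluecoat/*.log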
I solved this: I updated the Beat agent config and also uploaded the Beats template to Elasticsearch, which I had missed. I am new to this, but I'm trying to convince my workplace not to buy Splunk if we can cover our needs with the ELK stack.
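For anyone hitting the same thing, loading the template manually with a Filebeat 5.x package install looks roughly like this (paths and host are the package defaults; the exact command differs between Filebeat versions):

# push the bundled index template to Elasticsearch when the Logstash output is used
curl -XPUT 'http://localhost:9200/_template/filebeat' \
     -H 'Content-Type: application/json' \
     -d@/etc/filebeat/filebeat.template.json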
A lot of help in the community, and I appreciate that I get assistance when I get stuck.