I assume you have a log on the Filebeat (FB) side which ships data to Logstash (LS). By default, all data that arrives in LS lands in the `message` field as plain text. Inside LS you then convert it to JSON fields, as configured in your pipeline.
I also assume you want to write different logs to different indices. You can set "fields.name" in FB and read it in LS. For instance, set fields.log: access and fields.log: cloud in filebeat.yml.
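A minimal sketch of how those custom fields might look in filebeat.yml (the paths here are hypothetical placeholders, not from your setup):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache/access.log    # hypothetical path to the text log
    fields:
      log: access
  - type: log
    paths:
      - /var/log/cloud/app.json       # hypothetical path to the JSON log
    fields:
      log: cloud

output.logstash:
  hosts: ["localhost:5044"]
```

By default Filebeat nests these custom fields under `fields`, so in Logstash you test them as [fields][log]; with fields_under_root: true they would appear at the top level instead.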
Your pipeline would be:
input {
  beats {
    port => "5044"
  }
}
filter {
  if [fields][log] == "access" {
    mutate {
      remove_field => ["message"]
    }
  }
  else if [fields][log] == "cloud" {
    json {
      source => "message"   # the json filter needs a source field
    }
    mutate {
      # NOTE: don't remove "fields" here; the output conditional
      # below still needs [fields][log] for routing.
      remove_field => ["ClientIP", "ResponseBytes"]
    }
  }
}
output {
  if [fields][log] == "access" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "access-%{+YYYY.MM.dd}"
    }
  }
  else if [fields][log] == "cloud" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "cloud-%{+YYYY.MM.dd}"
    }
  }
}
Another option is to check how log lines start, or whether they contain a keyword. With a regex you can check whether a line starts with a date in yyyy-MM-dd format: if [message] =~ /^\d{4}-\d{2}-\d{2}/ { ...
This is useful for other streams such as syslog.
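A sketch of how that check could be used to route lines without a fields.log value (the tag names are illustrative, not a convention):

```
filter {
  # Lines beginning with a yyyy-MM-dd date are treated as application logs;
  # everything else is tagged for separate handling.
  if [message] =~ /^\d{4}-\d{2}-\d{2}/ {
    mutate { add_tag => ["app_log"] }
  } else {
    mutate { add_tag => ["other_stream"] }
  }
}
```

You can then branch on the tags in the output section the same way the example above branches on [fields][log].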
Do whatever is easier for you.
Thank you, Rios. I have two types of input on the Filebeat side, text & JSON. I will push both of them to Logstash on port 5044. Can I do that?
Here is my input part of the filebeat.yml: