Hi,
I am using the current versions of Logstash, Elasticsearch, and Filebeat.
In my Filebeat configuration I create some custom fields:
fields:
  type: wildfly
  product: lisa
  enviroment: test
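With Filebeat's default behavior (custom fields are grouped under a "fields" key rather than placed at the top level), the event arriving in Logstash should look roughly like this; a sketch, with the message content and other metadata omitted:
{
  "message" => "...",
  "fields" => {
    "type" => "wildfly",
    "product" => "lisa",
    "enviroment" => "test"
  }
}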
In Logstash I want to base the Elasticsearch index name on the field "product":
output {
  elasticsearch {
    hosts => ["tint-as57:9200","tint-as58:9200"]
    index => "%{[fields][product]}-%{+YYYY.MM.dd}"
  }
}
Unfortunately the index gets created as:
%{[fields][product]}-2015.12.04
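When Logstash cannot resolve a sprintf reference such as %{[fields][product]}, it leaves the literal text in place, which is exactly the index name shown above. One way to see what an event really contains is a temporary debug output; a minimal sketch:
output {
  # Temporarily print every event so you can check whether
  # [fields][product] exists, or whether the custom fields
  # arrived somewhere else (e.g. at the top level).
  stdout { codec => rubydebug }
}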
I am also trying to filter with the json filter based on fields.type:
filter {
  if [fields][type] == "wildfly" {
    json {
      source => "message"
    }
  }
}
But this also doesn't work. Apparently I have a problem addressing the fields in the right manner...
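For reference, the json filter parses the JSON string in the configured source field into regular event fields; a minimal sketch with a hypothetical log line:
filter {
  json {
    source => "message"
    # message: '{"level":"INFO","msg":"server started"}'
    # result: the event gains top-level fields "level" and "msg"
  }
}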
Won't type, product, and enviroment end up as top-level fields rather than nested under fields? The resulting index name indicates that you don't actually have a field with that name.
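For what it's worth, where Filebeat puts custom fields is controlled by the fields_under_root option; a minimal sketch of the prospector section, with a placeholder path:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/app.log   # placeholder
      fields:
        product: lisa
      # false (the default): fields are grouped under "fields",
      # addressed in Logstash as [fields][product].
      # true: fields are copied to the top level, addressed as [product].
      fields_under_root: false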
I have now deleted all the indexes that were created during my tests, and also the Kibana index.
Now it works; perhaps it was an issue with caching...
Hi Alexander,
Can you please share your Logstash configuration? I am trying to create indexes in the same way but have not been successful.
Br,
Sunil
Hi,
This is my Logstash config:
input {
  beats {
    port => 5001
  }
}
filter {
  if [fields][type] == "wildfly" {
    json {
      source => "message"
    }
  }
}
output {
  elasticsearch {
    hosts => ["xxx:9200"]
    index => "%{[fields][product]}-%{+YYYY.MM.dd}"
  }
  # stdout { codec => rubydebug }
}
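As a side note, the pipeline configuration can be syntax-checked before restarting Logstash; a minimal sketch, assuming Logstash 2.x and a placeholder config path:
bin/logstash --configtest -f /etc/logstash/conf.d/beats.conf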
And this is the Filebeat config:
filebeat:
  prospectors:
    -
      paths:
        - F:\wildfly-8.2.0.Final\WildFly-HOST\servers\node1\log\logstash.log
        - F:\wildfly-8.2.0.Final\WildFly-HOST\servers\node2\log\logstash.log
      encoding: utf-8
      input_type: log
      fields:
        type: wildfly
        product: lisa
        enviroment: test
      ignore_older: 120h
      document_type: json
      scan_frequency: 10s
      force_close_files: true
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  logstash:
    hosts: ["xxx.net:5001"]
logging:
  to_files: true
  files:
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  level: debug
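The Filebeat config can be checked the same way before starting; assuming Filebeat 1.x:
filebeat -c filebeat.yml -configtest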