Elasticsearch creates index with wrong field types

Hi all!
I want to collect my HAProxy logs and visualize them in Kibana, but I have a problem.
I made this config for Logstash:
input {
  # Tail the live log and the most recent rotated file
  file {
    path => "/var/log/haproxy.log"
    start_position => "end"
    stat_interval => 1
    discover_interval => 30
  }
  file {
    path => "/var/log/haproxy.log.1"
    start_position => "end"
    stat_interval => 1
    discover_interval => 30
  }
}

filter {
  # Parse each line with the stock HAProxy HTTP-mode pattern
  grok {
    patterns_dir => "/etc/logstash/conf.d/patterns"
    match => { "message" => "%{HAPROXYHTTP}" }
  }
}

output {
  elasticsearch {
    hosts => ["10.0.20.10:9200"]
    index => "haproxy-%{+YYYY.MM.dd}"
  }
}

I used the standard HAProxy pattern, which specifies data types, but in Kibana I see that all fields have the string type. What is my mistake? Why doesn't Elasticsearch receive the correct field types?
Thanks, and sorry for my poor English.

Logstash doesn't set the field types; Elasticsearch mappings do. Logstash provides a mapping template, but it only applies to indices whose names start with logstash-. Since your indices are named haproxy-*, you will have to create your own mapping template for your data. You can base it on the template Logstash provides.
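
As a rough sketch only: the template below assumes Elasticsearch 5.x, where the legacy _template API uses the "template" key and a "_default_" mapping type (on 6.x+ you would use "index_patterns" and the type layer changes), and the field names such as client_ip, http_status_code and bytes_read are the ones the stock HAPROXYHTTP pattern emits, so check them against your own events. Save it somewhere Logstash can read, e.g. /etc/logstash/templates/haproxy.json (a path I picked for this example):

{
  "template": "haproxy-*",
  "mappings": {
    "_default_": {
      "properties": {
        "client_ip":        { "type": "ip" },
        "http_status_code": { "type": "integer" },
        "bytes_read":       { "type": "long" },
        "time_request":     { "type": "integer" },
        "time_duration":    { "type": "integer" }
      }
    }
  }
}

Then point the elasticsearch output at that file so Logstash installs the template for you; template, template_name and template_overwrite are standard options of the elasticsearch output plugin:

output {
  elasticsearch {
    hosts => ["10.0.20.10:9200"]
    index => "haproxy-%{+YYYY.MM.dd}"
    template => "/etc/logstash/templates/haproxy.json"
    template_name => "haproxy"
    template_overwrite => true
  }
}

Note that a template only takes effect for indices created after it is installed, so you may need to delete (or roll over past) today's haproxy-* index before the new field types show up in Kibana.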
