Hi! We are installing and testing the ELK stack in a test environment at our company.
I am working on migrating rsyslog logs from Graylog to ELK.
However, even though some fields are mapped as long in Elasticsearch, when I go to Kibana Discover they are shown as string.
Here is my Logstash config:
input {
  tcp {
    port => 5141
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => '<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{NOTSPACE:servicio}\_%{NOTSPACE:apache_tipo_log}: %{GREEDYDATA:syslog_message}' }
    }
  }
  grok {
    match => { "syslog_message" => '%{IPORHOST:clientip} %{HTTPDUSER:apache_httpuser} %{USER:apache_user} \[%{HTTPDATE:timestamp_apache}\] "(?:%{WORD:HTTP_method} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:rawrequest})" %{NUMBER:response_code} (?:%{NUMBER:bytes:int}|-) %{NUMBER:response_time_sec:int}\/%{NUMBER:response_time_us:int}' }
  }
  mutate {
    convert => {
      "bytes" => "integer"
      "response_time_sec" => "integer"
      "response_time_us" => "integer"
    }
  }
}

#output {
#  stdout { codec => rubydebug }
#}

output {
  elasticsearch {
    hosts => [ "https://ELASTICSERVER:9200" ]
    user => "USER"
    password => "USERPASS"
    ssl => true
    cacert => "CERLOCATION"
    manage_template => false
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
At first the data was parsed without forcing any types, but I changed that as you can see (about 3 days ago).
I know that if an index already has a field mapped as one type and you start sending the data as another type, you have to wait until a new index is created, or delete the current index and let it be recreated. With that said, my config creates a new index each day.
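For reference, since the index rolls over daily, something like the field capabilities API should show whether any of the older daily indices still map these fields differently (a sketch, assuming the syslog-* pattern covers all of them; if a field comes back with more than one type, some index still has the old mapping):

# compare the mapped type of the numeric fields across every syslog-* index
GET /syslog-*/_field_caps?fields=bytes,response_time_sec,response_time_us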
So, even though the data is now being indexed as long, I still see it in Discover as a string type.
This is the JSON of the last created index, as seen in Kibana (config -> Elasticsearch -> Index Management -> Mappings):
"response_time_sec": {
"type": "long"
},
"response_time_us": {
"type": "long"
},
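To rule out the documents themselves, this is a sketch of how the raw indexed values could be inspected (it just pulls one document and only these fields from the indices matched by syslog-*; numbers sent by Logstash should appear without quotes in the _source):

# fetch a single document, returning only the numeric fields
GET /syslog-*/_search?size=1&_source=bytes,response_time_sec,response_time_us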
What am I doing wrong?