Logstash not populating .raw values

I have Filebeat set up to ship my nginx logs over to Logstash, and Logstash is updating my indices with the log data. I am using the logstash-* template (though my index is called logstash-appname-prod-web-access-logs-YYYY.MM.dd).

In Kibana, I can see that there are .raw fields being created, but they are not being populated with any values. When I go to build my visualization to show the distribution based on host_name, the aggregation fails because all host.raw values are null.

What am I missing?


What does the mapping of the host (or is it host_name?) property look like in a recent index?
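Something along these lines would show it (host, port, index name, and date are placeholders for your own values):

```
curl -XGET 'localhost:9200/logstash-appname-prod-web-access-logs-2016.01.01/_mapping'
```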

Do any documents return from that index if you perform an exists query on that index for the host or host_name field?
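For reference, an exists query along those lines might look like this (host, index name, and date are placeholders; swap host for host_name as needed):

```
curl -XGET 'localhost:9200/logstash-appname-prod-web-access-logs-2016.01.01/_search' -d '{
  "query": { "exists": { "field": "host" } },
  "size": 1
}'
```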


If you change the index name, the .raw field mapping from the default logstash template no longer applies. Try not setting the index value in the logstash.conf file, or, if you are keen on using a custom index name, create a new mapping/template in Elasticsearch that matches it.
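A sketch of such a template, modeled on what the stock logstash template does (the template name and index pattern are placeholders; this is the ES 2.x-era string/not_analyzed style, adjust for your version):

```
curl -XPUT 'localhost:9200/_template/logstash-appname' -d '{
  "template": "logstash-appname-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "analyzed",
              "fields": {
                "raw": {
                  "type": "string",
                  "index": "not_analyzed",
                  "ignore_above": 256
                }
              }
            }
          }
        }
      ]
    }
  }
}'
```

The key part is the dynamic template: every string field also gets a not_analyzed `.raw` sub-field, which is what Kibana aggregations expect.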


You didn't change the document_type in the output did you? If not, providing the full config might help.

But they're using logstash-appname-prod-web-access-logs-YYYY.MM.dd, which matches the pattern of logstash-*.

I don't think I changed it.

Logstash config (same configuration for all nginx log indices):

input {
  beats {
    port => 5052
  }
}

filter {
  grok {
    match => { "message" => '%{IPORHOST:lbip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:logdate}\] %{NUMBER:response} "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" (?:%{NUMBER:bytes}|-) (?:"(?:%{URI:referrer}|-)"|%{QS:referrer}) %{QS:agent} "%{GREEDYDATA:clientIPs}"' }
  }

  date {
    locale => "en"
    match => [ "logdate", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => "logdate"
  }

  useragent {
    source => "agent"
    target => "useragent"
    remove_field => "agent"
  }

  csv {
    # separate client IP from reverse proxy IP
    columns => [ "clientip", "rp_ip" ]
    source => "clientIPs"
    separator => ","
  }

  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["", ""]
    index => "logstash-rostr-prod-web-access-logs-%{+YYYY.MM.dd}"
  }
}

Here is the definition of my logstash template:

curl -XGET*

It is also not respecting the definition of the fields being parsed. Clearly, the Logstash grok filter shows that response and bytes are supposed to come across as NUMBERs, but the index configuration page within Kibana shows them as strings. In fact, everything is coming in as strings except for the geoip fields.

You need to mutate.convert them
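Grok's NUMBER only controls matching; captures are still strings unless you convert them. A minimal sketch, using the field names from the grok filter above:

```
filter {
  mutate {
    # cast the grok-captured strings to integers
    convert => {
      "response" => "integer"
      "bytes"    => "integer"
    }
  }
}
```

Alternatively, grok itself can coerce a capture with a type suffix, e.g. %{NUMBER:bytes:int}.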

That won't match anything.

Again, please provide a recent mapping of the field you are interested in from a recent index in Elasticsearch.