Template fields showing as "missing fields" in Kibana

Hi,
I am new to Elasticsearch and I am having some trouble accessing certain fields in Kibana (4.1.0).
(So my problem might be caused by an obvious mistake.)
I am using Logstash (1.5.0) to load logs into Elasticsearch (1.6.0), and I am using a template to do so. [template at the end of this post]

Since templates are applied to newly created indices, I would expect my logs to be indexed using it, and I think they are.
The problem is that in Kibana, even though all the fields appear under Settings, some of them are listed as "missing fields" in the Discover tab, so I cannot access them.
Could the problem be in my template? Am I doing something absurdly wrong?
I have searched online for similar issues but couldn't find anything, so I would really appreciate some help.
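
For reference, the mapping that actually ends up on one of the daily indices can be checked with something like this (the date in the index name is just an example; substitute one of your real daily indices):

curl -XGET 'http://localhost:9200/dcache-billing-2015.07.01/_mapping?pretty'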

template:
curl -XPUT 'http://localhost:9200/_template/billing' -d '
{
  "order": 0,
  "template": "dcache-billing-*",
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_fields": {
            "mapping": {
              "index": "analyzed",
              "omit_norms": true,
              "type": "string",
              "fields": {
                "raw": {
                  "index": "not_analyzed",
                  "ignore_above": 256,
                  "type": "string"
                }
              }
            },
            "match_mapping_type": "string",
            "match": "*"
          }
        }
      ],
      "properties": {
        "geoip": {
          "dynamic": true,
          "path": "full",
          "properties": {
            "location": {
              "type": "geo_point"
            }
          },
          "type": "object"
        },
        "@version": {
          "index": "not_analyzed",
          "type": "string"
        },
        "pool_name.raw": {
          "index": "not_analyzed",
          "type": "string"
        },
        "sunit.raw": {
          "index": "not_analyzed",
          "type": "string"
        }
      },
      "_all": {
        "enabled": true
      }
    }
  }
}
'
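
As a sanity check, the stored template can be fetched back to confirm it was registered as intended:

curl -XGET 'http://localhost:9200/_template/billing?pretty'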

logstash config file:
input {
  file {
    path => "/nethome/beatriz/Downloads/small/billing-*"
    #sincedb_path => "/var/tmp/sincedb-dcache"
    # uncomment next line if you want to import existing data
    start_position => "beginning"
    type => "dcache-billing"
  }
}

filter {

  if "RemoveFiles=" in [message] {

    # Because RemoveFiles= is the only (source needed) non-conforming event.

    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => [ "message", "%{REMOVE_ON_POOL}" ]
      named_captures_only => true
      tag_on_failure => [ "_parse_dcache_failure10" ]
    } # End of grok
    mutate {
      split => [ "pnfsids", "," ]
      add_tag => [ "dcache_billing_removed" ]
    } # End of mutate, to make a real list of the entries in pnfsids

  } else {

    grok {
      patterns_dir => "/etc/logstash/patterns"
      # try each dCache pattern in turn until one matches
      match => {
        "message" => [
          "%{TRANSFER_CLASSIC}",
          "%{STORE_CLASSIC}",
          "%{RESTORE_CLASSIC}",
          "%{REQUEST_CLASSIC}",
          "%{REQUEST_DCAP}",
          "%{REMOVE_CLASSIC}",
          "%{REMOVE_SRM}"
        ]
      }
      named_captures_only => true
      remove_field => [ "message" ]
      tag_on_failure => [ "_parse_dcache_failure00" ]
    }

  } # End of if/else

  date {
    match => [ "billing_time", "MM.dd HH:mm:ss" ]
    timezone => "CET"
    remove_field => [ "billing_time" ]
  }

  alter {
    condrewrite => [
      "is_write", "true", "write",
      "is_write", "false", "read"
    ]
  }
}

output {
  elasticsearch {
    host => "localhost"
    index => "dcache-billing-%{+YYYY.MM.dd}"
    template_name => "billing"
    protocol => "http"
  }
}
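
In case it is useful: I also syntax-check the config before starting Logstash (the paths below are just examples, adjust them to your installation):

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/dcache-billing.conf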


Thanks in advance,
Beatriz Mano

Have you changed the mapping since you added the indices into KB? If so, you may need to refresh the field information that it loads.

No, I have not. Should I change the mapping?

Sorry, I may have misunderstood you.

In KB4, under Settings, find the index pattern you set up, refresh the field list, and see if that helps.
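
If refreshing does not help, it may also be worth checking whether the fields are actually present in the indexed documents themselves, for example with something like:

curl -XGET 'http://localhost:9200/dcache-billing-*/_search?pretty&size=1'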

I have tried it, but it does not work. Do you have any idea what I could be doing wrong?

In what kind of file do you write this, and where is it located?