Stopwords still appear after indexing data

Hi,

I'm new to dealing with stopwords, and even though I've read and tried what is described on the "Using Stopwords" page, I can't get it working.

I have a CSV file containing two columns: email_category and email_content. I'm indexing this file with Logstash and its csv filter. Before indexing it with Logstash, I created a mapping in Sense, which looks like this:

PUT /emails
{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_analyzer": {
                    "type": "dutch",
                    "stopwords": "_none_"
                }
            }
        }
    },
    "mappings": {
        "emails": {
            "properties": {
                "email_category": {
                    "type": "string",
                    "index": "not_analyzed"
                },
                "email_content": {
                    "type": "string",
                    "index": "analyzed",
                    "analyzer": "my_analyzer"
                }
            }
        }
    }
}
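
For reference, a request like the following should show which tokens my_analyzer actually produces for a piece of text (the Dutch sentence is just an arbitrary example):

GET /emails/_analyze
{
    "analyzer": "my_analyzer",
    "text": "dit is een voorbeeld van de inhoud van een email"
}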

Any idea why the content still contains Dutch stopwords? The analyzer is not removing anything.

Never mind. I did not realize that stopwords remain visible in "Discover" in Kibana, because it shows the original document source, but when you build a visualization in Kibana those words are not counted.
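
As far as I understand, a terms-based visualization in Kibana runs a terms aggregation on the analyzed field under the hood, so it only counts the tokens the analyzer kept, not the raw text shown in Discover. A rough sketch of such a request (the aggregation name top_terms is just a label I made up):

GET /emails/_search
{
    "size": 0,
    "aggs": {
        "top_terms": {
            "terms": {
                "field": "email_content",
                "size": 10
            }
        }
    }
}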