How to get rid of a wrongly named index that must be lowercase

The wrong index name esm_DMZ_results was put into the Logstash config file /opt/logstash/first-pipeline.conf . After that, the Elasticsearch log /data/elastic/logs/elastic_concept.log started showing "Invalid index name [esm_DMZ_results], must be lowercase". I renamed the index to lowercase,
cleared all caches: curl -XPOST 'http://wsp02051056wss.nam.nsroot.net:9200/_cache/clear'
checked the renamed index: curl -XGET 'http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/'
verified that the old index does not exist: curl -XGET 'http://wsp02051056wss.nam.nsroot.net:9200/esm_DMZ_results/'
checked the aliases: curl http://wsp02051056wss.nam.nsroot.net:9200/_aliases
and listed all the indexes I have: curl 'wsp02051056wss.nam.nsroot.net:9200/_cat/indices?v'

The old index is not there, but the same error message keeps appearing in the log.

What am I missing? How do I get rid of the old index name?

Could this error be the reason why I do not see anything in Kibana?
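
Side note on the checks above: an index name pattern can also be set in an index template, so when an old name seems to linger it can be worth listing the templates as well. A read-only check along these lines, against the same host, would be:
curl -XGET 'http://wsp02051056wss.nam.nsroot.net:9200/_template?pretty'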

What does your Logstash config look like? In particular the elasticsearch output.

Here is the complete config. For testing I also have a stdout output there.

A separate issue is that even when I have both "start_position => beginning" and "ignore_older => 0" in the input clause, only new entries in the file are processed, not the whole file from the beginning (see the note after the config below).

/opt/logstash/first-pipeline.conf
input {
  file {
    path => "/data/elastic/data/DMZ_events.csv"
    start_position => beginning
  }
}
filter {
  csv {
    columns => [
      "Region",
      "Starttime",
      "Finishtime",
      "PolicyName",
      "DomainName",
      "Agentname",
      "Managername",
      "Title",
      "Namevalue",
      "information",
      "Contact_Group",
      "Primary_contact",
      "Primary_Contact_SOEID",
      "Secondary_Contact",
      "Secondary_Contact_SOEID",
      "OS_Code",
      "Tier"
    ]
    separator => ","
    remove_field => ["DomainName"]
  }
}
output {
  elasticsearch {
    hosts => ["wsp02051056wss.nam.nsroot.net:9200"]
    action => "index"
    index => "esm_dmz_results"
  }
  stdout { }
}
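
Note on the start_position question above: Logstash's file input remembers how far it has read each file in a sincedb file, so "start_position => beginning" only applies to files it has never seen before. A common way to force a full re-read while testing, assuming the standard file input plugin, is to point sincedb at a throwaway path and restart Logstash:

input {
  file {
    path => "/data/elastic/data/DMZ_events.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

This is a testing sketch only; in normal operation the sincedb file should be left where it is so events are not reprocessed on every restart.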

Is it possible you have a Logstash instance still running with the old config?
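
A quick way to check that, assuming a Linux host, is to list the running processes and see which config file each Logstash instance was started with, for example:

ps aux | grep -i logstash

and then stop or restart any instance that still points at the old config.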

Also, moved to Logstash.

No, no, Glen. I was restarting the processes. I did not understand your last suggestion ("moved to Logstash").

Hehe. I was only remarking that I moved the thread to the "Logstash" category, because it seems more relevant to your issue.

What response do you get if you post a document to the index directly?

POST http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/test_data
{
    "Region": "Foo"
}

curl -XPOST 'http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/test_data { "Region": "Foo" }'

curl: (52) Empty reply from server

Note that I have already found how to retrieve this index in the Kibana web interface. This cancels my question “Could this error be the reason why I do not see anything in Kibana?”
I would still appreciate advice on how to get rid of the errors in the log related to the incorrect index name “esm_DMZ_results”.

Thank you.

Ivan

If you can sort out why you can't curl the request, and then report how Elasticsearch responds to the request, that will be a step in the right direction.

This query, for example:
curl -XGET 'http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/'
or this one:
curl -XPOST 'http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/_search?pretty' -d '{ "query": {"match_all": {} } }'
both give valid results. I am a newbie and not sure of the syntax, but it looks like your test statement should be modified.

Thank you.

Ivan

Sorry, I provided that in Sense format.

When converting it to a curl command, you shouldn't include the body inside the quotes with the URL; you should pass it separately as the request body, as you did with the match_all query in your comment.
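
For reference, a corrected version of the earlier command might look like this (same host and index as above):

curl -XPOST 'http://wsp02051056wss.nam.nsroot.net:9200/esm_dmz_results/test_data' -d '{ "Region": "Foo" }'

The URL stays inside its own quotes and the JSON document is passed separately with -d. (On newer Elasticsearch versions you would also need -H 'Content-Type: application/json'.)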

I hope that helps!