Logstash index error: [logstash-*] IndexNotFoundException[no such index]

Hello,
I am new to ELK.
I am using:
- elasticsearch-2.1.0
- logstash-2.1.1
- kibana-4.3.0-windows
I tried to configure ELK to monitor my application logs. I followed different tutorials and tried different Logstash configurations, but I keep getting this error:
[logstash-*] IndexNotFoundException[no such index]

This is my logstash config:

input {
  file {
    path => "/var/logs/*.log"
    type => "syslog"
  }
}
filter {
  grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}
output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}

I am looking for a very basic configuration, just to start working with.
Could someone give me a clue?
Any tips are appreciated.

Regards
Carmelo

Where is that error occurring?

I am getting this error when I start Kibana and it sends the request to Elasticsearch.

I tried to create several indices for Logstash, but none of them worked; the correct index was never sent to Elasticsearch.

I am using logback and this is the pattern I use to write my logs:
%d{dd-MM-yy kk:mm:ss.SSS} %X{UUID} %X{userId} %-5level %logger - %msg%n
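
I guess the filter will eventually need a grok pattern matching that layout instead of COMBINEDAPACHELOG; a rough sketch, with purely illustrative field names (uuid, userid, level, logger, msg), might be:

filter {
  grok {
    # assumes the %X{UUID} and %X{userId} MDC values are always present;
    # empty MDC values would break this match
    match => [ "message", "%{DATESTAMP:timestamp} %{NOTSPACE:uuid} %{NOTSPACE:userid} %{LOGLEVEL:level}\s+%{JAVACLASS:logger} - %{GREEDYDATA:msg}" ]
  }
}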

Any help is appreciated.

Regards
Carmelo

What does the output from stdout show?
Is there data in ES? Check with _cat/indices.
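For example, from the command line (the ?v flag adds column headers):

curl 'http://localhost:9200/_cat/indices?v'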

From http://localhost:9200/_cat/indices, this is the result:

yellow open .kibana 1 1 1 0 3.1kb 3.1kb

So there is no LS data in ES and you need to sort out your LS output and make sure that works.

From what I can see you do have problems with it; I'd suggest you double-check the correct syntax here - https://www.elastic.co/guide/en/logstash/2.1/plugins-outputs-elasticsearch.html
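
For example, hosts takes an array of quoted strings, so the output should look something like this (adjust the host/port for your setup):

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}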

Thank you,
I will do it.

I tried to use this plugin:
http://localhost:9200/_plugin/head/

but I do not know how I can create a new index and connect it to my log files.

Hello,
I deleted every folder and reinstalled:

  • elasticsearch-2.1.1
  • logstash-2.1.1
  • kibana-4.3.0-windows

I tried to follow this tutorial step by step:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html

Still I didn't receive any kind of index, and I got the index error from Kibana to Elasticsearch again.

Any help?

Regards
Carmelo

Reinstalling won't fix it.

Does data actually exist in /var/logs/*.log? Have you tried using stdin in the input and then manually entering data?

Yes, I tried this:
logstash -e 'input { stdin { } } output { stdout {} }'
and it is working fine.

And I added this to my output:
output { elasticsearch { hosts => ["localhost:9200"] } stdout { codec => rubydebug } }
But it still doesn't work, because here:
http://localhost:9200/_cat/indices
I have only this:
yellow open .kibana 1 1 1 0 3.1kb 3.1kb

I tried the same steps in Ubuntu and it worked immediately.

Thank you for your time

Correction:
I tried the same steps in Ubuntu and it was working.
Then I deleted the index in Elasticsearch with:
curl -XDELETE http://localhost:9200/logstash-2015.12.30/
and tried to recreate it with a different config file, but Logstash didn't send the new index to Elasticsearch.

Does anyone know why?

Hi,
I am in the same situation and having the same problem on Windows.
I followed the instructions and they don't work.
Logstash is not creating the index in Elasticsearch.
Why?

I don't know why.

To make Logstash read and process your input every time you run it, set the "sincedb_path" option to /dev/null (cit.)

but I found this solution:

input {
  file {
    path => "/path/to/logstash-tutorial.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

and it is working.

Thank you.
I did
sincedb_path => "/dev/null"
and Logstash created the index in Elasticsearch.

However, Logstash keeps reading the file and sending it, as if in a loop.
I made the input file with one line.
Now I have a thousand identical lines (hits) in Elasticsearch.
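
I suppose I can clean this up by deleting the duplicated index before re-running, e.g. (assuming the default daily logstash-* index names):

curl -XDELETE 'http://localhost:9200/logstash-*'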

I tried only this example
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
with their logs, and it is working fine.

Now I am starting to "play" with different logs.
I'll let you know.

Yes,
I am trying this example.
I used the one line they provided.
When I set
sincedb_path => "/dev/null"
Logstash kept on sending the content again and again,
because in Windows there is no /dev/null.

I tried
sincedb_path => "nul"
and it works so far.
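
So on Windows the whole input block looks roughly like this (the path is just an example; forward slashes work in Logstash paths on Windows):

input {
  file {
    path => "C:/path/to/logstash-tutorial.log"   # example path
    start_position => "beginning"
    sincedb_path => "nul"                        # Windows equivalent of /dev/null
  }
}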

You both need to understand that sincedb keeps track of how far LS has processed in any file it reads; by setting it to /dev/null you are saying that you don't want to track that progress.
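
If you do want the progress tracked across runs, point sincedb_path at a real writable file instead, for example (the path here is hypothetical):

input {
  file {
    path => "C:/logs/*.log"                 # hypothetical path
    sincedb_path => "C:/logstash/sincedb"   # any writable file will do
  }
}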

Hi @warkolm,
Please let me share what I infer from you:
when sincedb is set to nul, every time Logstash runs it will start reading from the beginning of the file, which will cause duplicate entries.
Right?

The reason I am touching sincedb is that I could not get Logstash to create an index in Elasticsearch.
Any suggestions for solving this problem?

If you set it to /dev/null it will.