Cannot Index Log Files into Elasticsearch using Logstash

Dear ELK/Grok Experts,

First of all, I'm new to ELK. I'm reading and processing three log files together from a single Logstash conf file (called "textlogs.conf", see below), and using custom Grok patterns I'm trying to index those log files into Elasticsearch so that I can visualize them in Kibana. The versions I'm using are elasticsearch-1.6.0, logstash-1.5.2, and kibana-4.1.1-windows. Somehow I can't get Logstash to connect to Elasticsearch: the conf file doesn't index anything. I have checked each of my custom Grok patterns on the grokdebug.herokuapp.com website and they work perfectly, so there's no issue with the custom Grok patterns.

However, I was using logstash-1.5.1 until last week, when I upgraded to logstash-1.5.2, and I've been having this problem ever since. Out of frustration, I went so far as to delete all my previous indices (including .kibana), delete all the ELK folders, re-download and unzip ELK, reboot my computer, and start from scratch. It still shows no index except .kibana.

I wonder whether there's a bug in the conf file (despite the cleanly defined Grok patterns), or whether it has to do with cluster/upgrade errors. I would appreciate any solutions or thoughts on this. Please see below for the conf file and screenshots of all my runs.

Thank you so much!

Regards,
Ahmad

file: "textlogs.conf"

input {
    file {
        type => "total_messages_per_server"
        path => "C:\Users\ahmadmar\Documents\ELK\VM_Work\Email_Dashboard\text-logs\report_1_total_messages_per_server.log"
    }
    file {
        type => "total_messages_per_sender_address"
        path => "C:\Users\ahmadmar\Documents\ELK\VM_Work\Email_Dashboard\text-logs\report_9_total_messages_per_sender_address_top10.log"
    }
    file {
        type => "distribution_sent_emails_general"
        path => "C:\Users\ahmadmar\Documents\ELK\VM_Work\Email_Dashboard\text-logs\report_distribution_sent_emails_general.log"
    }
}
filter {
    if [type] == "total_messages_per_server" {
        grok {
            match => { "message" => "%{DATA:server}\t%{NUMBER:total_messages_server}\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
        }
    }
    if [type] == "total_messages_per_sender_address" {
        grok {
            match => { "message" => "(?<sender_address>[a-zA-Z0-9_.+-=:]+@[a-zA-Z0-9_.+-=:]+)\t%{NUMBER:total_messages}\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
        }
    }
    if [type] == "distribution_sent_emails_general" {
        grok {
            match => { "message" => "(?<category>%{WORD}|%{NUMBER})\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
        }
    }
}
output {
    if ([type] == "total_messages_per_server" or [type] == "total_messages_per_sender_address" or [type] == "distribution_sent_emails_general") {
        elasticsearch {
            host => "localhost"
            index => "testlogs"
        }
        stdout {
            codec => rubydebug
        }
    }
}


----------

[Screenshots of the Elasticsearch and Logstash runs]

I'm using Sense (Google Chrome Extension for Elasticsearch) for my curl requests. Here are the screenshots of the curl results:
[Screenshots of the curl results from Sense]

I would check for the existence of $HOME/.sincedb. Logstash keeps track of the current position in each log file. If it's there, I'd delete it, restart Logstash, and see if that fixes it.

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-sincedb_path
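
For testing you can also pin the sincedb to a known location and force reading from the top of the file. A minimal sketch of one of your inputs, using the file input's start_position and sincedb_path options (the sincedb path here is just a hypothetical example, and forward slashes tend to be safer than backslashes for the file input on Windows):

```
input {
    file {
        type => "total_messages_per_server"
        path => "C:/Users/ahmadmar/Documents/ELK/VM_Work/Email_Dashboard/text-logs/report_1_total_messages_per_server.log"
        # Read the file from the top instead of only tailing new lines
        start_position => "beginning"
        # Hypothetical path: keeps position tracking out of $HOME so it's easy to find and delete
        sincedb_path => "C:/temp/sincedb_total_messages"
    }
}
```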

If not, I would start Logstash with -d for debug output; you may see more information about how your log files are being processed.

It's much easier if you don't post screenshots of text; just paste the text itself and format it as code.

Hi Mike, I did check for $HOME/.sincedb, deleted all instances, and restarted Logstash and Elasticsearch as you suggested. Now I get a "connection timeout error". Here are the screenshots from running elasticsearch.bat and logstash.bat:


Any thoughts on what went wrong? By the way, I have checked my $PATH and $JAVA_HOME and they're correct. Also, the "-d" option doesn't seem to be recognized by Logstash; trying it yields the following error message:


Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
io/console not supported; tty will not be manipulated
Clamp::UsageError: Unrecognised option '-d'
  signal_usage_error at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:103
         find_option at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/option/parsing.rb:62
       parse_options at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/option/parsing.rb:28
               parse at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:52
                 run at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/runner.rb:80
                call at org/jruby/RubyProc.java:271
                 run at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/runner.rb:96
                call at org/jruby/RubyProc.java:271
          initialize at C:/Users/ahmadmar/logstash-1.5.2/vendor/bundle/jruby/1.9/gems/stud-0.0.20/lib/stud/task.rb:12

If you have something like Windows Firewall enabled, I suggest you disable it. It could be blocking port 9300.

Is it creating a new node whose name starts with logstash-, rather than creating an index with that name?

I have faced this issue too when I was using a newer version of Logstash. Here is what you can try:

  1. Try indexing only one log file, and instead of pushing the data to the Elasticsearch node with index testlogs, just print it on the console using stdout { codec => rubydebug } only (see the sketch below). Check whether you get output on the console.
     If that still doesn't give you any results, then the problem surely lies within your input or filter part.
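
A minimal sketch of what that stripped-down test could look like, using the first input and its grok filter from the textlogs.conf above:

```
input {
    file {
        type => "total_messages_per_server"
        path => "C:/Users/ahmadmar/Documents/ELK/VM_Work/Email_Dashboard/text-logs/report_1_total_messages_per_server.log"
    }
}
filter {
    grok {
        match => { "message" => "%{DATA:server}\t%{NUMBER:total_messages_server}\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
    }
}
output {
    # Console only -- add the elasticsearch output back once events show up here
    stdout { codec => rubydebug }
}
```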

Let me know if you are able to solve this issue.

Thanks,
Rohan

You probably need to define a cluster name in the output.
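
A sketch of what that could look like in the Logstash 1.5 elasticsearch output (my_cluster is a placeholder; it would have to match the cluster.name in your elasticsearch.yml). If I remember correctly, the default protocol in 1.5 is node, which joins the cluster over port 9300; switching to protocol => "http" talks to port 9200 instead and doesn't need the cluster name:

```
output {
    elasticsearch {
        host => "localhost"
        index => "testlogs"
        # Placeholder: must match cluster.name in elasticsearch.yml
        cluster => "my_cluster"
        # Alternative: skip the node protocol and its port-9300 requirement
        # protocol => "http"
    }
}
```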