# Cannot Index Log Files into Elasticsearch using Logstash

Dear ELK/Grok Experts,

First of all, I'm new to ELK. I'm reading/processing three log files together from a single Logstash conf file (called "textlogs.conf", see below), and using custom Grok patterns I'm trying to index those log files into Elasticsearch so that I can visualize them in Kibana. The versions I'm using are elasticsearch-1.6.0, logstash-1.5.2, and kibana-4.1.1-windows.

Somehow I can't get Logstash to connect to Elasticsearch: running the conf file doesn't index anything. I have checked each of my custom Grok patterns on the grokdebug.herokuapp.com website and they work perfectly, so there's no issue with the custom Grok patterns themselves. I was using logstash-1.5.1 until last week, when I upgraded to logstash-1.5.2, and I've been having this problem ever since.

Out of frustration, I went so far as to delete all my instances (all previous indices, including .kibana), delete all the ELK folders, re-download and unzip ELK, reboot my computer, and start from scratch. It still shows no index except .kibana.

I wonder whether there's a bug in the conf file (despite the cleanly defined Grok patterns), or whether it has to do with cluster/upgrade errors. I would appreciate any solutions or thoughts on this. Please see below for the conf file and screenshots of all my runs:

Thank you so much!

Regards,

file: "textlogs.conf"

```
input {
  file {
    type => "total_messages_per_server"
  }
  file {
    type => "total_messages_per_sender_address"
  }
  file {
    type => "distribution_sent_emails_general"
  }
}

filter {
  if [type] == "total_messages_per_server" {
    grok {
      match => { "message" => "%{DATA:server}\t%{NUMBER:total_messages_server}\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
    }
  }
  if [type] == "total_messages_per_sender_address" {
    grok {
      match => { "message" => "(?<sender_address>[a-zA-Z0-9_.+-=:]+@[a-zA-Z0-9_.+-=:]+)\t%{NUMBER:total_messages}\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
    }
  }
  if [type] == "distribution_sent_emails_general" {
    grok {
      match => { "message" => "(?<value>%{WORD}|%{NUMBER})\t(?<date>%{YEAR}-%{MONTHNUM}-%{MONTHDAY})" }
    }
  }
}

output {
  if [type] == "total_messages_per_server" or [type] == "total_messages_per_sender_address" or [type] == "distribution_sent_emails_general" {
    elasticsearch {
      host => "localhost"
      index => "testlogs"
    }
    stdout {
      codec => rubydebug
    }
  }
}
```

----------

[screenshots 7.PNG, 8.PNG, 9.PNG: console output from the Elasticsearch and Logstash runs]

I'm using Sense (Google Chrome Extension for Elasticsearch) for my curl requests. Here are the screenshots of the curl results:

[screenshots 1.PNG through 6.PNG: curl results from Sense]


(Mike Simos) #2

I would check for the existence of $HOME/.sincedb. Logstash keeps track of its current position in each log file there. If it's there, I'd delete it, restart Logstash, and see if that fixes it: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-sincedb_path If not, I would start Logstash with -d for debug output; you may see more information about the processing of your log files.

(Mark Walkom) #4

It's much easier if you don't post screenshots of text; just paste the text itself and format it as code.

(Ahmad Maruf) #5

Hi Mike, I did check for $HOME/.sincedb, deleted all instances, and restarted Logstash and Elasticsearch according to your suggestions. Now I get a "connection timeout error". Here are the screenshots from running Elasticsearch.bat and Logstash.bat ---
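
The sincedb advice can also be handled in the config itself: the file input accepts an explicit `sincedb_path`, and on Windows the null device `NUL` tells it not to persist the read position at all, which is convenient while testing. A sketch with a hypothetical path (the actual paths aren't shown in the original post):

```
input {
  file {
    path => "C:/logs/server_messages.log"   # hypothetical path - point at one of your files
    type => "total_messages_per_server"
    start_position => "beginning"           # re-read the file from the top
    sincedb_path => "NUL"                   # Windows null device: don't remember position
  }
}
```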

Any thoughts on what went wrong? By the way, I have checked my $PATH and $JAVA_HOME and they're correct. Also, the -d option doesn't seem to be recognized by Logstash; trying it yields the following error message:


```
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
io/console not supported; tty will not be manipulated
Clamp::UsageError: Unrecognised option '-d'
call at org/jruby/RubyProc.java:271
call at org/jruby/RubyProc.java:271
```
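
For what it's worth, the Logstash 1.5 CLI spells the debug flag as `--debug` rather than `-d`, and it also has a `--configtest` flag that checks the config syntax without starting the pipeline. A sketch of both invocations, assuming `bin` is on your PATH:

```shell
# Check that textlogs.conf parses before starting the pipeline
logstash agent -f textlogs.conf --configtest

# Start with verbose debug logging (the long flag, not -d)
logstash agent -f textlogs.conf --debug
```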


(Mike Simos) #6

If you have something like Windows Firewall enabled, I suggest you disable it. It could be blocking port 9300.

(R01K) #7

Is it creating a new node whose name starts with logstash- rather than creating an index with that name?

I faced this issue too when I was using a newer version of Logstash. Here are some options you can try:

1. Try indexing only one log file, and instead of pushing the data to the Elasticsearch node with index testlogs, just print it to the console using stdout { codec => rubydebug } only. See whether you get output on the console.

If that still doesn't give you any results, then the problem surely lies in your input or filter section.
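
The single-file, stdout-only test above can be sketched as a stripped-down config (the path is hypothetical; substitute one of your actual log files):

```
input {
  file {
    path => "C:/logs/server_messages.log"   # hypothetical path
    type => "total_messages_per_server"
    start_position => "beginning"
  }
}

output {
  stdout { codec => rubydebug }
}
```

If events appear on the console here but not when the elasticsearch output is added back, the problem is on the output side rather than in the input or filter sections.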

Let me know if you are able to solve this issue.

Thanks,
Rohan

(Mark Walkom) #8

You probably need to define a cluster name in the output.
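
In Logstash 1.5 the elasticsearch output defaults to the node protocol, which joins the cluster over port 9300 and must be given the matching cluster name. A sketch, assuming the default cluster name `elasticsearch` (check the `cluster.name` value in your elasticsearch.yml or the name that elasticsearch.bat prints at startup):

```
output {
  elasticsearch {
    host => "localhost"
    cluster => "elasticsearch"   # must match cluster.name in elasticsearch.yml
    index => "testlogs"
    # alternatively: protocol => "http" talks to port 9200 and needs no cluster name
  }
}
```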

(system) #9