LS doesn't index to ES during tutorial

Hello!

I am very new to this platform, so I have very little idea of what I'm doing.

I'm following the LS tutorial and have reached the "Setting Up an Advanced Logstash Pipeline" section.

I have carried out all the steps, but for some reason LS does not index to ES. When I run curl -XGET http://localhost:9200/_cluster/health?pretty the only index it reports is .kibana.
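(For what it's worth, listing the indices directly should paint the same picture; as far as I understand, this enumerates every index with its document count:

curl -XGET http://localhost:9200/_cat/indices?v

If LS were indexing, I'd expect a logstash-* index to appear there.)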

What could be missing? I haven't made changes to the yml config files apart from setting the cluster name, node name, and network.host (127.0.0.1). I'm working on Windows 7 and I have a compatible JRE running.

Any suggestions would be greatly appreciated!

Thank you!

Andrew

Read the logs, both Logstash and Elasticsearch. I'm sure there are some clues there.

Where does the LS log get generated? There is no log folder for LS in the directory. When I run LS all I see in the console is:

C:\ELK\logstash>bin\logstash -f first-pipeline.conf
io/console not supported; tty will not be manipulated
Settings: Default pipeline workers: 4
Logstash startup completed

ES log doesn't have anything related to LS.

If you start Logstash like that, it'll log to the console. You can try upping the log level with --verbose. What inputs do you have? If you have file inputs, make sure you understand sincedb files, when start_position doesn't take effect, and so on. Misunderstanding how that works is an extremely common problem for new users.
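For example, a file input along these lines makes the relevant settings explicit (the paths are placeholders, not something from your setup):

input {
  file {
    # Forward slashes are generally safer than backslashes on Windows.
    path => "C:/path/to/your.log"
    # Only honoured for files Logstash has never seen before; after that
    # the position stored in the sincedb file wins.
    start_position => "beginning"
    # Where the current read position is persisted between runs.
    sincedb_path => "C:/path/to/sincedb"
  }
}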

Thanks! This has given me something to go on. My input looks like this (following the tutorial):

input {
  file {
    path => "C:\ELK\logstash\logstash-tutorial.log\logstash-tutorial-dataset"
    start_position => "beginning"
  }
}

There are no references to sincedb in my config (I'm not sure what that is; the tutorial doesn't mention it), but the --verbose log mentions it too:

{:timestamp=>"2016-02-24T12:04:12.796000+0100", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"C:\Users\andrew.f.trobec/.sincedb_2e5ed3e8c14d29add4489df138ec14a4", :path=>["C:\ELK\logstash\logstash-tutorial.log\logstash-tutorial-dataset"], :level=>:info}

Could it be that I'm not running CMD as admin? Anyway, I'll investigate further and hopefully find a solution.

Thanks!

Could it be that I'm not running CMD as admin?

No, I can't imagine that has anything to do with this.

Anyway, I'll investigate further and hopefully find a solution.

The file input documentation attempts to explain sincedb.
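The short version: the file input remembers how far it has read in each file via the sincedb file (each line records an identifier for the file and the byte offset read so far), and start_position only takes effect for files Logstash has never seen before. If you want Logstash to re-read a file from the start on every run while testing, a commonly suggested workaround on Windows is to point sincedb at the null device. A sketch, not something I've verified on your setup:

input {
  file {
    # Your dataset file, written with forward slashes.
    path => "C:/ELK/logstash/logstash-tutorial.log/logstash-tutorial-dataset"
    start_position => "beginning"
    # "NUL" is the Windows null device ("/dev/null" on Linux/macOS), so the
    # read position is never persisted between runs.
    sincedb_path => "NUL"
  }
}

Alternatively, delete the generated .sincedb_* file under your user profile and restart Logstash.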

I have used the sincedb_path option to set a fixed file location. When I launch LS it doesn't write to the file, but when I kill LS it appends a 0, up to four 0s, after which it doesn't add any more. Is this file supposed to be written to during LS startup? I'm not really sure what's supposed to happen. For example, when I launch LS, should it read the file and send its contents across to ES? For now it doesn't seem to be doing anything; the verbose log shows the following and doesn't generate any errors:

{:timestamp=>"2016-02-24T13:56:31.832000+0100", :message=>"Attempting to install template", :manage_template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true, "ignore_above"=>256}}}}}, {"float_fields"=>{"match"=>"*", "match_mapping_type"=>"float", "mapping"=>{"type"=>"float", "doc_values"=>true}}}, {"double_fields"=>{"match"=>"*", "match_mapping_type"=>"double", "mapping"=>{"type"=>"double", "doc_values"=>true}}}, {"byte_fields"=>{"match"=>"*", "match_mapping_type"=>"byte", "mapping"=>{"type"=>"byte", "doc_values"=>true}}}, {"short_fields"=>{"match"=>"*", "match_mapping_type"=>"short", "mapping"=>{"type"=>"short", "doc_values"=>true}}}, {"integer_fields"=>{"match"=>"*", "match_mapping_type"=>"integer", "mapping"=>{"type"=>"integer", "doc_values"=>true}}}, {"long_fields"=>{"match"=>"*", "match_mapping_type"=>"long", "mapping"=>{"type"=>"long", "doc_values"=>true}}}, {"date_fields"=>{"match"=>"*", "match_mapping_type"=>"date", "mapping"=>{"type"=>"date", "doc_values"=>true}}}, {"geo_point_fields"=>{"match"=>"*", "match_mapping_type"=>"geo_point", "mapping"=>{"type"=>"geo_point", "doc_values"=>true}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "doc_values"=>true}, "@version"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip", "doc_values"=>true}, "location"=>{"type"=>"geo_point", "doc_values"=>true}, "latitude"=>{"type"=>"float", "doc_values"=>true}, "longitude"=>{"type"=>"float", "doc_values"=>true}}}}}}}, :level=>:info}
{:timestamp=>"2016-02-24T13:56:31.843000+0100", :message=>"New Elasticsearch output", :class=>"LogStash::Outputs::ElasticSearch", :hosts=>["127.0.0.1"], :level=>:info}
{:timestamp=>"2016-02-24T13:56:32.160000+0100", :message=>"Registering file input", :path=>["C:\\ELK\\logstash\\logstash-tutorial.log\\logstash-tutorial-dataset"], :level=>:info}
{:timestamp=>"2016-02-24T13:56:32.168000+0100", :message=>"Using mapping template from", :path=>nil, :level=>:info}

ES logs show nothing. Do you know of any documentation that explains the expected behaviour?
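(One sanity check I can think of, assuming the default logstash-* index naming of the elasticsearch output, is counting documents directly:

curl -XGET http://localhost:9200/logstash-*/_count?pretty

If that stays at zero, nothing is being sent.)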

Thanks for your feedback Magnus, I appreciate your support.

After running a series of tests and finding a tutorial on monitoring Windows system logs, I discovered the problem!

The initial load of the log doesn't happen automatically once the file has already been seen: the sincedb file from my earlier runs had presumably already recorded the file as read, so LS just sat waiting for new content. In order for LS to push to ES I had to make a change in the log file and save it. Once I did that, everything was indexed and available to analyze in Kibana!
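For anyone else who hits this: appending any line to the monitored file is enough to trigger a pickup, for example from CMD (this is my dataset path; substitute your own):

echo test >> C:\ELK\logstash\logstash-tutorial.log\logstash-tutorial-dataset

Deleting the generated .sincedb_* file and restarting LS should likewise force a full re-read from the beginning.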