Logstash not showing any output


(Tony) #1

Hi
I am a student working on a project and decided to use the ELK Stack as a logging tool to demonstrate its value.
I am using SUSE Linux Enterprise Server 12 SP3 (x86_64) as the OS, running on a virtual machine.

I need a bit of help understanding how Logstash works. Right now I have set it up so that it should take my apache2 logs and send them to Elasticsearch, but nothing gets through. I have tested with your bank example and everything worked as it should: I was able to find the bank data through Kibana and create an index pattern for it. But when I moved on to another example, setting it up for Apache data, nothing happens. I've checked the logs but nothing seems to be wrong.

Here is the simple.config I am using for Logstash:

input {
  file {
    path => "/var/log/apache2/access_log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
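For what it's worth, I understand one can temporarily add a stdout output alongside elasticsearch to see whether any events flow at all; a minimal sketch (not something I have in my config yet):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  # temporary: print every event to the console, to make it obvious whether anything flows
  stdout { codec => rubydebug }
}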

The Logstash logs look as follows:

[2018-02-21T13:10:46,658][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.0"}
[2018-02-21T13:10:47,118][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-02-21T13:10:48,068][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-02-21T13:10:48,324][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-02-21T13:10:48,326][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-02-21T13:10:48,403][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-02-21T13:10:48,445][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-02-21T13:10:48,445][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-02-21T13:10:48,448][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-02-21T13:10:48,450][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-02-21T13:10:48,457][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-02-21T13:10:48,732][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x13f51a3@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[2018-02-21T13:10:48,749][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}

Otherwise almost everything is kept at its defaults; Logstash, Kibana and Elasticsearch are all installed on the same machine.
Does anyone have any idea why I can't find the apache2 logs in Kibana?


(Paris Mermigkas) #2

A common issue with not seeing events from a file input is that Logstash has already read that file. Logstash keeps track of every file it has processed and the latest offset within each file, and stores this in a sincedb file.

So if you try to reprocess an already-processed file, Logstash knows it has already read it and skips it. Check whether such a file exists in the default path and delete it if so, then see if that resolves the issue.
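Note that on recent versions the sincedb is a hidden dot-file (named something like .sincedb_<hash>), so a plain find for a file called "sincedb" won't match it. Something along these lines should turn it up (a sketch; your data path may differ):

# sincedb files are hidden dot-files, so match the name pattern explicitly
find / -name ".sincedb*" 2>/dev/null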


(Tony) #3

I couldn't find a sincedb file anywhere. I checked the default path, which is /usr/share/logstash, but no results; I even tried find / -name sincedb and it came up empty. Do you have any other suggestions?
Edit 1:
I've just tried creating a new file in my home directory called taccess_log and copied in some example data. I changed the file path in the conf file and it worked, but when I change the path back to my actual apache2 logs in /var/log/apache2, it still doesn't work.

Also, after this change, I tried to find the sincedb file again, still with no result. Either it's hidden somewhere, or something is amiss.


(Paris Mermigkas) #4

The documentation suggests it's under <path.data>/plugins/inputs/file, but you can always provide your own path in the config (via the sincedb_path option), and Logstash will create a new file there that you will be able to control better.
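For example, a minimal sketch of the file input with an explicit sincedb location (the /tmp path here is just an arbitrary writable choice):

input {
  file {
    path => "/var/log/apache2/access_log"
    start_position => "beginning"
    # pin the sincedb to a known location so it is easy to inspect or delete
    sincedb_path => "/tmp/apache_sincedb"
  }
}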


(Tony) #5

Thank you for the assistance; we have solved the problem. We had checked the permissions on the access_log file, but not on the apache2 directory.

So it was a permissions problem after all.

It now reads the correct file and ingests the data as we wished.
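For anyone hitting the same thing, this is roughly what the check and fix looked like (assuming Logstash runs as the logstash user, which may differ on your system):

# reproduce the problem as the user Logstash runs under
sudo -u logstash cat /var/log/apache2/access_log
# the directory needs execute (traverse) permission, the file needs read
sudo chmod o+rx /var/log/apache2
sudo chmod o+r /var/log/apache2/access_log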


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.