Sorry for the delay, it did start to work! But now I've updated my logstash config file and it has stopped working again!
I changed the log level to debug as you suggested, and now my log file spits out a lot of info that I can't make sense of!
My config file is under /etc/logstash/conf.d/default.conf, and I've run:
service logstash configtest
Where does Logstash look for the config file?
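From what I understand (and I may be wrong here), the 2.x Debian/Ubuntu package's init script loads every *.conf file in the directory set by LS_CONF_DIR in /etc/default/logstash, which defaults to /etc/logstash/conf.d. So I assume I can also check just this one file directly, something like (the binary path assumes the standard package layout under /opt/logstash):
sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/default.conf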
I run logstash as a service, started with:
sudo service logstash start
When I restart it with:
sudo service logstash restart
I see:
Killing logstash (pid 25831) with SIGTERM
Waiting logstash (pid 25831) to die...
Waiting logstash (pid 25831) to die...
logstash stopped.
logstash started.
But the logstash.log says:
{:timestamp=>"2016-04-07T15:19:25.511000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
So it looks like this pipeline, which is what ingests the nginx logs, isn't actually running???
service logstash status
- says "logstash is running"
The contents of these folders are all owned by the 'logstash' user and group:
/var/log/logstash
/opt/logstash
/etc/logstash
On startup, my logstash.log file says:
{:timestamp=>"2016-04-07T15:08:21.282000+0000", :message=>"Registering file input", :path=>["/var/log/nginx/web_pixels.log"], :level=>:info}
{:timestamp=>"2016-04-07T15:08:21.299000+0000", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"/var/lib/logstash/.$
{:timestamp=>"2016-04-07T15:08:21.825000+0000", :message=>"Using mapping template from", :path=>nil, :level=>:info}
{:timestamp=>"2016-04-07T15:08:22.365000+0000", :message=>"Attempting to install template", :manage_template=>{"template"=>"logstash-*", "settings"=>{"inde$
{:timestamp=>"2016-04-07T15:08:22.750000+0000", :message=>"New Elasticsearch output", :class=>"LogStash::Outputs::ElasticSearch", :hosts=>["127.0.0.1:9200"$
{:timestamp=>"2016-04-07T15:08:23.626000+0000", :message=>"Using geoip database", :path=>"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-$
{:timestamp=>"2016-04-07T15:08:24.469000+0000", :message=>"Starting pipeline", :id=>"base", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_i$
{:timestamp=>"2016-04-07T15:08:24.519000+0000", :message=>"Pipeline started", :level=>:info}
Please help!
Here's my config file:
input {
  #stdin {}
  file {
    path => "/var/log/nginx/web_pixels.log"
    type => "web"
  }
}

output {
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    document_id => "%{[@metadata][doc_id]}"
    #document_type => "%{pr}"
  }
}

filter {
  kv {}
  kv {
    field_split => "&?"
    exclude_keys => ["rv"]
    source => "args"
  }
  date {
    match => [ "logdate", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  fingerprint {
    source => [ "message", "type" ]
    key => "my-key"
    concatenate_sources => true
    target => "[@metadata][doc_id]"
  }
  geoip {
    source => "clientip"
    #fields => ["city_name", "country_name", "timezone", "country_code2"]
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    remove_field => [ "message", "args", "countrycode", "city" ]
    convert => [ "[geoip][coordinates]", "float" ]
  }
  useragent {
    source => "agent"
    prefix => "ua_"
  }
}
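In case it's relevant: to debug this further, I suppose I could uncomment the stdin {} input and the stdout { codec => rubydebug } output above and run the config in the foreground to watch events go through, something like (again, the binary path assumes the standard 2.x install under /opt/logstash):
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/default.conf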