Logstash 1.5.4 stops updating Elasticsearch


(Santublr) #1

Hello,

I'm new to this tool. After downloading it and running it against a log file with a simple config, Logstash stops updating Elasticsearch, showing the message below (Flushing...) when run with --debug.

Also, is there any issue with my grok pattern? Nothing gets captured into the index.

match => { "message" => "^Perforce %{GREEDYDATA} pid %{INT:pid} completed %{DATA: TimeTaken}\s+"}

The log message is: Perforce server info:
2015/08/18 20:49:02 pid 32430 completed .002s 0+0us 0+0io 0+0net 2260k 0pf

Logstash startup completed:
Flushing {:plugin=>"^Perforce server info:", what=>"previous", negate=>true, periodic_flush=>true, source=>"message", allow_duplicates=>true, stream_identity=>"%{host}.%{path}.%{type}", max_age=>5>, :level=>:debug, :file=>"(eval)", :line=>"16", :method=>"initialize"}
_discover_file_glob: /opt/elasticsearch/logstash-1.5.4/bin/test.log: glob is: ["/opt/elasticsearch/logstash-1.5.4/bin/test.log"] {:level=>:debug, :file=>"filewatch/watch.rb", :line=>"132", :method=>"_discover_file"}

Config file:
input {
  file {
    path => "/opt/elasticsearch/logstash-1.5.4/bin/test.log"
    type => "p4datanew"
    start_position => "beginning"
  }
}
filter {
  multiline {
    pattern => "^Perforce server info:"
    what => "previous"
    negate => true
  }
  grok {
    match => { "message" => "^Perforce %{GREEDYDATA} pid %{DATA:p4pid} %{WORD:p4user}@%{DATA:p4client} %{IP:p4remoteclient} [%{DATA:p4version}] '%{DATA:p4action}'" }
    match => { "message" => "^Perforce %{GREEDYDATA} pid %{INT:pid} completed %{DATA: TimeTaken}\s+" }
  }
}
output {
  if ( [type] =~ "p4datanew" ) {
    elasticsearch {
      embedded => false
      host => "localhost"
      index => "logstash-perforce5-%{+YYYY.MM.dd}"
      protocol => "http"
      port => "9200"
    }
  }
  stdout { codec => rubydebug }
}
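A couple of things in the grok block look like plain syntax issues rather than logic issues: grok field names cannot contain a space, so %{DATA: TimeTaken} will not compile as a capture, and [ and ] are regex metacharacters, so the brackets around %{DATA:p4version} need to be escaped. Also, because the multiline filter joins the two physical lines with a newline, GREEDYDATA (which is just .*) cannot cross that newline unless the pattern starts with (?m). A cleaned-up sketch of the filter block, keeping the original field names and using the array form of match to hold both patterns, might look like the following (untested, and not necessarily the reason the index is never created):

filter {
  multiline {
    pattern => "^Perforce server info:"
    what => "previous"
    negate => true
  }
  grok {
    # (?m) lets GREEDYDATA match across the newline added by the multiline filter
    match => { "message" => [
      "(?m)^Perforce %{GREEDYDATA} pid %{DATA:p4pid} %{WORD:p4user}@%{DATA:p4client} %{IP:p4remoteclient} \[%{DATA:p4version}\] '%{DATA:p4action}'",
      "(?m)^Perforce %{GREEDYDATA} pid %{INT:pid} completed %{DATA:TimeTaken}\s+"
    ] }
  }
}

One more thing worth checking when re-running the same test file: the file input remembers how far it has read via its sincedb, so start_position => "beginning" only applies the first time a file is seen; deleting the sincedb file (or pointing sincedb_path at something disposable) forces a full re-read.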


(Mark Walkom) #2

Is there data being added to the file?


(Santublr) #3

Yes, the log file has data in it, as shown below. Nothing gets added to the index; the logstash-perforce5 index is never created.

Perforce server info:
2015/08/18 20:49:02 pid 32430 completed .002s 0+0us 0+0io 0+0net 2260k 0pf
Perforce server info:
2015/08/18 20:49:02 pid 32431 xxxx@yyyys 192.168.1.2 [p4/2010.2/NTX64/295040] 'user-counter zzzhb_test__warning_count'
Perforce server info:
2015/08/18 20:49:02 pid 32432 completed .002s 1+0us 0+0io 0+0net 2260k 0pf

health status index pri rep docs.count docs.deleted store.size pri.store.size
green open logstash-collaborator-2015.08.30 5 0 20 0 128.6kb 128.6kb
green open logstash-collaborator-2015.08.31 5 0 6 0 39kb 39kb
green open logstash-perforce2-2015.08.30 5 0 82 0 85.9kb 85.9kb
green open logstash-perforce1-2015.08.30 5 0 96 0 80.2kb 80.2kb
green open nwbank 5 0 61 0 30.6kb 30.6kb
green open logstash-2015.08.15 5 0 1 0 9.9kb 9.9kb
green open logstash-perforce-2015.08.30 5 0 436 0 123.8kb 123.8kb
green open logstash-2015.08.13 5 0 18 0 53.4kb 53.4kb
green open logstash-2015.08.12 5 0 1 0 9.2kb 9.2kb
green open logstash-perforce3-2015.08.30 5 0 126 0 272.1kb 272.1kb
green open logstash-2015.08.30 5 0 1 0 5.3kb 5.3kb
yellow open .kibana 1 1 4 0 16.9kb 16.9kb


(Mark Walkom) #4

Ok, so other than the Logstash log and the Elasticsearch index, is there anything being spit out via your stdout section?


(Santublr) #5

No, nothing gets spit out from the stdout section.

Thanks,
Santosh


(Santublr) #6

As for the Elasticsearch installation: I just untarred the file and started the tool with (bin/elasticsearch -d). I did not set any configuration. These are the only settings in the Elasticsearch config yaml.

http.port: 9200
http.host:

Updating JSON documents and curl operations against the same Elasticsearch work fine; only Logstash is not working as expected.
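Since curl against the same Elasticsearch works, one way to narrow things down is a stdin smoke test that reuses the same output settings but skips the file input and the filters entirely. A minimal sketch, with host/protocol/port copied from the config above and a purely illustrative index name:

input { stdin { } }
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
    port => "9200"
    index => "logstash-smoketest-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

If a line typed into that pipeline shows up both on stdout and in a new index, the output side is fine and the problem is in the file input or the filters; if not, the problem sits between Logstash and Elasticsearch.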


(Santublr) #7

The issue was with the server I was running on. I was testing Logstash / Elasticsearch on a 16 GB RAM system; after moving the tools to a better server, everything started working fine.

Please mark this issue as resolved. Thanks for your support.


(system) #8