Hi everyone,
My test.log contains lines like the one below:
AsynExecutor.contractSyncScheduler>>>> 2016-03-17 04:00:03,247 INFO [cn.cmri.pds.neusoft.ContractSyncService] - <【PageInquiryAccountBalanceContectSrv】,success>
My conf file is as below:
input {
  file {
    path => ["/opt/pds/log/test.log"]
    type => "pds"
  }
}
filter {
  if [type] == "pds" {
    grok {
      match => { "message" => "%{DATA:scheduler}>>>> %{DATA:date} %{DATA:loglevel} %{DATA:service}-<%{DATA:Account},%{DATA:status}>" }
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test_log"
  }
}
I can't find anything in the debug output. By the way, grok.conf itself runs without errors. Can anybody help? Thanks!
Hi Magnus,
I changed the input to stdin and the result looks like this:
Logstash startup completed
{
       "message" => "AsynExecutor.contractSyncScheduler>>>> 2016-03-17 04:00:03,247 INFO [cn.cmri.pds.neusoft.ContractSyncService] - <【PageInquiryAccountBalanceContectSrv】,success>",
      "@version" => "1",
    "@timestamp" => "2016-03-18T07:32:26.239Z",
          "host" => "ubuntu",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
This is not what I want. I changed the grok pattern as below:
%{exact:scheduler}>>>> %{DATESTAMP:date} %{exact:LOGLEVEL} %{exact:service}-<%{exact:Account},%{exact:status}>
I ran it and got an error:
Settings: Default pipeline workers: 4
The error reported is:
pattern %{exact:scheduler} not defined
How can I use the exact patterns correctly? Thanks!
I didn't mean that you should use a pattern named "exact". I meant that you should use patterns that are more exact. For example, use a timestamp pattern like TIMESTAMP_ISO8601 to match your timestamp and the LOGLEVEL pattern to match your log level. See https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns for the available patterns (the most commonly used ones are in the grok-patterns file).
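For this line, an untested sketch of a more exact grok expression might look like `%{DATA:scheduler}>>>> %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:loglevel} \[%{JAVACLASS:service}\] - <%{DATA:account},%{WORD:status}>` (field names are just examples). A quick way to sanity-check the idea outside Logstash is a plain Python regex that mirrors those patterns; this is not grok itself, just an approximation of what each pattern matches:

```python
import re

line = ("AsynExecutor.contractSyncScheduler>>>> 2016-03-17 04:00:03,247 INFO "
        "[cn.cmri.pds.neusoft.ContractSyncService] - "
        "<【PageInquiryAccountBalanceContectSrv】,success>")

# Rough regex equivalents: \S+? ~ DATA, the timestamp group ~ TIMESTAMP_ISO8601,
# [A-Z]+ ~ LOGLEVEL, [\w.]+ ~ JAVACLASS, \w+ ~ WORD. The single dots absorb
# the fullwidth brackets 【 and 】 around the service name.
pattern = re.compile(
    r"(?P<scheduler>\S+?)>>>> "
    r"(?P<logTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"(?P<loglevel>[A-Z]+) "
    r"\[(?P<service>[\w.]+)\] - "
    r"<.(?P<account>\w+).,(?P<status>\w+)>"
)
m = pattern.match(line)
print(m.group("logTime"), m.group("status"))
```

If the regex version matches, the grok version above should be close; adjust the literal spacing (note the space in `] - <`) to whatever your real lines contain.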
I tested the patterns %{DATESTAMP:logTime} %{LOGLEVEL:info} against "2016-03-17 04:00:03,247 INFO", and they passed!
For the rest I have no idea, i.e. the "AsynExecutor.contractSyncScheduler>>>>" prefix and the "[cn.cmri.pds.neusoft.ContractSyncService] - <【PageInquiryAccountBalanceContectSrv】,success>" part.
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test_log"
  }
}
When I run ./logstash -f conf/grok.conf, I can't find the index in the directory /usr/pds/elasticsearch-2.2.0/data/elasticsearch/nodes. Any help?
Use the APIs to list indexes. Don't inspect the file system.
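For example, assuming Elasticsearch is listening on localhost:9200 as in your config, the cat API lists every index with its document count and size:

```
curl 'localhost:9200/_cat/indices?v'
```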
Is new data being added to test.log? Or do you expect Logstash to run the file from the beginning? If yes, read about the file input's start_position option.
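A minimal sketch of the file input with that option (both settings exist on the file input; the sincedb_path value is a common testing trick, not something for production):

```
input {
  file {
    path => ["/opt/pds/log/test.log"]
    type => "pds"
    start_position => "beginning"
    # Testing only: discard the recorded read position between runs.
    sincedb_path => "/dev/null"
  }
}
```

Note that start_position only applies to files Logstash hasn't seen before; once a position has been recorded in the sincedb file, that position wins.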
Hi Magnus,
I have just started exploring Logstash.
I am currently trying to load a log file into Elasticsearch, but it's taking a long time to create the index.
Please help me understand how I can tune the Logstash configuration.
The file content is:
INFO : LM_36435 [Thu Apr 21 23:27:16 2016] : (20731|-1449096896) Starting execution of workflow [wf_test] in folder [Folder1] last saved by user [admin].
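One knob worth trying first: in Logstash 2.x you can raise the number of pipeline workers and the batch size from the command line. A hedged starting point (the flag values here are examples to tune for your hardware, and "your.conf" stands in for your config file):

```
bin/logstash -w 4 -b 250 -f your.conf
```

-w / --pipeline-workers sets the number of filter and output worker threads, and -b / --pipeline-batch-size sets how many events each worker takes per batch. Also note that a slow first load is often index creation and mapping on the Elasticsearch side rather than Logstash itself.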