For the last two days I have been busy with ELK. I have successfully installed Elasticsearch 5.1.1, Logstash 5.1.1 and Kibana 5.1.1 on my local Windows VM and set up a Logstash configuration that parses IIS log files through a grok filter. All three are running as Windows services and working fine. I can see the logs in Kibana under the default logstash-* index pattern. So far so good.
What do I want to achieve?
I have an FTP job that downloads the logs of our ASP.NET applications to my VM. I save the logs of each application in its own folder, named after the application.
Now I want to configure Logstash so that it first shows each application name for selection (with a checkbox). When a user wants to see the logs for MyWeb1 and MyBackend1, he should check both application checkboxes and see the logs of both applications together, so that he can debug/search for information in the logs of specific applications.
How can we build this use case with Logstash?
I have not installed any Beats or file receiver yet. Do I need to install any Beat or plugin?
I would be thankful for any quick response.
Use Logstash filters to create a field for the application name. You could e.g. use a grok filter to extract the application name from the filename, or if the same information is available inside the log file you can pick it up from there.
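A minimal sketch of that idea, assuming the logs live in a per-application folder such as D:/logs/<application>/ (the layout here is only an assumption; the file input adds the source file name in the path field):

filter {
  grok {
    # Capture the folder name from the assumed layout D:/logs/<application>/<file>.log
    # into an ApplicationName field.
    match => ["path", "D:/logs/(?<ApplicationName>[^/]+)/"]
  }
}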
Once the application name is in a field you can filter out what you want in Kibana.
If you want to build your own GUI with checkboxes that is of course also doable.
Thanks for the quick feedback. I have created a new Logstash conf file for the logs of one of my applications. I have added a new field ApplicationName and I can see the logs along with the ApplicationName.
In my Logstash installation directory I have created a conf folder which now contains 3 conf files:
logstash.conf
iis.conf
myapp.conf
My Logstash Windows service uses an argument pointing to this conf folder, so all conf files in it are used by Logstash. Now, how can I create a named index "myapp" in Elasticsearch so that I can later create the matching index pattern "myapp-*" in Kibana to display the log entries of that specific application?
I want to have an index pattern in Kibana for each configuration. Is that possible?
Now, how can I create a named index "myapp" in Elasticsearch so that I can later create the matching index pattern "myapp-*" in Kibana to display the log entries of that specific application?
Just configure your elasticsearch output(s) in Logstash to send to whatever index you want. The indexes will be created as needed. Once they exist you can set up index patterns for them in Kibana.
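For example, something along these lines (host and index name are just placeholders):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Daily indexes named myapp-YYYY.MM.dd; match them in Kibana with "myapp-*"
    index => "myapp-%{+YYYY.MM.dd}"
  }
}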
Thanks again for the quick response. I have tried to create an elasticsearch output with the index "myapp", but since then nothing works at all; even the default logstash-* index no longer works. Currently I have two issues: first, Elasticsearch does not create the indexes, and second, my config for logs containing multiline exceptions.
Here is my default configuration, logstash.conf.
This should work with the logstash-* index, but unfortunately Elasticsearch does not create any index with the logstash-* name.
input {
  stdin { }
  file {
    type => "ApplicationEntLib"
    path => "D:/logs/main/Api.Ohcp/myapp/*ApplicationEntLib*.log"
    start_position => "beginning"
  }
}

filter {
  multiline {
    pattern => "^\;"
    negate => true
    what => "previous"
  }
  mutate {
    # replace newlines and tabs in the message with spaces
    gsub => ["message", "\n", " ", "message", "\t", " "]
    remove_field => [ "log_timestamp" ]
  }
  grok {
    # check that fields match your IIS log settings
    #patterns_dir => "../custompatterns"
    match => ["message", "(?m)%{WORD:LOGLEVEL}\;%{WORD:Machine}\;%{DATESTAMP:time}\;%{WORD:Area}\=;%{WORD:SubArea}\=;%{WORD:SessionId}\=;%{WORD:StepId}\;%{WORD:User}\=;%{GREEDYDATA:message}\="]
    add_field => { "ApplicationName" => "myapp" }
    remove_tag => ["_grokparsefailure"]
    remove_field => ["clientHostname"]
    remove_field => ["_type"]
    remove_field => ["_score"]
  }
  # Set the event timestamp from the log
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }
}

# See documentation for different protocols:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myapp-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Here it should work with the "myapp-*" index, but there is an error with the multiline filter. I am using multiline for the exception traces inside the log file, as an exception mostly consists of multiple lines.
This is what my log looks like:
Verbose;mycomputer;19.12.2016 15:11:34,967;Area=;SubArea=;SessionId=;StepId;User=;Message=QuoteRepository: START gettestmethod with policeNumber:asasasas
Error;mycomputer;19.12.2016 15:12:51,361;Area=;SubArea=;SessionId=;StepId;User=;Message=Services:Exception: System.Exception: testmethod service error(s): 1007
asaksjasas ölakslaks asö. line 253
bla bla bla :line 1261
at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
InnerException:
asaksjasas ölakslaks asö. line 253
bla bla bla :line 1261
at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
InnerException: asaksjasas ölakslaks asö. line 253
bla bla bla :line 1261
at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
Verbose;mycomputer;19.12.2016 15:57:34,930;Area=;SubArea=;SessionId=;StepId;User=;Message=ApplicantManager: START gettestmethod with policeNumber: asasasas
These are 3 log statements, where line 2 contains an exception that ends just before the Verbose; of the next log statement. All lines start at position zero. What is wrong with my configuration and the multiline filter?
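Side note: the multiline filter is deprecated in favor of the multiline codec, which is configured on the input. A minimal sketch, assuming every new record starts with a log level word followed by a semicolon:

input {
  file {
    path => "D:/logs/main/Api.Ohcp/myapp/*ApplicationEntLib*.log"
    start_position => "beginning"
    codec => multiline {
      # Lines that do NOT start with "<word>;" belong to the previous event,
      # so the whole exception trace is joined to the log line above it.
      pattern => "^\w+;"
      negate => true
      what => "previous"
    }
  }
}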
It's true that the content of the log files is not that good. I am using three config files. I am on a Windows VM, so everything is installed under C:\ELK\Logstash*, C:\ELK\Elasticsearch* and C:\ELK\Kibana*.
Under Logstash I have a conf folder with three config files which my Logstash Windows service is using. Here is the step-by-step tutorial I have followed so far.
input {
  stdin { }
  file {
    type => "iis"
    path => "C:/inetpub/logs/LogFiles/W3SVC*/*.log"
    start_position => "beginning"
  }
}

filter {
  # ignore log comments
  if [message] =~ "^#" {
    drop {}
  }

  grok {
    # check that fields match your IIS log settings
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
  }

  # Set the event timestamp from the log
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }

  ## If the log record has a value for 'bytesSent', then add a new field
  # to the event that converts it to kilobytes
  #
  if [bytesSent] {
    ruby {
      code => "event.set('kilobytesSent', event.get('bytesSent').to_i / 1024.0)"
    }
  }

  ## Do the same conversion for the bytes received value
  #
  if [bytesReceived] {
    ruby {
      code => "event.set('kilobytesReceived', event.get('bytesReceived').to_i / 1024.0)"
    }
  }

  ## Perform some mutations on the records to prep them for Elastic
  #
  mutate {
    ## Convert some fields from strings to integers
    #
    convert => ["bytesSent", "integer"]
    convert => ["bytesReceived", "integer"]
    convert => ["timetaken", "integer"]

    ## Create a new field for the reverse DNS lookup below
    #
    add_field => { "clientHostname" => "%{clientIP}" }

    ## Finally remove the original log_timestamp field since the event will
    # have the proper date on it
    #
    remove_field => [ "log_timestamp" ]
  }

  useragent {
    source => "userAgent"
    prefix => "browser"
  }

  ## Do a reverse lookup on the client IP to get their hostname.
  #
  dns {
    ## Now that we've copied the clientIP into a new field we can
    # simply replace it here using a reverse lookup
    #
    action => "replace"
    reverse => ["clientHostname"]
  }
}

# See documentation for different protocols:
# http://logstash.net/docs/1.4.2/outputs/elasticsearch
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "iis-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
Yesterday it was working with this configuration, but under the logstash-* index. Later I deleted all existing Elasticsearch indexes and then restarted my Logstash and Elasticsearch services, but no indexes were created. Only a .kibana index is available in the Elasticsearch data folder.
It's important to understand that Logstash has a single event pipeline. Feel free to split your configuration into multiple files, but they effectively work as if they were in a single file. In other words, the events from all inputs will be sent to all filters and all outputs. If you want to restrict this you need to wrap filters and outputs in conditionals, e.g. like this:
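(A sketch of such conditionals, keyed on the type field that each of your file inputs already sets; index names are only examples:)

filter {
  if [type] == "iis" {
    # filters that should only run for the IIS events
  }
}

output {
  if [type] == "iis" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "iis-%{+YYYY.MM.dd}"
    }
  } else if [type] == "ApplicationEntLib" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "myapp-%{+YYYY.MM.dd}"
    }
  }
}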
So I can use different config files, but inside the output I must use a condition for each config file?
That way Logstash can create multiple indexes like iis-*, myapp-* or logstash-* etc.?
Thanks a lot for your support. Now it is working fine. The problem before was that I had no new data in the log files for today, which is why nothing was indexed. I have added the following settings to the file input:
sincedb_path => "NUL"
ignore_older => 0
Now it has indexed everything. In each of my configs I have the following type of output.
This gives me the same logs once in the myapp-* and once in the logstash-* index. To save storage I would like to have only the myapp-* and iis-* logs and not the logstash-* index.
Is it my else condition which causes the logstash-* index?
Another question: how can I tell ES to keep only the indexes of the last 30 days and delete everything older than 30 days? I know you can fire a query at ES, but is there any setting which I can set once and it does the magic?
In that case yes, the else block is the problem. All messages will reach it.
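To illustrate (a sketch, not necessarily your exact config): with an output shaped like this in myapp.conf, every IIS event fails the condition and falls into the else branch, so it ends up in logstash-* as well.

output {
  if [type] == "ApplicationEntLib" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "myapp-%{+YYYY.MM.dd}"
    }
  } else {
    # Every event that does not match the condition above - including
    # the IIS events from the other config file - lands here.
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}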
Another question: how can I tell ES to keep only the indexes of the last 30 days and delete everything older than 30 days? I know you can fire a query at ES, but is there any setting which I can set once and it does the magic?
There's no setting but the Curator program can do this for you.
Now I have removed the else condition from both conf files. I have only 2 conf files now, but it still creates the logstash-* index.
Another issue I have is the date filter. In Kibana I want to use log_timestamp as the time field; currently that is the @timestamp field, which is probably the indexing time.
Thanks for your reply. Now I am no longer extracting the unwanted fields.
Here is my log line:
Verbose;OLESRV741;28.12.2016 09:37:32,820;Area=;SubArea=;SessionId=;StepId;User=;Message=QuoteRepository: START SaveQuote with ID: 4248 and ApplicantID: 7737
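For reference, a date filter that parses this timestamp format could look roughly like the sketch below, assuming grok first captures the timestamp into a log_timestamp field:

filter {
  date {
    # "28.12.2016 09:37:32,820" -> day.month.year hours:minutes:seconds,milliseconds
    match => [ "log_timestamp", "dd.MM.yyyy HH:mm:ss,SSS" ]
    # The parsed time is written to @timestamp by default.
  }
}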