How to configure ELK (Elasticsearch, Logstash, Kibana) for different application log files and display each application separately in Kibana?

Hello everybody,

For the last two days I have been busy with ELK. I have successfully installed Elasticsearch 5.1.1, Logstash 5.1.1 and Kibana 5.1.1 on my local Windows VM and set up a Logstash configuration so that it can parse IIS log files through a grok filter. All three are running as Windows services and working fine. I can see the logs in Kibana under the default logstash-* index pattern. So far so good.

What do I want to achieve?
I have an FTP download job which downloads ASP.NET application logs to my VM. I am saving the logs of each application in its own folder, named after the application.

Example:
D:\Logs\MyWeb1
D:\Logs\MyWeb2
D:\Logs\MyBackend1
D:\Logs\MyBackend2

Now I want to configure Logstash so that it first shows each application name for selection (with a checkbox). When a user wants to see the logs for MyWeb1 and MyBackend1, he should check both application checkboxes and Logstash should show the logs of both applications together, so that he can debug/search for information in those specific application logs.

How can I build this use case with Logstash?

I have not installed any Beats or file receiver yet. Do I need to install any Beat or plugin?

I'll be thankful for any quick response in this regard.

best regards

Use Logstash filters to create fields for the application name. You could e.g. use a grok filter to extract the application name from the filename, or if the same information is available inside the log file you can pick it up from there.
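For example, a minimal sketch, assuming the events come from a file input (which puts the file location into a path field) and the folder layout from your example; adjust the slashes to however the path appears in your events:

filter {
  grok {
    # Take the folder name under D:/Logs as the application name,
    # e.g. D:/Logs/MyWeb1/app.log -> ApplicationName = MyWeb1
    match => { "path" => "Logs/(?<ApplicationName>[^/]+)/" }
  }
}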

Once the application name is in a field you can filter out what you want in Kibana.

If you want to build your own GUI with checkboxes that is of course also doable.

Hi @magnusbaeck

Thanks for the quick feedback. I have created a new Logstash conf file for one of my application's logs. I have added a new ApplicationName field and I can see the logs along with the ApplicationName.

In my Logstash installation directory I have created a conf folder, which now contains three conf files:
logstash.conf
iis.conf
myapp.conf

My Logstash Windows service is started with an argument pointing to this conf folder, so all conf files in it are used by Logstash. Now how can I create a named index "myapp" in Elasticsearch so that I can later create the matching index pattern "myapp-*" in Kibana to display the log entries of a specific application?

I want to have an index pattern in Kibana for each configuration. Is that possible?

Thanks in advance
regards

Now how can I create a named index "myapp" in Elasticsearch so that I can later create the matching index pattern "myapp-*" in Kibana to display the log entries of a specific application?

Just configure your elasticsearch output(s) in Logstash to send to whatever index you want. The indexes will be created as needed. Once they exist you can set up index patterns for them in Kibana.
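For example (a minimal sketch; the index name is just an illustration):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Daily indices such as myapp-2016.12.28 will be created on demand
    index => "myapp-%{+YYYY.MM.dd}"
  }
}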

Hi @magnusbaeck

Thanks again for the quick response. I have tried to create an elasticsearch output with the index "myapp", but after that nothing works at all; even the default logstash-* index is no longer working. Currently I have two issues: first, Elasticsearch does not create the indexes, and second, my config for logs containing multiline exceptions.

Here is my default configuration, logstash.conf.

This should work with the logstash-* index, but unfortunately Elasticsearch does not create any index with the logstash-* name.

input { stdin { } }
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

Here is my second configuration file.

input {
  stdin { }
  file {
    type => "ApplicationEntLib"
    path => "D:/logs/main/Api.Ohcp/myapp/*ApplicationEntLib*.log"
    start_position => "beginning"
  }
}

filter {
  multiline {
    pattern => "^\;"
    negate => true
    what => "previous"
  }
  mutate {
    gsub => [
      "message", "\n", " ",
      "message", "\t", " "
    ]
    remove_field => [ "log_timestamp" ]
  }
  grok {
    # check that fields match your log settings
    #patterns_dir => "../custompatterns"
    match => ["message", "(?m)%{WORD:LOGLEVEL}\;%{WORD:Machine}\;%{DATESTAMP:time}\;%{WORD:Area}\=;%{WORD:SubArea}\=;%{WORD:SessionId}\=;%{WORD:StepId}\;%{WORD:User}\=;%{GREEDYDATA:message}\="]
    add_field => { "ApplicationName" => "myapp" }
    remove_tag => ["_grokparsefailure"]
    remove_field => ["clientHostname", "_type", "_score"]
  }

  #Set the event timestamp from the log
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }
}

# See documentation for different protocols:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myapp-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

This should work with the "myapp-*" index, but there is some error with the multiline filter. I am using multiline for the exception traces inside the log file, as an exception mostly consists of multiple lines.
This is how my log looks:

Verbose;mycomputer;19.12.2016 15:11:34,967;Area=;SubArea=;SessionId=;StepId;User=;Message=QuoteRepository: START gettestmethod with policeNumber:asasasas
Error;mycomputer;19.12.2016 15:12:51,361;Area=;SubArea=;SessionId=;StepId;User=;Message=Services:Exception: System.Exception: testmethod service error(s): 1007
   asaksjasas ölakslaks asö. line 253
   bla bla bla :line 1261
   at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
   InnerException:
   asaksjasas ölakslaks asö. line 253
   bla bla bla :line 1261
   at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
   
   InnerException: asaksjasas ölakslaks asö. line 253
   bla bla bla :line 1261
   at bla.bla.bla(String is) in d:\bla\test.cs:line 1159
Verbose;mycomputer;19.12.2016 15:57:34,930;Area=;SubArea=;SessionId=;StepId;User=;Message=ApplicantManager: START gettestmethod with policeNumber: asasasas

These are three log statements, where the second one contains an exception that ends right before the Verbose; of the next log statement. All lines start at position zero. What is wrong with my configuration and the multiline filter?

This should work with the logstash-* index, but unfortunately Elasticsearch does not create any index with the logstash-* name.

Then I'd expect there to be something interesting in the logs. You may want to increase the logging verbosity with --verbose.

As for your second file, don't use the multiline filter. Use the multiline codec instead.

Are you starting Logstash with both these files, e.g. by putting them in /etc/logstash/conf.d and pointing Logstash to that directory?

pattern => "^\;"

None of your log lines begin with a semicolon, so this expression will never match. Perhaps something like ^%{WORD}; would be more appropriate?
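For example, a sketch of the file input with a multiline codec using that pattern (path taken from your config; adjust as needed):

input {
  file {
    type => "ApplicationEntLib"
    path => "D:/logs/main/Api.Ohcp/myapp/*ApplicationEntLib*.log"
    start_position => "beginning"
    codec => multiline {
      # Any line that does not start with a log level word followed by ";"
      # (e.g. the lines of a stack trace) is appended to the previous event.
      pattern => "^%{WORD};"
      negate => true
      what => "previous"
    }
  }
}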

Hi @magnusbaeck

It's true that the content of the log files is not that good. I am using three config files. I am using a Windows VM, so everything is installed under C:\ELK\Logstash*, C:\ELK\Elasticsearch* and C:\ELK\Kibana*.

Under Logstash I have a conf folder with three config files which my Logstash Windows service is using. Here is the step-by-step tutorial which I have followed so far:

http://robwillis.info/2016/05/installing-elasticsearch-logstash-and-kibana-elk-on-windows-server-2012-r2/

Here is my third config.

input {
  stdin { }
  file {
    type => "iis"
    path => "C:/inetpub/logs/LogFiles/W3SVC*/*.log"
    start_position => "beginning"
  }
}

filter {
 
  #ignore log comments
  if [message] =~ "^#" {
    drop {}
  }

  grok {
    # check that fields match your IIS log settings    
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:serviceName} %{WORD:serverName} %{IP:serverIP} %{WORD:method} %{URIPATH:uriStem} %{NOTSPACE:uriQuery} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientIP} %{NOTSPACE:protocolVersion} %{NOTSPACE:userAgent} %{NOTSPACE:cookie} %{NOTSPACE:referer} %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken}"]
  }
  
  #Set the event timestamp from the log
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }
	
  ## If the log record has a value for 'bytesSent', then add a new field
  #   to the event that converts it to kilobytes
  #
  if [bytesSent] {
    ruby {
      # Logstash 5.x requires the event.get/event.set API
      code => "event.set('kilobytesSent', event.get('bytesSent').to_i / 1024.0)"
    }
  }


  ## Do the same conversion for the bytes received value
  #
  if [bytesReceived] {
    ruby {
      code => "event.set('kilobytesReceived', event.get('bytesReceived').to_i / 1024.0)"
    }
  }
 
 ## Perform some mutations on the records to prep them for Elastic
  #
  mutate {
    ## Convert some fields from strings to integers
    #
    convert => ["bytesSent", "integer"]
    convert => ["bytesReceived", "integer"]
    convert => ["timetaken", "integer"]

    ## Create a new field for the reverse DNS lookup below
    #
    add_field => { "clientHostname" => "%{clientIP}" }

    ## Finally remove the original log_timestamp field since the event will
    #   have the proper date on it
    #
    remove_field => [ "log_timestamp"]
  }
  
  useragent {
    # the grok pattern above stores the user agent in the userAgent field
    source => "userAgent"
    prefix => "browser"
  }
  
  ## Do a reverse lookup on the client IP to get their hostname.
  #
  dns {
    ## Now that we've copied the clientIP into a new field we can
    #   simply replace it here using a reverse lookup
    #
    action => "replace"
    reverse => ["clientHostname"]
  }
 
}

# See documentation for different protocols:
# http://logstash.net/docs/1.4.2/outputs/elasticsearch
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "iis-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

Yesterday it was working with this configuration, but under the logstash-* index. Later on I deleted all existing Elasticsearch indexes and restarted my Logstash and Elasticsearch services, but since then no indexes have been created. Only a .kibana index is available in the Elasticsearch index folder.

It's important to understand that Logstash has a single event pipeline. Feel free to split your configuration into multiple files, but they effectively work as if they were a single file. In other words, the events from all inputs will be sent to all filters and all outputs. If you want to restrict this you need to wrap filters and outputs in conditionals, e.g. like this:

output {
  if [type] == "iis" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "iis-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
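The same goes for the filters, e.g. (a minimal sketch):

filter {
  if [type] == "iis" {
    # Filters in this block only run for events from the IIS file input
    mutate {
      add_field => { "ApplicationName" => "iis" }
    }
  }
}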

It means I can use different config files, but inside the output I must use a condition for each config file.
In this way Logstash can create multiple indexes like iis-*, myapp-* or logstash-*, etc.?

It means I can use different config files, but inside the output I must use a condition for each config file.

Yes.

In this way Logstash can create multiple indexes like iis-*, myapp-* or logstash-*, etc.?

Yes.

Hi @magnusbaeck

Thanks a lot for your support. Now it's working fine. The problem before was that I had no new data in the log files for today, which is why nothing got indexed. I have added the following settings to the file input:

sincedb_path => "NUL"
ignore_older => 0 
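In the file input they sit roughly like this (path from my earlier config, type matching the output condition below):

input {
  file {
    type => "MyAppLog"
    path => "D:/logs/main/Api.Ohcp/myapp/*ApplicationEntLib*.log"
    start_position => "beginning"
    # "NUL" is the Windows null device, so read positions are not persisted
    # and the files are re-read from the beginning on every restart.
    sincedb_path => "NUL"
    ignore_older => 0
  }
}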

Now it has indexed everything. In each of my configs I have the following kind of output:

output {
  if [type] == "MyAppLog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "myapp-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch { hosts => ["localhost:9200"] }
    stdout { codec => rubydebug }
  }
}

This gives me the same logs once in the myapp-* index and once in the logstash-* index. To save storage I would like to have only the myapp-* and iis-* indexes and would not like to have logstash-*.

Is it my else condition which causes the logstash-* index?

Another question: how can I tell ES to keep only the indexes of the last 30 days and delete everything that is older than 30 days? I know you can fire a query at ES, but is there any setting which I can set once that does the magic?

Thanks and kind regards

Is it my else condition which causes the logstash-* index?

It sounds like you effectively have this:

output {
  if [type] == "OnsuranceAppLog" {
    elasticsearch { 
      hosts => ["localhost:9200"] 
      index => "onsurance-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
    stdout { codec => rubydebug }
  }
  if [type] == "iis" {
    elasticsearch { 
      hosts => ["localhost:9200"] 
      index => "iis-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
    stdout { codec => rubydebug }
  }
}

In that case yes, the else block is the problem. All messages will reach it.

Another question: how can I tell ES to keep only the indexes of the last 30 days and delete everything that is older than 30 days? I know you can fire a query at ES, but is there any setting which I can set once that does the magic?

There's no setting but the Curator program can do this for you.

Now I have removed the else condition from both conf files. I have only two conf files now. It still creates a logstash-* index.

Another issue I have is with the date filter. I want to use log_timestamp as the time field in Kibana; currently that is the @timestamp field, which presumably uses the indexing time.

Here is my filter:

filter {
  mutate {
    gsub => [
      "message", "\n", " ",
      "message", "\t", " "
    ]
  }
  grok {
    match => ["message", "(?m)%{WORD:LOGLEVEL}\;%{WORD:Machine}\;%{DATESTAMP:log_timestamp}\;%{WORD:Area}\=;%{WORD:SubArea}\=;%{WORD:SessionId}\=;%{WORD:StepId}\;%{WORD:User}\=;%{GREEDYDATA:message}\="]
    add_field => { "ApplicationName" => "Onsurance" }
    remove_tag => ["_grokparsefailure"]
    remove_field => ["LogMessage", "Area", "SubArea", "SessionId", "StepId", "User", "_id", "_score", "clientHostname"]
  }

  #Set the event timestamp from the log
  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    target => "@timestamp"
    timezone => "Etc/UTC"
  }
}

I am actually trying to overwrite @timestamp with log_timestamp, but it's not working. What's wrong here?

If I want to create a new index pattern in Kibana, it always shows me only the @timestamp field. Why doesn't it show all the fields available in that index?

Also, the remove_field and remove_tag settings in my filter do not seem to take effect in Kibana. It doesn't matter whether they are in mutate or in grok, neither works.

Is there any way to just skip fields like Area, SubArea, SessionId, etc. so that they are not indexed or displayed in Kibana?

I am actually trying to overwrite @timestamp with log_timestamp, but it's not working. What's wrong here?

Your timestamp isn't in YYYY-MM-dd HH:mm:ss format.

Is there any way to just skip fields like Area, SubArea, SessionId, etc. so that they are not indexed or displayed in Kibana?

If you don't want those fields, don't extract them with the grok filter.

Thanks for your reply. Now I am no longer extracting the unwanted fields.

Here is my log line:

Verbose;OLESRV741;28.12.2016 09:37:32,820;Area=;SubArea=;SessionId=;StepId;User=;Message=QuoteRepository: START SaveQuote with ID: 4248 and ApplicantID: 7737

Here is my filter:

grok {
  match => ["message", "(?m)%{WORD:LOGLEVEL}\;%{WORD:Machine}\;%{DATESTAMP:log_timestamp}\;%{WORD:}\=;%{WORD:}\=;%{WORD:}\=;%{WORD:}\;%{WORD:}\=;%{GREEDYDATA:message}"]
  overwrite => [ "message" ]
  add_field => { "ApplicationName" => "Onsurance" }
  remove_field => ["WORD"]
}

date {
  match => [ "log_timestamp", "dd.MM.YYYY HH:mm:ss" ]
  target => "@timestamp"
}

It seems like it cannot overwrite @timestamp, or my date filter is not working at all.

But it is still unable to overwrite @timestamp. I actually want to show log_timestamp in the Kibana Time column, so that the logs are ordered by log_timestamp.

Hi @magnusbaeck

I figured out the problem. My date format was incorrect.

It must be "dd.MM.yyyy HH:mm:ss,SSS" instead of "dd.MM.yyyy HH:mm:ss".
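So the date filter becomes (matching timestamps like 28.12.2016 09:37:32,820 from the log sample above):

date {
  match => [ "log_timestamp", "dd.MM.yyyy HH:mm:ss,SSS" ]
  target => "@timestamp"
}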

Now it's working.

Thanks for your support

best regards

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.