Log Monitoring

Hi,

There is a requirement to configure an alert for the error code below and to check whether the emails are being generated.
Error_code: AMQ9616
Q_manager: ESBPRT1

I have a Logstash config file in which I have written a mail alert: when a particular text is present in the message, an email with that message is sent automatically. Please find the configuration file (logstash.conf) below.

filter {
if [type] == "mq"
{

	# Grok patterns to extract values from the message and store them in the required fields.
	# Patterns to get the custom fields
	grok{			
			match=>{"message"=> "^?(?<event_timestamp>[0-9\/.\s:]+)"}
		}		
	grok{			
			match=>{"message"=> ".?(?<process>(Process))\(?(?<process_id>[0-9\.]+)"}
		}
	grok{			
			match=>{"message"=> ".?(?<userdata>(User))\(%{WORD:user}"}
		}
	grok{			
			match=>{"message"=> ".?(?<program>(Program))\(%{WORD:program_info}"}
		}
	grok{			
			match=>{"message"=> ".?(?<host>(Host))\(?(?<hostname>[0-9a-zA-Z\-\.]+)"}
		}
	grok{			
			match=>{"message"=> ".?(?<qmgr>(QMgr))\(%{WORD:q_manager}"}
		}
	grok{			
			match=>{"message"=> ".?(?<vrmf>(VRMF))\(?(?<vrmf_version>[0-9\.]+)"}
		}
	grok{			
			match=>{"message"=> ".?(?<actiondata>(ACTION:)) ?(?<action>[A-Z0-9a-z.\s~\n`!@#$%^&\[\]\*\(\)\?\=\+_\-\{\}|;':\/\/\"\<\,\>]+)\-"}
		}
	grok{			
			match=>{"message"=> ".?(?<error_code>AMQ[0-9]+)\: ?(?<error_description>[A-Z0-9a-z.\s~\n`!@#$%^&\[\]\*\(\)\?\=\+_\-\{\}|;':\/\/\"\<\,\>]+)\EXPLANATION:?(?<error_message>[A-Z0-9a-z.\s~\n\\`!@#$%^&\[\]\*\(\)\?\=\+_\-\{\}|;':\/\/\"\<\,\>]+)ACTION"}
		}
	
	# Convert the timestamp string into a date field
	date { 
	  match => ["event_timestamp", "MM/dd/yyyy HH:mm:ss " ] 
	  target => ["event_timestamp"]		  
	}

	# We need to generate a fingerprint because we don't want any duplicates in our Elasticsearch data.
    # Each event must be indexed only once, no matter what. The fingerprint is a SHA1 hash built from the fields below.
    # This generates a "fingerprint" field, which is used as the document_id.
	fingerprint {
					  source => ["event_timestamp","process_id","error_code","error_description", "action","user","offset"]
					  target => "fingerprint"
					  key => "78787878"
					  method => "SHA1"
					  concatenate_sources => true
				   }
	# cleanup
		mutate {	  
		  remove_field => ["process","userdata","program","host","vrmf","actiondata","qmgr"]
		}
 }

}

output {
if [type] == "mq"
{
elasticsearch {
# Index name under which the events are stored in Elasticsearch; Kibana displays them using this index name
index => "logstash-dd.mq_log"
hosts => ["10.85.74.165:9200"]
document_id => "%{fingerprint}"
}
}

if [q_manager] == "ESBPRT1"
{
if [error_code] == "AMQ9616"
		{
		
		email{
				  to => "123@xxx.com"
				  from => "456@xxx.com"
				  subject => "MQ Alert - AMQ9616"
				  body => "Hello Team,\n\n Error code AMQ9616 occurred @ %{event_timestamp}. Please check and take necessary action. \n \n Link to 123 Dashboard: http://123.com/goto/a52aab48ddfea4a8594ed8342f853f00 \n\n\n * This is an automated e-mail and any responses to this e-mail will not be monitored \n Thank You!"
				  port => 25	
				}
		}

      }
} 

I have configured the email for the above event, but a lot of emails are being generated.

I need an alert generated every 15 or 30 minutes with a consolidated error count and the error details. Is there any option for that?

You could write the alerts to a separate Elasticsearch index, then use an elasticsearch input with a schedule to check whether there are recent alerts.
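
A rough illustration of that idea (a minimal sketch, untested; the alerts index name "logstash-dd.mq_alerts", the 30-minute schedule and the time-range query are assumptions you would adapt):

# In the existing output section, write the matching alerts to a separate
# index instead of emailing each one (index name is an assumption):
if [q_manager] == "ESBPRT1" {
  if [error_code] == "AMQ9616" {
    elasticsearch {
      hosts => ["10.85.74.165:9200"]
      index => "logstash-dd.mq_alerts"
      document_id => "%{fingerprint}"
    }
  }
}

# A second pipeline then polls that index on a schedule and mails whatever
# was written during the last interval:
input {
  elasticsearch {
    hosts => ["10.85.74.165:9200"]
    index => "logstash-dd.mq_alerts"
    # cron syntax: run every 30 minutes
    schedule => "*/30 * * * *"
    # only pick up alerts from the last 30 minutes
    query => '{ "query": { "range": { "@timestamp": { "gte": "now-30m" } } } }'
  }
}

output {
  email {
    to => "123@xxx.com"
    from => "456@xxx.com"
    subject => "MQ Alert - AMQ9616 (last 30 minutes)"
    body => "Error %{error_code} occurred on %{q_manager} at %{event_timestamp}"
    port => 25
  }
}

Note that this still emits one event (and one email) per stored alert; turning the hits into a single consolidated count/summary email would need an extra step (for example Logstash's aggregate filter, or querying Elasticsearch aggregations directly), which is not shown here.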
