shine (Shine)
May 16, 2018, 2:53pm
#1
My logs are not being processed by the correct conf file: Customer B's logs are being processed by cust_log_a.conf instead of cust_log_b.conf.
pipelines.yml
- pipeline.id: customer_logs_a
  path.config: "/etc/logstash/conf.d/cust_log_a.conf"
- pipeline.id: customer_logs_b
  path.config: "/etc/logstash/conf.d/cust_log_b.conf"
cust_log_a.conf
input {
  beats {
    port => 5044
    type => "log"
  }
}
filter {
  mutate { gsub => [ "message", "[\n]", "" ] }
  if [fields][customer_name] == 'cust_apple_log' {
    grok {
      ......
    }
  }
}
output {
  elasticsearch {
    hosts => "10.222.3.44:9200"
    manage_template => false
    index => "cust_apple"
  }
  stdout {}
}
 
cust_log_b.conf
input {
  beats {
    port => 5045
    type => "log"
  }
}
filter {
  mutate { gsub => [ "message", "[\n]", "" ] }
  if [fields][customer_name] == 'cust_baseball_log' {
    grok {
      ......
    }
  }
}
output {
  elasticsearch {
    hosts => "10.222.3.44:9200"
    manage_template => false
    index => "cust_baseball"
  }
  stdout {}
}
 
filebeat.yml
filebeat.prospectors:
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_a/logs/events.log
  fields:
    customer_name: cust_apple_log
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_B/logs/events.log
  fields:
    customer_name: cust_baseball_log

output.logstash:
  hosts: ["10.222.3.44:5044", "10.222.3.44:5045"]
Badger
If load balancing is disabled but multiple hosts are configured, one host is selected randomly (there is no precedence). See here.
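That random selection is why Customer B's events can end up on port 5044 and be handled by cust_log_a.conf. One way around it (a sketch, not something stated in the thread) is to point each Filebeat instance at exactly one Logstash port, since a single Filebeat output cannot route different prospectors to different hosts:

```
# filebeat.yml on the host shipping Customer B's logs (hypothetical):
# listing only the port owned by the customer_logs_b pipeline guarantees
# every event from this Filebeat reaches cust_log_b.conf.
output.logstash:
  hosts: ["10.222.3.44:5045"]
```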
shine (Shine)
May 16, 2018, 5:30pm
#3
Thanks Badger,
What do you recommend for processing multiple logs with different patterns?
We have 12 servers (Ubuntu 14.04) with 6 customer websites hosted on each one.
We need to track 3 custom logs for each customer's website.
12 servers
-- 6 customers/server
--- 3 log_files/customer
Should I create one xxx.conf file with 216 if..else statements?
filter {
  if [fields][customer_name] in ['cust_01_log_apple','cust_02_log_apple', ... 'cust_216_log_apple'] {
    grok {
      .... pattern 1
      add_field => { "cust_log_type" => "apple" }
    }
  } else if [fields][customer_name] in ['cust_01_log_baseball','cust_02_log_baseball', ... 'cust_216_log_baseball'] {
    grok {
      .... pattern 2
      add_field => { "cust_log_type" => "baseball" }
    }
  } else if [fields][customer_name] in ['cust_01_log_football','cust_02_log_football', ... 'cust_216_log_football'] {
    grok {
      .... pattern 3
      add_field => { "cust_log_type" => "football" }
    }
  }
}
output {
  elasticsearch {
    hosts => "11.222.3.44:9200"
    manage_template => false
    index => "%{[fields][customer_name]}_%{cust_log_type}"
  }
  stdout {}
}
MagnusBaeck
Why would you need one conditional per server? If you have 6 customers and they all have 3 unique log file formats you'll probably need 18 conditionals, unless you can use a single grok filter that lists 18 expressions (which Logstash will try in order and bail out of as soon as there's a match).
Badger
I would configure filebeat with 18 different prospectors that tag the events with the customer id and the log format. You have 3 different log formats, not 18, right? Then in logstash execute the groks conditionally based on the log format.
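That suggestion could look something like this in Logstash (a sketch; the `filetype` field name and the placeholder patterns are assumptions, not from the thread):

```
# Conditionally grok on the log format, not the customer, so only
# 3 branches are needed no matter how many customers there are.
filter {
  if [fields][filetype] == "apple_log" {
    grok { match => { "message" => ".... pattern 1" } }
  } else if [fields][filetype] == "baseball_log" {
    grok { match => { "message" => ".... pattern 2" } }
  } else if [fields][filetype] == "football_log" {
    grok { match => { "message" => ".... pattern 3" } }
  }
}
```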
shine (Shine)
May 16, 2018, 6:34pm
#6
Hi Badger,
12 servers x 6 customers = 72 customers
Each customer has an apple.log + baseball.log + football.log
Is this what you mean?
filter {
  if [fields][customer_name] in ['cust_1_apple_log','cust_2_apple_log', ... 'cust_72_apple_log'] {
    grok {
      .... pattern 1
      add_field => { "cust_log_type" => "apple" }
    }
  } else if [fields][customer_name] in ['cust_1_baseball_log','cust_2_baseball_log', ... 'cust_72_baseball_log'] {
    grok {
      .... pattern 2
      add_field => { "cust_log_type" => "baseball" }
    }
  } else if [fields][customer_name] in ['cust_1_football_log','cust_2_football_log', ... 'cust_72_football_log'] {
    grok {
      .... pattern 3
      add_field => { "cust_log_type" => "football" }
    }
  }
}
 
filebeat.yml
filebeat.prospectors:
##################### Customer 1 ###############################
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_A/logs/apple.log
  fields:
    customer_name: cust_1_apple_log
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_A/logs/baseball.log
  fields:
    customer_name: cust_1_baseball_log
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_A/logs/football.log
  fields:
    customer_name: cust_1_football_log

##################### Customer 2 ###############################
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_2/logs/apple.log
  fields:
    customer_name: cust_2_apple_log
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_2/logs/baseball.log
  fields:
    customer_name: cust_2_baseball_log
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_2/logs/football.log
  fields:
    customer_name: cust_2_football_log

##################### Customer 3,4,5... ###############
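With naming as regular as `cust_N_apple_log`, a regex conditional could also replace each 72-entry list with a single test per log type (a sketch, not something suggested in the thread):

```
# Match on the suffix of customer_name instead of enumerating 72 values.
filter {
  if [fields][customer_name] =~ /_apple_log$/ {
    grok {
      .... pattern 1
      add_field => { "cust_log_type" => "apple" }
    }
  } else if [fields][customer_name] =~ /_baseball_log$/ {
    grok {
      .... pattern 2
      add_field => { "cust_log_type" => "baseball" }
    }
  } else if [fields][customer_name] =~ /_football_log$/ {
    grok {
      .... pattern 3
      add_field => { "cust_log_type" => "football" }
    }
  }
}
```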
shine (Shine)
May 16, 2018, 6:59pm
#7
Hi MagnusBaeck,
My mistake on 216 customers:
(12 servers) x (6 customers per server) = 72 customers
Only 3 grok patterns are needed.
I made this change, but I noticed apple.log entries match correctly while baseball.log entries get
tags: beats_input_codec_plain_applied, _grokparsefailure
Yet when I run one baseball.log entry through https://grokdebug.herokuapp.com/ it filters correctly.
filter {
  grok {
    # Patterns are tried in order (apple.log first, then baseball.log);
    # multiple patterns for the same field belong in one array, not in
    # two separate match lines with a duplicate "message" key.
    match => { "message" => [
      "\[%{TIMESTAMP_ISO8601:timestamp_utc_jvm}\] (\[(%{WORD:server_name} %{NUMBER:server_ver})\]) \[%{WORD:severity}\] \[(?:%{DATA:glassfish_code}|)\] \[(?:%{JAVACLASS:java_pkg}|)\] \[(?<Thread_Name>[^\]]+)\] \[(?<timeMillis>[^\]]+)\] \[(?<levelValue>[^\]]+)\] (?<StackTrace>(?m:.*))",
      "\[%{TIMESTAMP_ISO8601:timestamp_utc_jvm}\] %{WORD:severity} %{JAVACLASS:java_pkg} \[(?<Thread_Name>[^\]]+)\] (?<log_msg>(?m:.*))"
    ] }
  }
}
I was thinking of something like:
- type: log
  paths:
    - /usr/local/glassfish4/domains/customer_A/logs/apple.log
  fields:
    customer_name: cust_1
    filetype: apple_log
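With the customer id and the log format split into two fields like that, the earlier index template could combine them (a sketch; the field names follow the snippet above, not a tested configuration):

```
output {
  elasticsearch {
    hosts => "10.222.3.44:9200"
    manage_template => false
    # e.g. customer_name "cust_1" + filetype "apple_log"
    # yields the index name "cust_1_apple_log"
    index => "%{[fields][customer_name]}_%{[fields][filetype]}"
  }
}
```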
              
                system  
                (system)
                  Closed 
               
              
                  
                    June 13, 2018,  7:19pm
                   
                   
              9 
               
             
            
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.