Multiple syslog inputs into Logstash

Hello,

I want to parse logs from network devices with Logstash. I have configured individual pipelines for Cisco and Palo Alto, each with a syslog input on a different port and an output to a different index name. But I see that Palo Alto logs are being indexed with both their configured index pattern and the Cisco index pattern, and I am confused about why this is happening. Here are the inputs and outputs of my two pipelines.

Cisco pipeline input:

input {
    syslog {
        port => "5014"
        type => "syslog"
        tags => [ "ios-parsed" ]
    }
}

Palo Alto pipeline input:

input {
    syslog {
        timezone => "Asia/Dhaka"
        port => "5514"
        type => "syslog"
        tags => [ "PAN-OS_syslog" ]
    }
}

Cisco pipeline output:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "cisco-logs-%{+YYYY.MM.dd}"
  }
}

Palo Alto pipeline output:

output {

    if "PAN-OS_traffic" in [tags] {
        elasticsearch {
            index => "panos-traffic"
            hosts => ["localhost:9200"]
        }
    }

    else if "PAN-OS_threat" in [tags] {
        elasticsearch {
            index => "panos-threat"
            hosts => ["localhost:9200"]
        }
    }
    else if "PAN-OS_Config" in [tags] {
        elasticsearch {
            index => "panos-config"
            hosts => ["localhost:9200"]
        }
    }

    else if "PAN-OS_System" in [tags] {
        elasticsearch {
            index => "panos-system"
            hosts => ["localhost:9200"]
        }
    }
}

Is there any mistake in the inputs, the outputs, or the filters I am using? I have written filters for both of them keyed on the tags set in the input.

Are you using multiple pipelines in pipelines.yml?

How are you running Logstash? What does your pipelines.yml look like?

If you have not configured multiple pipelines in pipelines.yml, you do not have individual pipelines; you have just one big pipeline merging all your configuration files.
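To run truly separate pipelines, each configuration file (or glob) needs its own entry in pipelines.yml. A minimal sketch, assuming your Cisco and Palo Alto configurations live in separate files (the pipeline ids and file names here are illustrative):

```yaml
# /etc/logstash/pipelines.yml
# Each pipeline has its own id and config path, so events from one
# input can never reach the other pipeline's filters and outputs.
- pipeline.id: cisco
  path.config: "/etc/logstash/conf.d/cisco.conf"
- pipeline.id: paloalto
  path.config: "/etc/logstash/conf.d/paloalto.conf"
```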

Your cisco output does not have a conditional:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "cisco-logs-%{+YYYY.MM.dd}"
  }
}

So, everything that passes through your inputs and filters will be sent to this index, which seems to be your case.
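A tag-based conditional, mirroring what your Palo Alto output already does, would keep other events out of the Cisco index. A sketch, assuming the `ios-parsed` tag set by your Cisco input:

```
output {
    # Only events tagged by the Cisco input reach this index.
    if "ios-parsed" in [tags] {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "cisco-logs-%{+YYYY.MM.dd}"
        }
    }
}
```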

Hello leandrojmp,

Thanks for your reply.

Sorry, I mistakenly used the word pipeline; I meant the configuration files in the /etc/logstash/conf.d/ directory. Yes, I understand your point about the output, since there is no condition. I added a condition and found that the Palo Alto logs no longer go to the Cisco index. If I want to get the Cisco logs now, do I need to create a separate pipeline for them? My pipelines.yml configuration is the default:

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

Note: I can't see any Cisco logs now after adding the condition.

My configuration for Cisco is below:

input {
  udp {
    port => 5014
    type => "syslog"
  }
}


filter {
    ##########################
    # General section
    ##########################
    grok {
        match => {
            "message" => "<%{NONNEGINT:syslog_pri}>\d+: (%{SYSLOGHOST:device}: )?\*?\.?(\d+: )?%{CISCOTIMESTAMP:log_date}(\s+%{WORD})?: (%%{CISCO_REASON:vendor_facility}-%{INT}-%{CISCO_REASON:vendor_facility_process}: )?%{GREEDYDATA:log_message}"
        }
        add_tag => ["ios_parsed"]
    }
    syslog_pri { }
    ##########################
    # Config Audit section
    ##########################
    if "ios_parsed" in [tags] and "PARSER" in [vendor_facility] {
        grok {
            match => {
                "log_message" => "User:%{USER:username}\s+From\s+%{IP:source_ip}\s+logged command:%{GREEDYDATA:command_used}"
            }
        }

        mutate {
            add_tag => ["cisco-traffic"]
        }
    }
    ############################
    # Interface status section
    ############################
    if "ios_parsed" in [tags] and "LINEPROTO" in [vendor_facility] and "UPDOWN" in [vendor_facility_process] {
        # LINEPROTO-5-UPDOWN messages look like:
        # Line protocol on Interface GigabitEthernet0/1, changed state to down
        grok {
            match => {
                "log_message" => "Line protocol on Interface %{DATA:interface}, changed state to %{WORD:interface_state}"
            }
        }

        mutate {
            add_tag => ["cisco-traffic"]
        }
    }
    ###################################
    # OSPF Adjacency change section
    ###################################
    if "ios_parsed" in [tags] and "OSPF" in [vendor_facility] and "ADJCHG" in [vendor_facility_process] {
        grok {
            match => {
                "log_message" => "Process\s+%{NUMBER:ospf_instance},\s+Nbr\s+%{IP:ospf_neighbor}\s+on\s+%{DATA:ospf_interface}\s+from\s+%{DATA:before_ospf_state}\s+to\s+%{DATA:after_ospf_state},\s+%{GREEDYDATA:ospf_state_action_result}"
            }
        }

        mutate {
            add_tag => ["cisco-traffic"]
        }
    }

    ###################################
    # BGP Neighbor state
    ###################################
    if "ios_parsed" in [tags] and "BGP" in [vendor_facility] and "NOTIFICATION" in [vendor_facility_process] {
        grok {
            match => {
                "log_message" => "sent\s+to\s+neighbor\s+%{IP:bgp_neighbor}\s+passive\s+%{URIPARAM:retries}\s+%{GREEDYDATA:bgp_other_message}"
            }
        }

        mutate {
            add_tag => ["cisco-traffic"]
        }
    }

    ###################################
    # Configuration change by user
    ###################################
    if "ios_parsed" in [tags] and "SYS" in [vendor_facility] and "CONFIG_I" in [vendor_facility_process] {
        grok {
            match => {
                "log_message" => "Configured\s+from\s+console\s+by\s+%{USER:username}\s+on\s+vty0\s+%{IP:source_ip}"
            }
        }

        mutate {
            add_tag => ["cisco-traffic"]
        }
    }

}

output {
    if "cisco-traffic" in [tags] {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "cisco-logs-%{+YYYY.MM.dd}"
        }
    }
}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.