Issue with ports

Hello everyone!

I have Elastic Stack 6.7 on one node and I am trying to parse logs from Cisco devices. With the configuration below, Logstash stops listening on all of its ports (including the ones for Winlogbeat and Filebeat).
P.S. I have grok installed.
Please help.

INPUT - Logstash listens on port 8514 for these logs.

input {
  syslog {
    port => "8514"
    type => "syslog"
  }
}
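
(Side note: the syslog input listens on both TCP and UDP on that port. Since the device config below ships logs with "transport tcp", a plain tcp input would be a possible alternative — just a sketch, not the fix for the problem here:)

input {
  tcp {
    # same port the devices send to; TCP only, matching "transport tcp" below
    port => 8514
    type => "syslog"
  }
}

Note that the syslog input also parses the syslog priority header for you, which a bare tcp input does not, so the syslog input is the simpler choice once the pipeline starts.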

FILTER - Try to parse the Cisco log format

Cisco device configuration, for reference:

clock timezone Europe +1
no clock summer-time
ntp server 0.0.0.0 prefer
ntp server 129.6.15.28
ntp server 131.107.13.100
service timestamps log datetime msec show-timezone
service timestamps debug datetime msec show-timezone
logging source-interface Loopback0
! Two logging servers for redundancy
logging host 0.0.0.0 transport tcp port 8514
logging host 0.0.0.0 transport tcp port 8514
logging trap 6

filter {

  # NOTE: The frontend logstash servers set the type of incoming messages.
  # [host_group] is also expected to be set upstream (see the note after this filter block).

  if [type] == "syslog" and [host_group] == "Netzwerk" {
    # Parse the log entry into sections. Cisco doesn't use a consistent log format, unfortunately.
    grok {
      patterns_dir => "/var/lib/neteye/logstash/etc/pattern.d"
      match => [
        # IOS
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?:( %{NUMBER}:)? )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?:( %{NUMBER}:)? )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",

        # Nexus
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence#})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",

        # WLC
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGHOST:wlc_host}: %{DATA:wlc_action}: %{CISCOTIMESTAMP:log_date}: %{DATA:wlc_mnemonic}: %{DATA:wlc_mnemonic_message} %{GREEDYDATA:message}"
      ]

      overwrite => [ "message" ]

      add_tag => [ "cisco" ]
    }
  }

  # If we made it here, the grok was successful.
  if "cisco" in [tags] {
    date {
      match => [
        "log_date",

        # IOS
        "MMM dd HH:mm:ss.SSS ZZZ",
        "MMM dd HH:mm:ss ZZZ",
        "MMM dd HH:mm:ss.SSS",

        # Nexus
        "YYYY MMM dd HH:mm:ss.SSS ZZZ",
        "YYYY MMM dd HH:mm:ss ZZZ",
        "YYYY MMM dd HH:mm:ss.SSS",

        # Hail Mary
        "ISO8601"
      ]
    }

    # Add the log level's name instead of just a number.
    mutate {
      gsub => [
        "severity_level", "0", "0 - Emergency",
        "severity_level", "1", "1 - Alert",
        "severity_level", "2", "2 - Critical",
        "severity_level", "3", "3 - Error",
        "severity_level", "4", "4 - Warning",
        "severity_level", "5", "5 - Notification",
        "severity_level", "6", "6 - Informational"
      ]
    }

  } # if
} # filter
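
Note: the filter above only runs when [host_group] == "Netzwerk", but nothing in this snippet sets that field; per the comment, a frontend Logstash is expected to set it upstream. If the devices log straight to this node, the field never exists, the grok never runs, and events match neither output conditional. A minimal sketch of setting it locally, assuming everything arriving on the syslog input is network gear:

filter {
  if [type] == "syslog" {
    mutate {
      # hypothetical: mark all syslog traffic on this node as network gear
      add_field => { "host_group" => "Netzwerk" }
    }
  }
}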

output {

  # Something went wrong with the grok parsing; don't discard the messages, though.
  if "_grokparsefailure" in [tags] {
    file {
      path => "/tmp/fail-%{type}-%{+YYYY.MM.dd}.log"
    }
  }

  # The message was parsed correctly and should be sent to Elasticsearch.
  if "cisco" in [tags] {
    #file {
    #  path => "/tmp/%{type}-%{+YYYY.MM.dd}.log"
    #}

    elasticsearch {
      hosts           => "localhost:9200"
      manage_template => false
      index           => "network-%{+YYYY.MM.dd}"
      document_type   => "%{type}"
      document_id     => "%{fingerprint}"   # see the note after this output block
    }
  }
}
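
Also worth checking: document_id => "%{fingerprint}" needs a fingerprint field on the event, but no fingerprint filter appears in the snippet, so every document would get the literal id "%{fingerprint}" and overwrite the previous one. A minimal sketch, assuming you want to deduplicate on a hash of the message:

filter {
  fingerprint {
    # hash the raw message and store the result in [fingerprint]
    source => "message"
    target => "fingerprint"
    method => "SHA1"
  }
}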

Are there any errors in the logstash logs?

Last 15 lines (the log is huge; the forum has a 7000-character limit):

[2019-04-12T01:38:41,478][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-04-12T01:38:41,492][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-04-12T01:38:41,525][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-04-12T01:38:41,543][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-12T01:38:41,544][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-12T01:38:41,565][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-04-12T01:38:41,572][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-04-12T01:38:41,577][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-04-12T01:38:41,582][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-12T01:38:41,582][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-12T01:38:41,602][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-04-12T01:38:41,860][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x43e9cb83>", :error=>"pattern %{CISCOTIMESTAMPTZ:log_date} not defined", :thread=>"#<Thread:0x6a734c16 run>"}
[2019-04-12T01:38:41,865][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{CISCOTIMESTAMPTZ:log_date} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1411:in `loop'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:281:in `block in register'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:275:in `block in register'", "org/jruby/RubyHash.java:1419:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.0.4/lib/logstash/filters/grok.rb:270:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:259:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:612:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:280:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:217:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:176:in `block in start'"], :thread=>"#<Thread:0x6a734c16 run>"}
[2019-04-12T01:38:41,895][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
[2019-04-12T01:38:42,395][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

OK, so you are referring to a grok pattern (CISCOTIMESTAMPTZ) that does not exist. It is not one of the patterns shipped with Logstash, so it has to be defined in a file under your patterns_dir. Because of that, the main pipeline aborts on startup, and since all of your inputs live in that pipeline, Logstash stops listening on every port, including the Beats ones. You need to fix that.
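
A minimal sketch of such a pattern file in /var/lib/neteye/logstash/etc/pattern.d, assuming the definitions these Cisco examples are usually based on (CISCOTIMESTAMP, YEAR, MONTH, MONTHDAY, TIME and TZ are patterns that ship with Logstash):

# pattern.d/cisco -- the filename is arbitrary; every file in patterns_dir is loaded
CISCOTIMESTAMPTZ %{CISCOTIMESTAMP}( %{TZ})?
NEXUSTIMESTAMP %{YEAR} %{MONTH} %{MONTHDAY} %{TIME}( %{TZ})?

Grok compilation stops at the first missing pattern, so if Logstash complains about another undefined pattern after a restart, define it in the same file.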

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.