"Unable to fetch mapping. Do you have indices matching the pattern?"

Logstash doesn't seem to forward logs to Elasticsearch correctly.

Indices:
yellow open .kibana RixJKa2bQ8iRBwDmHd18Yw 1 1 1 0 3.2kb 3.2kb

/etc/logstash/logstash.yml:
path.config: /etc/logstash/conf.d
...
path.logs: /var/log/logstash

/etc/logstash/conf.d/logstash.conf:
input {
    udp {
        port => 5544
        type => "cisco-fw"
    }
}
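To check that datagrams actually reach a UDP input like the one above, a quick send/receive round-trip helps separate network problems from Logstash problems. This is a sketch: the sample ASA line is a made-up placeholder, and it binds an ephemeral local port instead of 5544 so it is self-contained.

```python
import socket

# The real config above listens on UDP 5544; here we bind an ephemeral
# local port so the sketch works without Logstash running.
sample = "<166>Dec 19 10:57:21 fw01 : %ASA-6-302013: Built inbound TCP connection"

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(sample.encode(), rx.getsockname())  # real use: ("logstash-host", 5544)

data, _ = rx.recvfrom(4096)
print(data.decode())
```

If the message arrives locally but Logstash never sees it from the ASA, look at firewalls between the two hosts (which turned out to be the issue in this thread).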

filter {
    grok {
        match => ["message", "%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}"]
    }

    # Extract fields from each of the detailed message types.
    # The patterns provided below are included in the core of Logstash 1.4.2.
    grok {
            match => [
                    "cisco_message", "%{CISCOFW106001}",
                    "cisco_message", "%{CISCOFW106006_106007_106010}",
                    "cisco_message", "%{CISCOFW106014}",
                    "cisco_message", "%{CISCOFW106015}",
                    "cisco_message", "%{CISCOFW106021}",
                    "cisco_message", "%{CISCOFW106023}",
                    "cisco_message", "%{CISCOFW106100}",
                    "cisco_message", "%{CISCOFW110002}",
                    "cisco_message", "%{CISCOFW302010}",
                    "cisco_message", "%{CISCOFW302013_302014_302015_302016}",
                    "cisco_message", "%{CISCOFW302020_302021}",
                    "cisco_message", "%{CISCOFW305011}",
                    "cisco_message", "%{CISCOFW313001_313004_313008}",
                    "cisco_message", "%{CISCOFW313005}",
                    "cisco_message", "%{CISCOFW402117}",
                    "cisco_message", "%{CISCOFW402119}",
                    "cisco_message", "%{CISCOFW419001}",
                    "cisco_message", "%{CISCOFW419002}",
                    "cisco_message", "%{CISCOFW500004}",
                    "cisco_message", "%{CISCOFW602303_602304}",
                    "cisco_message", "%{CISCOFW710001_710002_710003_710005_710006}",
                    "cisco_message", "%{CISCOFW713172}",
                    "cisco_message", "%{CISCOFW733100}"
            ]
    }
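For reference, the grok patterns above are essentially named-group regular expressions. A rough Python sketch of what a pattern like CISCOFW106023 extracts from a deny message — the regex and field names here are a simplified approximation, not the actual pattern shipped with Logstash:

```python
import re

# Simplified stand-in for a CISCOFW-style grok pattern; field names are illustrative.
pattern = re.compile(
    r"Deny (?P<protocol>\w+) src (?P<src_interface>\w+):(?P<src_ip>[\d.]+)"
    r"(?:/(?P<src_port>\d+))? dst (?P<dst_interface>\w+):(?P<dst_ip>[\d.]+)"
    r"(?:/(?P<dst_port>\d+))?"
)

msg = "Deny tcp src outside:198.51.100.7/43210 dst inside:10.0.0.5/443"
m = pattern.search(msg)
print(m.groupdict())
```

Each match populates event fields such as `src_ip`, which is exactly what the geoip filter below relies on.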

    # Parse the syslog severity and facility
    syslog_pri { }

    geoip {
            add_tag => [ "GeoIP" ]
            database => "/opt/logstash/databases/GeoLiteCity.dat"
            source => "src_ip"
    }

    if [geoip][city_name]      == "" { mutate { remove_field => "[geoip][city_name]" } }
    if [geoip][continent_code] == "" { mutate { remove_field => "[geoip][continent_code]" } }
    if [geoip][country_code2]  == "" { mutate { remove_field => "[geoip][country_code2]" } }
    if [geoip][country_code3]  == "" { mutate { remove_field => "[geoip][country_code3]" } }
    if [geoip][country_name]   == "" { mutate { remove_field => "[geoip][country_name]" } }
    if [geoip][latitude]       == "" { mutate { remove_field => "[geoip][latitude]" } }
    if [geoip][longitude]      == "" { mutate { remove_field => "[geoip][longitude]" } }
    if [geoip][postal_code]    == "" { mutate { remove_field => "[geoip][postal_code]" } }
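The eight conditionals above just prune empty geoip sub-fields so they don't get indexed into Elasticsearch. Expressed as ordinary code, the same idea is a one-line dict comprehension — this sketch uses a made-up event dict, not a real Logstash event:

```python
# Hypothetical event, roughly what the pipeline sees after the geoip filter.
event = {
    "geoip": {
        "city_name": "",          # empty -> should be dropped
        "country_name": "Sweden",
        "latitude": 59.33,
        "postal_code": "",        # empty -> should be dropped
    }
}

# Keep only the geoip sub-fields that are non-empty.
event["geoip"] = {k: v for k, v in event["geoip"].items() if v != ""}
print(event["geoip"])
```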


    # Gets the source IP whois information from the GeoIPASNum.dat flat file database
    geoip {
            add_tag => [ "Whois" ]
            database => "/opt/logstash/databases/GeoIPASNum.dat"
            source => "src_ip"
    }

    # Parse the date
    date {
            match => ["timestamp",
                    "MMM dd HH:mm:ss",
                    "MMM  d HH:mm:ss",
                    "MMM dd yyyy HH:mm:ss",
                    "MMM  d yyyy HH:mm:ss"
            ]
    }
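The four match patterns above are Joda-style formats; `MMM  d` (two spaces) covers single-digit days in classic syslog timestamps, and the `yyyy` variants cover ASA timestamps that include a year. A quick Python sanity check of the equivalent parsing — the year substitution is an assumption here, since the first two formats carry no year:

```python
from datetime import datetime

# Joda "MMM dd HH:mm:ss" corresponds to strptime "%b %d %H:%M:%S".
ts = datetime.strptime("Dec 19 10:57:21", "%b %d %H:%M:%S")
# Syslog timestamps lack a year, so strptime defaults to 1900; a real
# pipeline (like the date filter above) fills in the current year.
ts = ts.replace(year=2017)
print(ts.isoformat())
```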

}

output {
    stdout {
        codec => json
    }

    elasticsearch {
        hosts => ["localhost:9200"]
        flush_size => 1
    }

}

Nothing is listening on port 5544:

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5601 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 ::1:9200 :::* LISTEN -
tcp6 0 0 127.0.0.1:9200 :::* LISTEN -
tcp6 0 0 ::1:9300 :::* LISTEN -
tcp6 0 0 127.0.0.1:9300 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
udp 0 0 10.193.37.64:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp6 0 0 fe80::21d:d8ff:fed0:123 :::* -
udp6 0 0 ::1:123 :::* -
udp6 0 0 :::123 :::* -

:/usr/share/logstash/bin$ sudo ./logstash -f /etc/logstash/conf.d/logstash.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[ERROR] 2017-12-19 10:57:21.859 [LogStash::Runner] geoip - Invalid setting for geoip filter plugin:

filter {
geoip {
# This setting must be a path
# File does not exist or cannot be opened /opt/logstash/databases/GeoLiteCity.dat
database => "/opt/logstash/databases/GeoLiteCity.dat"
...
}
}
[FATAL] 2017-12-19 10:57:21.870 [LogStash::Runner] runner - The given configuration is invalid. Reason: Something is wrong with your configuration.
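One way around this error, assuming the GeoLite2 City database bundled with recent versions of the geoip filter is acceptable: drop the `database` option entirely and let the plugin fall back to its built-in database (newer versions of the filter take MaxMind `.mmdb` files and no longer read the legacy `GeoLiteCity.dat` format anyway). A sketch:

```
geoip {
    source => "src_ip"
    add_tag => [ "GeoIP" ]
    # database => ...   # omitted: falls back to the bundled GeoLite2 City database
}
```

The second geoip block (the ASN lookup against `GeoIPASNum.dat`) has the same problem and would likewise need an `.mmdb`-format ASN database with a newer plugin.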

As the error message says:

Path "/usr/share/logstash/data" must be a writable directory. It is not writable.

I forgot to run it as sudo.

Running it with sudo, the output is identical to the run above: the same GeoIP database error, ending in the same FATAL line.

I'm mostly curious about the logstash.yml file: why isn't it being picked up?

:/usr/share/logstash/bin$ sudo ./logstash -f /etc/logstash/conf.d/logstash.conf --path.settings /etc/logstash/ -t
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

Found a lot of small errors.

My logstash.conf has input, filter, and output in the same conf file. Is that OK? I see lots of guides that use separate conf files for input, filter, and output.

The Logstash Java process is consuming lots of CPU:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20261 root 20 0 3684040 232208 17752 S 200,7 1,4 0:13.53 java

This VM has 4 vCPUs and I'm sending syslog from a Cisco ASA to port 5544.

Solved! Apart from all the small typos, I had missed a firewall opening. :slight_smile:

My logstash.conf has input, filter, and output in the same conf file. Is that OK? I see lots of guides that use separate conf files for input, filter, and output.

That's up to you. Logstash doesn't care.

The Logstash Java process is consuming lots of CPU.

But how much is it processing?

It was only up and running for a few minutes, then it stopped working.

I'm running a VM with 16 vCPUs; load is 4+, and the Logstash Java process usually sits around 300-650% CPU time.

I'd expect there to be clues in the Logstash logfile.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.