Logstash cannot compile the .conf file

Hello,

So I have this weird situation where all my Logstash config files just stopped working. I noticed it started a few weeks ago. The only change I made that was in any way related to Logstash is that I updated Elasticsearch, Kibana and Filebeat but did not update Logstash. The error still happens after I updated Logstash to version 7.13.2.

The error says that an expected token was not found, but it is there...

[root]# /usr/share/logstash/bin/logstash -e /etc/logstash/conf.d/name1/name2.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-07-27 09:26:38.400 [main] runner - Starting Logstash {"logstash.version"=>"7.13.2", "jruby.version"=>"jruby 9.2.16.0 (2.5.7) 2021-03-03 f82228dc32 OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[WARN ] 2021-07-27 09:26:38.910 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-07-27 09:26:39.834 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[ERROR] 2021-07-27 09:26:40.246 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:187:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:389:in `block in converge_state'"]}
[INFO ] 2021-07-27 09:26:40.410 [LogStash::Runner] runner - Logstash shut down.

This is the config file, which had been working fine for months without any problems.

input {
    snmp {
        interval => 60
        hosts => [
            {host => "udp:10.10.10.10/161" community => "1234" retries => 5}
        ]
        tables => [
            {
               name => "interfaces"
               columns => [
                        "1.3.6.1.2.1.2.2.1.1",
                        "1.3.6.1.2.1.2.2.1.3",
                        "1.3.6.1.2.1.2.2.1.4",
                        "1.3.6.1.2.1.2.2.1.5",
                        "1.3.6.1.2.1.2.2.1.6",
                        "1.3.6.1.2.1.2.2.1.7",
                        "1.3.6.1.2.1.2.2.1.8",
                        "1.3.6.1.2.1.2.2.1.9",
                        "1.3.6.1.2.1.31.1.1.1.6",
                        "1.3.6.1.2.1.2.2.1.11",
                        "1.3.6.1.2.1.2.2.1.12",
                        "1.3.6.1.2.1.2.2.1.13",
                        "1.3.6.1.2.1.2.2.1.14",
                        "1.3.6.1.2.1.2.2.1.15",
                        "1.3.6.1.2.1.31.1.1.1.10",
                        "1.3.6.1.2.1.2.2.1.17",
                        "1.3.6.1.2.1.2.2.1.18",
                        "1.3.6.1.2.1.2.2.1.19",
                        "1.3.6.1.2.1.2.2.1.20",
                        "1.3.6.1.2.1.31.1.1.1.1"
                        ]
            }

        ]
        add_field => { "host" => "%{[@metadata][host_address]}"}
        tags => ["snmp", "interface"]
    }
}

filter{

        split {
                field => "interfaces"
        }

        mutate {
                remove_field => [ "[interfaces]" ]
                remove_field => [ "[@version]" ]
                # INTERFACE
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.ifMIB.ifMIBObjects.ifXTable.ifXEntry.ifHCOutOctets]" => "interface.ifOutOctets"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifType]" => "interface.ifType"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifSpeed]" => "interface.ifSpeed"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.ifMIB.ifMIBObjects.ifXTable.ifXEntry.ifName]" => "interface.ifName"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOutUcastPkts]" => "interface.ifOutUcastPkts"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInNUcastPkts]" => "interface.ifInNUcastPkts"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInUcastPkts]" => "interface.ifInUcastPkts"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOutNUcastPkts]" => "interface.ifOutNUcastPkts"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInDiscards]" => "interface.ifInDiscards"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOutErrors]" => "interface.ifOutErrors"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOperStatus]" => "interface.ifOperStatus"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex]" => "interface.ifIndex"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInErrors]" => "interface.ifInErrors"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifOutDiscards]" => "interface.ifOutDiscards"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.ifMIB.ifMIBObjects.ifXTable.ifXEntry.ifHCInOctets]" => "interface.ifInOctets"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifPhysAddress]" => "interface.ifPhysAddress"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifLastChange]" => "interface.ifLastChange"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifMtu]" => "interface.ifMtu"}
                rename => { "[interfaces][iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifAdminStatus]" => "interface.ifAdminStatus"}

                # HOST
                rename => { "host" => "host.ip"}
                add_field => { "ip.observer" => "%{host.ip}"}
                add_field => { "host.hostname" => "Hostname" }
        }
        # Adding status explanation
        if 1 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "up" }}
        } else if 2 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "down" }}
        } else if 3 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "testing" }}
        } else if 4 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "unknown" }}
        } else if 5 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "notPresent" }}
        } else if 6 == [interface.ifOperStatus] {
          mutate { add_field => {  "interface.ifOperStatusText" => "lowerLayerDown" }}
        }

        if 1 == [interface.ifAdminStatus] {
          mutate { add_field => {  "interface.ifAdminStatusText" => "up" }}
        } else if 2 == [interface.ifAdminStatus] {
          mutate { add_field => {  "interface.ifAdminStatusText" => "down" }}
        } else if 3 == [interface.ifAdminStatus] {
          mutate { add_field => {  "interface.ifAdminStatusText" => "testing" }}
        }

}

output {
# file {
#   path => "/etc/logstash/conf.d/tests/snmp.txt"
# }
# stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["ip:9200"]
        index => "network-devices-%{+YYYY.MM.dd}"
        user => "${ES_LOG}"
        password => "${ES_PWD}"
        cacert => "/etc/ca/ca.crt"
      }

}

What is path.config set to? I think the most likely explanation is that it points to a directory and a new file was created in that directory.
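
For context: on a package install, path.config is usually set per pipeline in /etc/logstash/pipelines.yml. The packaged default looks roughly like the snippet below; the exact paths on your system may differ.

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"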

Hello Badger,

I have checked the folder and there are no new files.

I also test the config with the command
/usr/share/logstash/bin/logstash -e path_to_file
so that the pipelines.yml configuration is excluded.

Every one of my files that used to work suddenly stopped, with the message below:
:message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)"
even though that is not true, because there is "input" at the beginning of the file.

Logstash is installed on CentOS 7.
file -i returns
text/plain; charset=us-ascii

What file encoding/charset does Logstash expect?

Please help me resolve this issue. I am really out of ideas at this point.

Any ideas?

-e is used to pass a config string to Logstash on the command line. So it is trying to interpret "/etc/logstash/conf.d/name1/name2.conf" itself as a configuration, and it fails at column one:

 /

You would use -e like this:

/usr/share/logstash/bin/logstash -e 'input {stdin {}} output {stdout {}}'

You should be using -f:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/name1/name2.conf
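
If you only want to verify the syntax without starting the pipeline, you can also add the config test flag, and --path.settings as the warning in your log suggests, so logstash.yml is picked up. Something along these lines (adjust the paths to your setup):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/name1/name2.conf --config.test_and_exit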

I have no idea why, but I had not checked the Logstash log files. It turned out to be an access rights issue with the certificates... Thank you very much for your input :slight_smile:
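
In case anyone else lands here with a similar certificate problem: a quick sanity check is whether the user the Logstash service runs as (typically logstash on a package install, but confirm on your system) can actually read the cacert referenced in the output block, for example:

ls -l /etc/ca/ca.crt
sudo -u logstash cat /etc/ca/ca.crt >/dev/null

If the second command fails with "Permission denied", that is the problem.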

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.