Cannot Parse error: Lexical error

I just upgraded Kibana to version 6.5.3, but the client servers are still using version 6.3. My other services are parsing just fine, and the logs for the new servers I've recently added are in a similar format to the rest of them.

I'm receiving the following error in the Elasticsearch logs:

Caused by: org.apache.lucene.queryparser.classic.ParseException: Cannot parse '/opt/servicename/': Lexical error at line 1, column 18.  Encountered: <EOF> after : ""

Here's the first line of our java application logs:

2019-01-11 17:41:25.052  INFO 22972 --- [main] EnableEncryptablePropertiesConfiguration : Bootstraping jasypt-string-boot auto configuration in context:servicename:staging:443

My Filebeat configuration is the same on all of my client servers:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/*/tomcat/logs/*
  exclude_files:
    - /opt/*/tomcat/logs/access*
    - /tomcat/logs/garbageCollectionLog

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  multiline.pattern: ^[0-9]{4}-[0-9]{2}-[0-9]{2}

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent to "previous" and "before" is the equivalent to "next" in Logstash
  multiline.match: after
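
With this pattern plus negate: true and match: after, any line that does not begin with a yyyy-MM-dd timestamp is appended to the preceding dated line. For example, a hypothetical Java stack trace like the one below ships as a single event, because only the first line matches the pattern:

2019-01-11 17:41:26.001 ERROR 22972 --- [main] SomeService : request failed
java.lang.NullPointerException: null
    at com.example.SomeService.handle(SomeService.java:42)
    at com.example.Main.main(Main.java:10)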

Here are the .conf files from the /etc/logstash/conf.d folder:

10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
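
To illustrate with a made-up syslog line, the grok pattern above turns

Jan 11 17:41:25 web01 sshd[1042]: Accepted publickey for deploy from 10.0.0.5

into fields like syslog_timestamp: "Jan 11 17:41:25", syslog_hostname: "web01", syslog_program: "sshd", syslog_pid: "1042", and syslog_message: "Accepted publickey for deploy from 10.0.0.5". The date filter then parses syslog_timestamp into @timestamp.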

20-filter-tomcat-logs.conf

filter {
    # access.log
    if ([source] =~ /.*\.txt$/) {
        grok {
            # Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T "%{Referer}i" "%{User-Agent}i"
            # 10.0.0.7 - - [03/Sep/2017:10:58:19 +0000] "GET /pki/scep/pkiclient.exe?operation=GetCACaps&message= HTTP/1.1" 200 39
            match => [ "message" , "%{IPV4:clientIP} - %{NOTSPACE:user} \[%{DATA:timestamp}\] \"%{WORD:method} %{NOTSPACE:request} HTTP/1.1\" %{NUMBER:status} %{NUMBER:bytesSent}" ]
            remove_field => [ "message" ]
            add_field => { "[@metadata][cassandra_table]" => "tomcat_access" }
        }
        grok {
            match => [ "request", "/%{USERNAME:app}/" ]
            tag_on_failure => [ ]
        }
        date {
            match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
            remove_field => [ "timestamp" ]
        }
        ruby {
            code => "event.set('ts', event.get('@timestamp'))"
        }
        mutate {
            lowercase => [ "user" ]
            convert => [ "bytesSent", "integer", "duration", "float" ]
            update =>  { "host" => "%{[beat][hostname]}" }
            remove_field => [ "beat","type","geoip","input_type","tags" ]
        }
        if [user] == "-" {
            mutate {
                remove_field => [ "user" ]
            }
        }
        # drop unmatching message (like IPv6 requests)
        if [message] =~ /(.+)/  {
            drop { }
        }
    }
}
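
For reference, run against the sample access-log line in the comment above, this filter should produce an event roughly like the following sketch (field names come from the grok pattern; the "-" user is dropped and message is removed on a successful match):

{
  "clientIP": "10.0.0.7",
  "method": "GET",
  "request": "/pki/scep/pkiclient.exe?operation=GetCACaps&message=",
  "status": "200",
  "bytesSent": 39,
  "app": "pki",
  "@timestamp": "2017-09-03T10:58:19.000Z",
  "ts": "2017-09-03T10:58:19.000Z"
}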

30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
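
With this output, events shipped by Filebeat land in daily indices named like filebeat-2019.01.11. A quick way to confirm they are arriving (assuming Elasticsearch is on localhost) is:

curl 'localhost:9200/_cat/indices/filebeat-*?v'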

I'm not sure why the log isn't parsing properly, since this server has log patterns similar to the rest of our Java applications. Do you have any ideas or suggestions for a solution?

The error message in the Elasticsearch log mentions the string /opt/servicename/. Can you find the log line(s) in your log files that contain this string and post them here?
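
For context: that lexical error comes from the Lucene query parser, not from ingest, which suggests a search (e.g. in the Kibana search bar) rather than Filebeat or Logstash is producing it. In Lucene query syntax an unquoted / starts a regular expression, so the trailing / in /opt/servicename/ opens a second regex that is never closed, which would explain the "Encountered: <EOF>" at column 18. A hypothetical reproduction against a filebeat-* index (index name assumed):

curl -s 'localhost:9200/filebeat-*/_search' -H 'Content-Type: application/json' -d '
{
  "query": {
    "query_string": { "query": "/opt/servicename/" }
  }
}'

Wrapping the path in double quotes, i.e. searching for "/opt/servicename/", should avoid the error.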

Also, you posted the filebeat.inputs section of your filebeat.yml configuration file. Is that the only section that you changed? Is the rest of the filebeat.yml file the default? If not, could you paste your entire filebeat.yml file after masking any sensitive information?

Hi @shaunak, thanks for the response. I've posted the multiline options I made an update to. Also, please note that in Logstash I have some .conf files in the /etc/logstash/conf.d folder; I'll add those as well!
