[logstash.filters.jdbc.lookup] Parameter field not found in event {:lookup_id=>

Hi, I want to enrich some log data with a table in Oracle.

[2019-02-19T14:06:29,775][WARN ][logstash.filters.jdbc.lookup] Parameter field not found in event {:lookup_id=>"local_integration_descr", :invalid_parameters=>["integrationnorm"]}

Even when the integrationnorm field exists, the query is not executed and no values are returned from the lookup table.

This is my pipeline configuration file:
input {
    beats {
        port => 5044
        host => "0.0.0.0"
    }
}

filter {

    grok {
        match => { "message" => "(%{TIMESTAMP_ISO8601:ts}): Information: I-UNK-000-000: (?<tipus_descart>.*). Integration:(?<integrationnorm>.*); Manager:(?<manager>.*); Agent:(?<agent>.*);AlertGroup:(?<alertgroup>.*); AlertKey:(?<alertkey>.*); EMS:(?<ems>.*); Node:(?<node>.*); AlarmedElement:(?<alarmedelement>.*); Summary:(?<summary>.*)"}
    }

    date {
        match => [ "ts", "ISO8601" ]
        timezone => "Europe/Andorra" 
        remove_field => ["ts"] 
    }

      jdbc_static {
        loaders => [ 
          {
            id => "remote_integration_descr"
            query => "select INTEGRATIONNORM as integrationid, INTEGRATION_NAME as integrationname from NOPUTILITY_INTEGRATION ORDER BY INTEGRATIONNORM"
            local_table => "integration_descr"
          }
        ]
        local_db_objects => [ 
          {
            name => "integration_descr"
            index_columns => ["integrationid"]
            columns => [
              ["integrationid", "varchar(6)"],
              ["integrationname", "varchar(64)"]
            ]
          }
        ]
        local_lookups => [ 
          {
            id => "local_integration_descr"
            query => "select integrationname from integration_descr WHERE integrationid = :id"
            parameters => {id => "[integrationnorm]"}
            target => "integrationlookup"
          }
        ]
        add_field => { integration => "%{[integrationlookup][0][integrationname]}" }
        remove_field => ["integrationlookup"]

        jdbc_user => "ow_impact"
        jdbc_password => "qi0hmh1p"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_driver_library => "/opt/logstash/lib/ojdbc6.jar"
        jdbc_connection_string => "jdbc:oracle:thin://@(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-srv-netcool-pd.oracle.sta)(PORT = 1521)))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = SRV_NETCOOL_PD)))"

      }

}

output {
    elasticsearch {
        hosts => "192.168.80.91:9200"
        index => "aes_netcool-probe-discard-%{+YYYY-MM}"
    }
}
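The warning usually means the event has no [integrationnorm] field at the moment jdbc_static runs, typically because the grok pattern did not match that line. One way to avoid the warning (a sketch, not tested against this exact pipeline) is to guard the lookup with a conditional so it only runs when the field is present:

```
filter {
  # Only run the lookup when grok actually produced the field;
  # otherwise jdbc_static logs "Parameter field not found in event".
  if [integrationnorm] {
    jdbc_static {
      # ... same loaders / local_db_objects / local_lookups as above ...
    }
  }
}
```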

Thanks

I detected two mistakes:
First, I introduced an "if" statement with a regular expression to filter out lines that are not a discard in my example, which reduced the errors. Second, I changed the type of the field, and now it works fine.
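For anyone else hitting this, the filtering guard could look something like the sketch below. It assumes the discard lines always contain the I-UNK-000-000 code from the grok pattern above; drop {} simply discards events that do not match, so grok never fails and [integrationnorm] is always present for the lookup:

```
filter {
  # Drop lines that are not discard messages before grok runs.
  if [message] !~ /I-UNK-000-000/ {
    drop { }
  }
  # ... grok / date / jdbc_static as in the original pipeline ...
}
```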
Thanks
