Bug with logstash-filter-elasticsearch

I tried the example straight from the documentation and it seems to fail, as Elasticsearch appears to expect " around field names:
elasticsearch {
  hosts  => ["localhost"]
  query  => "statuscode:200"
  fields => ["commonid", "commonid"]
}
Basically I am implementing a lookup table.
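To illustrate the lookup-table idea with distinct field names (the destination field related_commonid below is just my own illustration, and the [source, destination] pairing is how I read the fields option):

elasticsearch {
  hosts  => ["localhost"]
  query  => "statuscode:200"
  # copy "commonid" from the matched document into "related_commonid"
  # on the event currently being processed
  fields => ["commonid", "related_commonid"]
}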

Logstash logs this error:
:message=>"Failed to query elasticsearch for previous event", :query=>"statuscode:200"
while Elasticsearch logs this:
RemoteTransportException[[Jude the Entropic Man][172.17.0.2:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"size":1,"query":{"query_string":{"query":"statuscode:200","lowercase_expanded_terms":true,"analyze_wildcard":false}},"sort":[{"@timestamp":{"order":"desc"}}]}]]; nested: SearchParseException[No mapping found for [@timestamp] in order to sort on];

When I try to single-quote the expression and add double quotes around the field names, it still fails :frowning:
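For completeness, this is the kind of variant I tried (single-quoted query string, double quotes around the field name); it fails in the same way:

elasticsearch {
  hosts  => ["localhost"]
  query  => '"statuscode":200'
  fields => ["commonid", "commonid"]
}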

Has nothing to do with double quotes.

Can you post an example? Make sure you format your post correctly with the code formatting button.

Thanks, so the issue is the .kibana index, which does not have @timestamp. After I sorted that out, the query works without the fields parameter, but it fails again when I add fields to copy the lookup result into one of the fields of the new event:

if [type] == "cache" {
  elasticsearch {
    hosts  => ["http://localhost:9200"]
    query  => "type:cache"
    fields => ["sessionid", "message"]
  }
}
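For reference, this reduced version without the fields option is the one that runs cleanly for me; only adding fields triggers the failure below:

if [type] == "cache" {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    query => "type:cache"
  }
}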

With fields in place, I get the following in the Logstash log. I am not sure what I am doing wrong, as I simplified things to make sure that sessionid exists on the retrieved document and the message field exists in the new event...

{:timestamp=>"2016-05-17T16:02:05.382000+0000", :message=>"Failed to query elasticsearch for previous event", :query=>"type:cache", :event=>#<LogStash::Event:0x2e30991a @metadata={}, @accessors=#<LogStash::Util::Accessors:0x789391a6 @store={"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, .... "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, @lut={"type"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "type"], "logregion"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "...., "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "logregion"], "loglevel"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "loglevel"], "client_requestid"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "client_requestid"], "message"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "message"], "[type]"=>[{"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, "input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, "type"]}>, @data={"@version"=>"1", "@timestamp"=>"2016-05-17T16:01:59.613Z", "type"=>"cache", "beat"=>{"hostname"=>"2794669d9139", "name"=>"2794669d9139"}, 
"input_type"=>"log", "count"=>1, "fields"=>nil, "source"=>"/dockershare/serviceB.log", "offset"=>36295, "host"=>"2794669d9139", "tags"=>["beats_input_codec_plain_applied"], "timestamp"=>"2016-05-03T11:53:15.761Z", "sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519b1", "client_sessionid"=>"fe666e8e-a44e-425b-9dbb-6bd0815519a1"}, @metadata_accessors=#<LogStash::Util::Accessors:0x2e9ad16d @store={}, @lut={}>, @cancelled=false>, :error=>#<NoMethodError: undefined method `start_with?' for nil:NilClass>, :level=>:warn}

Check your ES log, there may be more there.

I did; no errors. What I am questioning is the part where I map fields and it comes back as fields => nil. I tried running the same query against Elasticsearch manually and got the expected results, so I wonder if the fields parameter is the real issue.

I am just implementing a "cache" for some types of log events. My PoC depends on looking up sessionids in the ES cache for certain event types and copying the value to the new event.
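To make the intent concrete, the lookup I am after looks roughly like this (the event type someothertype and the target field cached_sessionid are placeholders of mine; the %{...} interpolation in query is how I understand the filter substitutes values from the current event):

if [type] == "someothertype" {
  elasticsearch {
    hosts  => ["http://localhost:9200"]
    # find the cached event that belongs to the same client session ...
    query  => "type:cache AND client_sessionid:%{[client_sessionid]}"
    # ... and copy its "sessionid" into "cached_sessionid" on the new event
    fields => ["sessionid", "cached_sessionid"]
  }
}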

OK, so it works with Logstash 2.2 but not with 2.3, so I have to downgrade. Not sure whether this community plugin will be actively supported to get this issue fixed :frowning: