NMAP codec - HTTP gives a 500 status when using the codec on an http input but 200 status without the codec

I've been following this tutorial:

Does anyone know why I might be seeing this in my logs and why the codec isn't working? (I get a 500 error on the HTTP connection with the codec, but a 200 without it.) I believe the following line points to the issue, but I don't know how to verify that I'm using the latest version of the codec.

[2017-03-24T16:20:17,614][INFO ][logstash.codecs.nmap     ] Using version 0.1.x codec plugin 'nmap'. This plugin isn't well supported by the community and likely has no maintainer.

Logstash starts up without issue and is ready for input on the configured port (8000) using the nmap codec. There is a warning in the debug output, though: "INFO logstash.codecs.nmap - Using version 0.1.x codec plugin 'nmap'. This plugin isn't well supported by the community and likely has no maintainer." I don't know whether that is relevant here.
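In case it helps anyone checking the same thing: the installed codec version can be listed and updated with the `logstash-plugin` tool that ships with Logstash (the path below assumes the Debian package layout, matching the `/usr/share/logstash` paths in the logs; run as root or with sudo):

```shell
# Show the installed nmap codec plugin and its version
/usr/share/logstash/bin/logstash-plugin list --verbose logstash-codec-nmap

# Update it to the latest published release
/usr/share/logstash/bin/logstash-plugin update logstash-codec-nmap
```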

The error I see in the debug output (I also saw it in the regular logs before enabling debugging) is this:

17:52:14.411 [Ruby-0-Thread-20: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:61] ERROR logstash.inputs.http - unable to process event {"request_method"=>"POST", "request_path"=>"/", "request_uri"=>"/", "http_version"=>"HTTP/1.1", "http_host"=>"localhost:8000", "http_user_agent"=>"curl/7.47.0", "http_accept"=>"*/*", "http_x_nmap_target"=>"example.net", "content_length"=>"19525", "content_type"=>"application/x-www-form-urlencoded", "http_expect"=>"100-continue"}. exception => java.lang.ArrayIndexOutOfBoundsException: -1

Because of the error above, I've stripped out most of the filter configuration and the elasticsearch output for testing. Removing those parts makes no difference, so I'm fairly sure the issue is related directly to 'codec => nmap': if I use something like 'codec => plain' instead, I get no error and instead see the expected (though unusable) output.

What I have configured is in /etc/logstash/conf.d/12-input-nmap.conf:

input {
  http {
    port => 8000
    codec => nmap
    tags => ["nmap"]
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

I run the configuration and send the Nmap XML output like this:

# nmap -A 192.168.1.0/24 -oX - | curl -v -H "x-nmap-target: local-subnet" http://192.168.10.20:8000 -d @-
* Rebuilt URL to: http://192.168.10.20:8000
* Hostname was NOT found in DNS cache
*   Trying 192.168.10.20...
* Connected to 192.168.10.20 (192.168.10.20) port 8000 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 192.168.10.20:8000
> Accept: */*
> x-nmap-target: local-subnet
> Content-Length: 19525
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
* Done waiting for 100-continue
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain
< Content-Length: 14
* HTTP error before end of send, stop sending
< 
* Closing connection 0

It seems the codec just doesn't want to hear what curl has to say.

Here is the debug output in its entirety in case there is any applicable/interesting information:

http://pastebin.com/5RZqLxdT

OTHER INFO:

# dpkg -l | grep logstash | awk '{ print $2"\t"$3 }'
logstash	1:5.2.2-1

# java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

# ruby -v
ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-linux]


One possible cause here is using -d instead of --data-binary. I actually updated the blog post to use --data-binary a couple of weeks ago; it originally used -d. Can you make that switch and see if it fixes anything?
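For reference, when posting a file with `@`, curl's `-d`/`--data` strips carriage returns and newlines from the data before sending, while `--data-binary` posts it verbatim (per the curl man page). Since the nmap codec has to parse the XML body, that stripping would plausibly break the parse. A minimal local illustration of what `-d` does to multi-line XML (no HTTP involved; the sample XML is made up):

```shell
# Simulate what `curl -d @-` effectively does to the request body:
# strip all CR and LF characters, collapsing the XML onto one line.
printf '<nmaprun>\n<host/>\n</nmaprun>\n' | tr -d '\r\n'
```

The corrected invocation from the question would then be:

    nmap -A 192.168.1.0/24 -oX - | curl -v -H "x-nmap-target: local-subnet" http://192.168.10.20:8000 --data-binary @-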