Sending ganglia metrics to logstash and displaying them on kibana

Hi All,

I was able to send some Ganglia metrics to Logstash. Here is my input configuration:
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
  udp {
    port => 8649
    codec => json_lines
  }
}
This configuration is working and I am able to visualize my Ganglia metrics on the awesome Kibana dashboard. But Logstash is unable to understand the metric messages. The message looks like this:
"message" => "\u0000\u0000\u0000\x86\u0000\u0000\u0000\u0010ip-172-31-37-235\u0000\u0000\u0000\fload_fifteen\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0004%.2f=L\xCC\xCD\u0000\u0000\u0000\x84\u0000\u0000\u0000\u0010ip-172-31-37-235\u0000\u0000\u0000\theartbeat\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0002%u\u0000\u0000V\xAC\xDFF\u0000\u0000\u0000\x84\u0000\u0000\u0000\u0010ip-172-31-37-235\u0000\u0000\u0000\theartbeat\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0002%u\u0000\u0000V\xAC\xDFF\u0000\u0000\u0000\x86\u0000\u0000\u0000\u0010ip-172-31-37-235\u0000\u0000\u0000\bmem_free\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u0004%.0fH\xEBK\u0000\u0000\u0000\u0000\x86\u0000\u0000\u0000\u0010ip-172-31-37-235\u0000\u0000\u0000"
I have attached a screenshot.
Now my question is: how can I configure Logstash to understand the messages sent by Ganglia?
Thanks for your help.

Have you tried using the Ganglia input plugin?
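For example, a minimal sketch, assuming gmond sends its metrics over UDP to port 8649 on the Logstash host (your lumberjack input can stay as it is):

input {
  ganglia {
    # defaults to listening on 0.0.0.0; must match gmond's udp_send_channel port
    port => 8649
  }
}

The ganglia input decodes the binary gmond packets itself, so it should not need an extra codec.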

Hi @Christian_Dahlqvist

Thanks for the quick response. Yes, I have tried the Ganglia input plugin. When I used the ganglia input as follows, nothing was displayed in Kibana.
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
  ganglia {
    port => 8649
    codec => json_lines
  }
}

When debugging input plugins and filters, it is generally recommended to send data to the stdout output with the rubydebug codec, as this allows you to quickly look at the events coming in and being processed. I would start by trying to receive messages on UDP using the ganglia plugin with the default settings (no json_lines codec) and see what the events look like. If you are not receiving anything, enable verbose mode to see if any errors are reported.

I have just used the ganglia plugin with the default settings. No events were displayed, and when I enabled verbose mode, I got this error:
{:timestamp=>"2016-01-31T20:44:47.899000+0000", :message=>"An error occurred. Closing connection", :client=>"41.205.24.27:34479", :exception=>#<NoMeth
odError: undefined method []' for nil:NilClass>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/event .rb:73:ininitialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json_lines-2.0.2/lib/logstash/codecs/json_lines.rb:52:in guard'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-json_lines-2.0.2/lib/logstash/codecs/json_lines.rb:38:indecode'", "/opt/logstash/vendor/
bundle/jruby/1.9/gems/logstash-codec-line-2.0.2/lib/logstash/codecs/line.rb:36:in decode'", "org/jruby/RubyArray.java:1613:ineach'", "/opt/logstash
/vendor/bundle/jruby/1.9/gems/logstash-codec-line-2.0.2/lib/logstash/codecs/line.rb:35:in decode'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logst ash-codec-json_lines-2.0.2/lib/logstash/codecs/json_lines.rb:37:indecode'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-2.0.4/lib
/logstash/inputs/tcp.rb:149:in handle_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-2.0.4/lib/logstash/inputs/tcp.rb:140:i nserver_connection_thread'"], :level=>:error}

Thanks for your help.

It seems to be complaining about the json_lines codec. Can you show the exact configuration you used when you got this error?

Start with a minimal configuration and build on that, e.g. something like this:

input {
  ganglia {
    port => 8649
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Here is my input configuration
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
  ganglia {
    port => 8649
  }
  tcp {
    port => 9000
    codec => json_lines
  }
}
and as output, I have something like this
output {
  stdout {
    codec => rubydebug
  }
}

Note: the tcp configuration is there because I was previously receiving some logs on port 9000.

I would recommend troubleshooting one input at a time. The error you saw appears to be caused by the tcp input and its json_lines codec, not by the ganglia plugin. Run the TCP plugin without the codec first to verify that the data arriving is actually in json_lines format.
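For example, something like this (a minimal sketch, assuming your logs still arrive on port 9000):

input {
  tcp {
    # default codec, so each line shows up as a plain message field
    port => 9000
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

If the message fields printed by rubydebug contain valid JSON, one object per line, you can add the json_lines codec back.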

It worked once I removed the tcp plugin.
Thanks very much @Christian_Dahlqvist

Hi @Christian_Dahlqvist

Thanks again for the help.
I now want to send Ganglia metrics to Logstash without starting the gmetad daemon.
I have given it a try as follows.
In gmond.conf:
cluster {
  name = "unspecified"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}

udp_send_channel {
  #mcast_join = 239.2.11.71
  host = address_of_logstash_server
  port = 8649
  ttl = 1
}

tcp_accept_channel {
  host = address_of_logstash_server
  port = 8649
}
I then restarted ganglia and stopped the gmetad daemon with the following command:
sudo service ganglia-monitor restart && sudo service gmetad stop && sudo service apache2 restart
But unfortunately, nothing was displayed in Logstash.
Where am I going wrong?

I have very limited experience with Ganglia, so I will unfortunately not be able to help you there.

Thanks anyway. Nevertheless, I managed to get it working.
The trick was to remove the host attribute from the tcp_accept_channel block, i.e.:
tcp_accept_channel {
  port = 8649
}
Hope this helps somebody in the future!!!

Hi @Christian_Dahlqvist

Sorry for the disturbance. In the last configuration, I had two machines: one running my ELK stack and another running Ganglia. gmond.conf was configured to send its metrics to the machine hosting ELK. By doing so, and with your help, I was able to send metrics to ELK.

Now I want to consolidate both services on the same machine, keeping the same Logstash configuration, i.e.:
input {
  ganglia {
    port => 8649
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
When I did this, I got the following exception:
:timestamp=>"2016-02-02T19:30:46.202000+0000", :message=>"ganglia udp listener died", :address=>"0.0.0.0:8649", :exception=>#<SocketError: bind: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:160:in bind'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-ganglia-2.0.4/lib/logstash/inputs/ganglia.rb:55:inudp_listener'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-ganglia-2.0.4/lib/logstash/inputs/ganglia.rb:36:in run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/pipeline.rb:180:ininputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.0.0-java/lib/logstash/pipeline.rb:174:in `start_input'"], :level=>:warn}

Please can you help me sort out what is going on?
Thanks.