Short_message nil value

Hi all,
I am trying to send system metrics using Metricbeat 6.3.0 to Logstash 6.3.0, but I am getting this error:
Trouble sending GELF event {:gelf_event=>{"short_message"=>nil

The issue seems to be that the short_message value is not generated/included.
I have tried to follow some advice I found scattered around the web, e.g. using a Logstash filter, but without success.

Could anyone please advise on a solution?
Thank you for your time.
Regards

Can you share your setup and your Metricbeat and Logstash config files? I'm not sure how GELF fits into the equation here.

Hi ruflin,
thanks for your reply.
I am testing a possible architecture for production with Beats, Logstash and Graylog.
Filebeat works fine, but Metricbeat seems to have an issue with assigning a value to short_message.
So the problem is that Logstash does not send the Metricbeat output to Graylog, I assume because short_message is missing.


metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  #reload.period: 10s

metricbeat.modules:
- module: system
  metricsets:
    - cpu             # CPU usage
    - filesystem      # File system usage for each mountpoint
    - fsstat          # File system summary metrics
    - load            # CPU load averages
    - memory          # Memory usage
    - network         # Network IO
    - process         # Per process metrics
    - process_summary # Process summary
    - uptime          # System Uptime
    #- core           # Per CPU core usage
    #- diskio         # Disk IO
    #- raid           # Raid
    #- socket         # Sockets and connection info (linux only)  
  enabled: true
  period: 60s
  processes: ['.*']
  cpu.metrics:  ["percentages"]  # The other available options are normalized_percentages and ticks.
  core.metrics: ["percentages"]  # The other available option is ticks.
  #process.include_cpu_ticks: false

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.101.12.109:5044"]
  index: metricbeat

And this is Logstash:

input {
    beats {
        port => "5044"
    }
}

# The filter part of this file is commented out to indicate that it is
# optional.

#filter {
#       if [index] == "metricbeat" {
#       mutate {
#       replace => {"short_message" => "metrics"}
#       add_field => { "message" => "%{message}" }
#               }
#       }
#}

output {
    gelf{
        host => "10.101.12.108"
        port => "12201"
#       short_message => "metricbeat"
        }
}

I have left in my commented-out filter tests; none of them made it work.
Thank you for your time.
Regards

I found this in the LS docs:

The GELF short message field name. If the field does not exist or is empty, the event message is taken instead.

Filebeat always has a message field, Metricbeat doesn't. So I assume if you have short_message defined in the output, it works?

But then every output, including Filebeat, would have "metrics" as the short_message. That is why I tried to filter the Metricbeat events by index in the Logstash configuration:

if [index] == "metricbeat"

and in the Metricbeat configuration:

index: metricbeat

Do you mean:

output {
    gelf{
        host => "10.101.12.108"
        port => "12201"
        short_message => "@message"
        }
}

Like so?
Thanks

You could also use the metadata for it: %{[@metadata][beat]}. We use this also for the index naming. See https://www.elastic.co/guide/en/elastic-stack-overview/6.3/get-started-elastic-stack.html#logstash-setup
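
For context, the linked guide uses that metadata for index naming roughly like this (the hosts value below is just a placeholder):

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
}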

Thank you ruflin,
I am fairly new to this, would you please post an example that would work with my set-up?
Meaning the actual output configuration that includes your suggestion.
Thanks

Can you post what you tried with the metadata and didn't work?

I am doing:

input {
    beats {
        port => "5044"
    }
}

# The filter part of this file is commented out to indicate that it is
# optional.

#filter {
#	if [index] == "metricbeat" {
#	mutate {
#	replace => {"short_message" => "metrics"}
#	add_field => { "message" => "%{message}" }
#		}
#	}
#}

output {
    gelf{
	host => "10.101.12.108"
	port => "12201"
	short_message => %{[@metadata][beat]}
	}
}

and I get:

[2018-07-20T08:41:11,754][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x4f861e25 sleep>"}
[2018-07-20T08:41:13,364][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-07-20T08:41:13,365][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-07-20T08:41:13,951][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>[]}
[2018-07-20T08:41:13,952][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/test.conf"}
[2018-07-20T08:41:13,952][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2018-07-20T08:41:13,953][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Reload/pipeline_id:main}
[2018-07-20T08:41:13,974][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Expected one of #, \", ', -, [, { at line 23, column 19 (byte 370) after output {\n    gelf{\n\thost => \"10.101.12.108\"\n\tport => \"12201\"\n\tshort_message => ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:49:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/reload.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}
[2018-07-20T08:41:16,756][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x4f861e25 sleep>"}

Thanks

As soon as I posted the above I realised I was missing the double quotes ""...
I will try to start metricbeat and see if it works.
Thank you

Same WARNING

[2018-07-20T08:47:57,707][WARN ][logstash.outputs.gelf    ] Trouble sending GELF event {:gelf_event=>{"short_message"=>nil, "full_message"=>"%{message}
:error=>#<ArgumentError: short_message is missing. Options version, short_message and host must be set.>}

and nothing gets pushed to Graylog.
Thanks

It sounds like the metadata is not set for some reason. Which version of Beats do you use?

metricbeat version 6.3.0 (386), libbeat 6.3.0 [a04cb664d5fbd4b1aab485d1766f3979c138fd38 built 2018-06-11 22:40:21 +0000 UTC]

{:gelf_event=>{"short_message"=>nil, "full_message"=>"%{message}

This tells me that you do not have a field called message in your original event, and no [@metadata][beat] either.

Add this to your output section in the LS config while debugging:

	stdout {
		codec => rubydebug {metadata => true}
	}

It will print out the LS Event so you can double check what you have before you mutate the event.
Post the output of one event here in triple backticks. Redact any sensitive data before you post it here.
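
In other words, while debugging, your output section would look roughly like this (gelf host and port as in your config):

output {
    gelf {
        host => "10.101.12.108"
        port => "12201"
    }
    stdout {
        codec => rubydebug { metadata => true }
    }
}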

Hi guyboertje,

{
        "system" => {
        "load" => {
                "1" => 0,
            "cores" => 8,
               "15" => 0.12,
                "5" => 0.06,
             "norm" => {
                 "1" => 0,
                "15" => 0.015,
                 "5" => 0.0075
            }
        }
    },
    "@timestamp" => 2018-07-24T07:29:00.224Z,
     "@metadata" => {
              "beat" => "metricbeat",
              "type" => "doc",
           "version" => "6.3.0",
        "ip_address" => "10.101.34.234"
    },
          "host" => {
        "name" => "my-server-name"
    },
          "beat" => {
        "hostname" => "my-host-name",
            "name" => "hostname",
         "version" => "6.3.0"
    },
      "@version" => "1",
     "metricset" => {
           "rtt" => 155,
        "module" => "system",
          "name" => "load"
    },
          "tags" => [
        [0] "beats_input_raw_event"
    ]
}

One more:

        "system" => {
        "process" => {
            "summary" => {
                 "running" => 0,
                   "total" => 171,
                 "stopped" => 0,
                    "idle" => 0,
                  "zombie" => 0,
                "sleeping" => 171,
                 "unknown" => 0
            }
        }
    },
    "@timestamp" => 2018-07-24T07:36:21.681Z,
     "@metadata" => {
              "beat" => "metricbeat",
              "type" => "doc",
           "version" => "6.3.0",
        "ip_address" => "10.101.34.234"
    },
          "host" => {
        "name" => "my-server-name"
    },
          "beat" => {
        "hostname" => "my-host-name",
            "name" => "hostname",
         "version" => "6.3.0"
    },
      "@version" => "1",
     "metricset" => {
              "rtt" => 40088,
           "module" => "system",
        "namespace" => "system.process.summary",
             "name" => "process_summary"
    },
          "tags" => [
        [0] "beats_input_raw_event"
    ]
}

Thanks

OK, so now add a field "message" in the beats input; we can use any value for now.

input {
    beats {
        port => "5044"
        add_field => { "[message]" => "dummy_message" }
    }
}

Test it with the stdout output as before - there should be a message field with "dummy_message".
Then change the value string to get the interpolated metadata:
add_field => { "[message]" => "%{[@metadata][beat]}" }
Test it with the stdout output as before - there should be a message field with "metricbeat"
Then switch to the gelf output and report back.
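
Putting those steps together, the beats input you end up with for the second test would be roughly this (same port as before; the value is the interpolated metadata):

input {
    beats {
        port => "5044"
        add_field => { "[message]" => "%{[@metadata][beat]}" }
    }
}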


That seems to have worked:

DUMMY

{
        "system" => {
        "load" => {
                "1" => 0.19,
            "cores" => 8,
               "15" => 0.21,
                "5" => 0.2,
             "norm" => {
                 "1" => 0.0238,
                "15" => 0.0263,
                 "5" => 0.025
            }
        }
    },
    "@timestamp" => 2018-07-24T13:23:37.321Z,
     "@metadata" => {
              "beat" => "metricbeat",
              "type" => "doc",
           "version" => "6.3.0",
        "ip_address" => "my_ip"
    },
          "host" => {
        "name" => "myname"
    },
          "beat" => {
        "hostname" => "my_hostname",
            "name" => "my_name",
         "version" => "6.3.0"
    },
      "@version" => "1",
     "metricset" => {
           "rtt" => 448,
        "module" => "system",
          "name" => "load"
    },
       "message" => "dummy_message",
          "tags" => [
        [0] "beats_input_raw_event"
    ]
}

METADATA:

{
        "system" => {
        "cpu" => {
              "total" => {
                "pct" => 0
            },
             "system" => {
                "pct" => 0
            },
              "cores" => 8,
            "softirq" => {
                "pct" => 0
            },
               "idle" => {
                "pct" => 0
            },
              "steal" => {
                "pct" => 0
            },
                "irq" => {
                "pct" => 0
            },
             "iowait" => {
                "pct" => 0
            },
               "user" => {
                "pct" => 0
            },
               "nice" => {
                "pct" => 0
            }
        }
    },
    "@timestamp" => 2018-07-24T13:31:28.385Z,
     "@metadata" => {
              "beat" => "metricbeat",
              "type" => "doc",
           "version" => "6.3.0",
        "ip_address" => "my_ip"
    },
          "host" => {
        "name" => "my_name"
    },
          "beat" => {
        "hostname" => "my_hostname",
            "name" => "my_name",
         "version" => "6.3.0"
    },
      "@version" => "1",
     "metricset" => {
           "rtt" => 451,
        "module" => "system",
          "name" => "cpu"
    },
       "message" => "metricbeat",
          "tags" => [
        [0] "beats_input_raw_event"
    ]
}

GELF

[2018-07-24T14:32:53,424][DEBUG][logstash.outputs.gelf    ] Sending GELF event {:event=>{"short_message"=>"metricbeat", "full_message"=>"metricbeat", "host"=>"{\"name\":\"my_host\"}", "_beat_version"=>"6.3.0", "_beat_name"=>"beat_name", "_beat_hostname"=>"my_hostname", "_system_process"=>{"cwd"=>"/srv/gLive/bin", "cmdline"=>"./Synchroniser -c ../conf/synchroniser.conf", "state"=>"sleeping", "name"=>"Synchroni", "username"=>"my_username", "pid"=>8231, "cpu"=>{"start_time"=>"2018-07-17T14:57:31.000Z", "total"=>{"pct"=>0.0148, "norm"=>{"pct"=>0.0019}, "value"=>10807020.0}}, "fd"=>{"limit"=>{"soft"=>1024, "hard"=>1024}, "open"=>10}, "memory"=>{"share"=>9981952, "rss"=>{"pct"=>0.0123, "bytes"=>419389440}, "size"=>510283776}, "ppid"=>1, "pgid"=>8231}, "_tags"=>"beats_input_raw_event", "_metricset_rtt"=>103098, "_metricset_name"=>"process", "_metricset_module"=>"system", "level"=>6}}

Two considerations:

  1. Won't this append "metricbeat" to all my Beats?
  2. Is the missing value (short_message) to be considered a bug?

Thank you.

Now that you have a working solution to the GELF output problem, you can use other filters to craft the desired message value. You can experiment with creating (in LS filters) a short_message that differs from the longer message value.

Not really; Logstash has a convention of having a message field, and Metricbeat does not know anything about GELF.
When you use the GELF output it requires a message field, either in the event or in the config. If you supply it in the config then it can be dynamic (it interprets the "%{}"); however, short_message in the config is not dynamic.

So:

output {
  gelf {
    host => "10.101.12.108"
    port => "12201"
    short_message => "%{[@metadata][beat]}" # fails
    message => "%{[@metadata][beat]}"       # works
  }
}

But this works too and then you only need the host and port in the gelf output:

filter {
  mutate {
    replace => {
      "[short_message]" => "%{[host][name]}-%{[metricset][namespace]}"
      "[message]" => "%{[@metadata][beat]} %{[host][name]} %{[metricset][module]} %{[metricset][name]}"
    }
  }
}

Thank you for your help.
I am trying as advised with filters and other settings; please bear with me.

If you use:

output {
    gelf{
       host => "10.101.12.108"
       port => "12201"
       message => "%{[@metadata][beat]}"
       }
}

You get:

[2018-07-25T10:29:48,516][ERROR][logstash.outputs.gelf    ] Unknown setting 'message' for gelf

Whereas if you use:

output {
    gelf{
       host => "10.101.12.108"
       port => "12201"
       short_message => "%{[@metadata][beat]}"
       }
}

You don't get any configuration ERROR but you get the usual:

Trouble sending GELF event {:gelf_event=>{"short_message"=>nil, "full_message"=>"%{message}"

Also, with filters you get some odd behaviours, which I will post here just for completeness, e.g. if you use:

filter {
  mutate {
    replace => {
      "[short_message]" => "%{[host][name]}-%{[metricset][namespace]}"
      "[message]" => "%{[@metadata][beat]} %{[host][name]} %{[metricset][module]} %{[metricset][name]}"
    }
  }
}

then you lose the message content from Filebeat (Metricbeat will have the metrics, but no preview/short message annotation in the web GUI), which is to say there is no log content in Elasticsearch to work with.
As I said, I don't want to make this a long post.
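
For what it's worth, the next thing I plan to try is wrapping the mutate in a conditional on the beat metadata, so that it only touches Metricbeat events and leaves Filebeat's message alone. An untested sketch (using [metricset][module] and [metricset][name], since those appear in every event above, unlike [metricset][namespace]):

filter {
  if [@metadata][beat] == "metricbeat" {
    mutate {
      replace => {
        "[short_message]" => "%{[host][name]}-%{[metricset][module]}-%{[metricset][name]}"
      }
    }
  }
}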

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.