Logstash not sending events to any output


(Peter) #1

Hello,

I am importing logs via Filebeat 6.5.0 into Logstash. In debug mode I can see the data being received by Logstash, but it is not sent to any output. I have tried the stdout output and the elasticsearch output but did not get any results.

The config is very simple:

input {
  beats {
    # The port to listen on for filebeat connections.
    port => 5044
  }
}
output {
  if "jetty" in [service] {
    stdout {
      codec => rubydebug
    }
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

Any idea how to debug this and how to make it work? Thanks in advance.


(Yashodhara) #2

Same issue here: the Logstash 6.5 output does not seem to work.


(Christian Dahlqvist) #3

Can you please move the stdout plugin outside the conditional and show us what an event looks like? Do you really have a field named service that contains the value jetty?
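
For reference, the suggested debugging layout would look something like this (a sketch based on the config posted above):

```
output {
  # Unconditional stdout: prints every event so you can inspect the
  # actual field structure (e.g. whether the field is [service]
  # or nested under something like [fields][service]).
  stdout {
    codec => rubydebug
  }
  if "jetty" in [service] {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```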


(Peter) #4

This is coming from filebeat.yml:

  fields:
    service: jetty
    "@environment": RCC-RCDEV

The file output plugin seems to work. Only the elasticsearch output does not seem to get the data.


(Christian Dahlqvist) #5

If you had put the stdout outside the conditional as I recommended earlier, I suspect you would have seen that these additional fields come through as [fields][service] and not [service].
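
There are two ways to line things up (sketches only, reusing the field names from the posts above). Either make the conditional reference the nested path:

```
output {
  if "jetty" in [fields][service] {
    # ... outputs as before ...
  }
}
```

or tell Filebeat to place the custom fields at the event root, so [service] exists as originally assumed:

```
  fields:
    service: jetty
    "@environment": RCC-RCDEV
  fields_under_root: true
```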


(Peter) #6

Some errors appeared in the Logstash logs:

[2018-11-21T10:24:37,189][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.5.0-2018.06.23", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x7c6df811>], :response=>{"index"=>{"_index"=>"filebeat-6.5.0-2018.06.23", "_type"=>"doc", "_id"=>"iTjfNmcB1YfWEZjRg40W", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [service] tried to parse field [service] as object, but found a concrete value"}}}}
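
The error says the index template maps [service] as an object, while the Filebeat-supplied value is a plain string, so indexing fails. One way around the conflict (a sketch; the target field name service_name is only illustrative) is to rename the field in a filter before the output:

```
filter {
  mutate {
    # [service] is mapped as an object in the index template, but the
    # value set in filebeat.yml is a concrete string; move it out of
    # the way so the mapping no longer conflicts.
    rename => { "service" => "service_name" }
  }
}
```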

(Peter) #7

I have renamed the field in Filebeat so that it does not use a field which is already mapped in the template. The stdout output works fine, but the elasticsearch output is still not working. Elasticsearch is up and running, since other beats like Metricbeat work fine with it. I have no idea why some of the parsed output goes into Elasticsearch and some does not, since it all comes from the same servers, and in stdout the events look like valid JSON with no errors. For example:

{
         "services" => "jetty",
            "offset" => 45560500,
           "message" => "[GC (Allocation Failure)  1010314K->688618K(1039872K), 0.0265723 secs]",
              "tags" => [
     [0] "jetty",
     [1] "beats_input_codec_plain_applied"
 ],
            "input" => {
     "type" => "log"
 },
 "current_heap_size" => "1039872",
        "@timestamp" => 2018-11-22T02:12:48.907Z,
             "beat" => {
         "name" => "vps66968.redcapcloud.com",
      "version" => "6.5.0",
     "hostname" => "vps66968.redcapcloud.com"
 },
        "used_start" => "1010314",
     "used_after_gc" => "688618",
           "gc_time" => "0.0265723",
        "prospector" => {
     "type" => "log"
 },
      "@environment" => "RCC-RCDEV",
           "source" => "/home/jetty/logs/2018-11-21-16-28-jetty.log",
              "meta" => {
     "cloud" => {
              "machine_type" => "vps-ovhssd-3",
         "availability_zone" => "nova",
               "instance_id" => "i-00147482",
             "instance_name" => "vps66968",
                  "provider" => "openstack"
     }
 },
          "@version" => "1",
            "system" => {
     "jetty" => {
          "method" => "GC",
         "severity" => "FINEST",
             "data" => "[GC (Allocation Failure)  1010314K->688618K(1039872K), 0.0265723 secs]"
     }
 }
 }

This is a working event which got into Elasticsearch, and:

{
     "services" => "jetty",
       "offset" => 45560358,
      "message" => "Nov 21, 2018 9:12:47 PM com.candorgrc.core.webservice.rest.exceptionmapper.ExceptionsMapper toResponse\nSEVERE: \njava.lang.ClassCastException\n",
         "tags" => [
     [0] "jetty",
     [1] "beats_input_codec_plain_applied"
 ],
        "input" => {
     "type" => "log"
 },
   "@timestamp" => 2018-11-21T21:12:47.000Z,
         "beat" => {
         "name" => "vps66968.redcapcloud.com",
      "version" => "6.5.0",
     "hostname" => "vps66968.redcapcloud.com"
 },
          "log" => {
     "flags" => [
         [0] "multiline"
     ]
 },
   "prospector" => {
     "type" => "log"
 },
 "@environment" => "RCC-RCDEV",
       "source" => "/home/jetty/logs/2018-11-21-16-28-jetty.log",
         "meta" => {
     "cloud" => {
              "machine_type" => "vps-ovhssd-3",
         "availability_zone" => "nova",
               "instance_id" => "i-00147482",
             "instance_name" => "vps66968",
                  "provider" => "openstack"
     }
 },
     "@version" => "1",
       "system" => {
     "jetty" => {
            "class" => "com.candorgrc.core.webservice.rest.exceptionmapper.ExceptionsMapper",
           "method" => "toResponse",
         "severity" => "SEVERE",
             "data" => "\njava.lang.ClassCastException\n"
    }
 }
 }

This is a failed event (it did not get into Elasticsearch, for a reason I cannot find).

Any idea what the difference could be between these two events from the same file and the same Filebeat, parsed by the same Logstash?


(Peter) #8

I have a rule which takes the date from the log files and replaces the @timestamp field with that value; however, that value is not recognized as a timestamp, and that is the reason those logs are not showing up in Elasticsearch. Something might have changed since 6.4, because it works on that version.
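
For reference: if the replacement is done with a mutate, @timestamp ends up as a plain string rather than a timestamp object. Using the date filter instead produces a proper timestamp (a sketch; the log_timestamp field name is a hypothetical placeholder for wherever the parsed date is stored, and the pattern is inferred from the "Nov 21, 2018 9:12:47 PM" prefix in the failing event):

```
filter {
  date {
    # Joda-style pattern for "Nov 21, 2018 9:12:47 PM"
    match  => [ "log_timestamp", "MMM d, yyyy h:mm:ss a" ]
    target => "@timestamp"
    # month abbreviations are English, so pin the locale
    locale => "en"
  }
}
```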