Messages from gelf input not ending up in ES

Hi,

I was hoping to use the gelf input plugin to receive messages and put them into ES.
This is my config:

input {
  gelf {
    port => 12201
    type => gelf
    host => "0.0.0.0"
  }
}

...

output {
    file {
        path => "/home/myuser/gelfoutput/gelf_output.txt"
    }
}

output {
  elasticsearch {
    hosts => ["https://mycloud:9243"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "xxxx"
    password => "xxxx"
    document_id => "%{[@metadata][fingerprint]}"
  }
}

Strangely, the messages are successfully being written to my file output, but not to ES. I don't see any errors in the Logstash log files.

Any ideas?

Do you have a fingerprint metadata field in all your documents? Is nothing being written to the current (default) logstash-* index?
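One quick way to verify that is a temporary stdout output with the rubydebug codec's metadata option enabled, since @metadata fields are hidden by default (just a sketch; remove it again once you have checked):

output {
  stdout {
    # rubydebug hides @metadata by default; enable it to see [@metadata][fingerprint]
    codec => rubydebug { metadata => true }
  }
}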

I do:

filter {
  fingerprint {
    source => ["@timestamp","message"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "xxx"
  }
}

There is nothing from the gelf input written to that index, at least nothing that I could see in ES. I have another Filebeat input, and those messages are showing up in ES.

If Logstash cannot write to Elasticsearch, it will stop and retry until successful. If you are constantly seeing new gelf data being written to the file, it should therefore also be going into Elasticsearch. How do you identify data coming in through the gelf plugin?

That's what I'd have expected, but it's simply not showing up.

I'm just looking at the Discover page in Kibana and while all other messages are shown, the ones from gelf are not.

Try adding a tag or field in the gelf input and then filter on it in the Discover screen. Discover only shows a sample, so if the amount of data coming in via the gelf plugin is small, it might easily get missed.
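A minimal sketch of that, using the tags and add_field options that every input supports (the names gelf_input and received_via are just examples):

input {
  gelf {
    port => 12201
    host => "0.0.0.0"
    type => gelf
    # marker tag/field so gelf events are easy to find in Discover
    tags => ["gelf_input"]
    add_field => { "received_via" => "gelf" }
  }
}

In Discover you can then search for tags:gelf_input.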

We're just getting started with logstash, so our input volume is very very low. Not a single message is being shown for the last 15 Minutes and I've sent multiple gelf messages 1 minute ago.
When I widen the windows to Today I can see all other messages from filebeats.

Then I would recommend enabling debug logging in Logstash to see if there is any error reported.
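For reference, a sketch of how to enable it (debug output is very verbose, so turn it back off afterwards):

# logstash.yml
log.level: debug

or start Logstash with --log.level debug on the command line.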

Good idea!

Here's what I see when I send a gelf message:

[2018-11-12T14:45:43,131][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-11-12T14:45:43,131][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-11-12T14:45:43,903][DEBUG][logstash.pipeline        ] filter received {"event"=>{"source_host"=>"10.122.64.20", "level"=>6, "version"=>"1.1", "host"=>"hostname", "@timestamp"=>2018-11-12T13:45:43.874Z, "message"=>"hello gelf", "@version"=>"1", "type"=>"gelf"}}
[2018-11-12T14:45:43,904][DEBUG][logstash.filters.grok    ] Running grok filter {:event=>#<LogStash::Event:0x6c987423>}
[2018-11-12T14:45:43,905][DEBUG][logstash.filters.grok    ] Event now:  {:event=>#<LogStash::Event:0x6c987423>}
[2018-11-12T14:45:43,906][DEBUG][logstash.pipeline        ] output received {"event"=>{"source_host"=>"10.122.64.20", "level"=>6, "version"=>"1.1", "host"=>"hostname", "tags"=>["_grokparsefailure"], "@timestamp"=>2018-11-12T13:45:43.874Z, "message"=>"hello gelf", "@version"=>"1", "type"=>"gelf"}}
[2018-11-12T14:45:43,967][DEBUG][logstash.outputs.file    ] File, writing event to file. {:filename=>"/home/iniuser/gelfoutput/gelf_output.txt"}
[2018-11-12T14:45:43,968][DEBUG][logstash.outputs.file    ] Starting stale files cleanup cycle {:files=>{"/home/iniuser/gelfoutput/gelf_output.txt"=>#<IOWriter:0x6707d533 @active=true, @io=#<File:/home/iniuser/gelfoutput/gelf_output.txt>>}}
[2018-11-12T14:45:43,968][DEBUG][logstash.outputs.file    ] 0 stale files found {:inactive_files=>{}}
[2018-11-12T14:45:44,114][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2018-11-12T14:45:44,115][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/home/iniuser/gelfoutput/gelf_output.txt", :fd=>#<IOWriter:0x6707d533 @active=false, @io=#<File:/home/iniuser/gelfoutput/gelf_output.txt>>}
[2018-11-12T14:45:46,115][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2018-11-12T14:45:46,115][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/home/iniuser/gelfoutput/gelf_output.txt", :fd=>#<IOWriter:0x6707d533 @active=false, @io=#<File:/home/iniuser/gelfoutput/gelf_output.txt>>}
[2018-11-12T14:45:46,609][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x64ac7842 sleep>"}
[2018-11-12T14:45:48,116][DEBUG][logstash.outputs.file    ] Starting flush cycle
[2018-11-12T14:45:48,116][DEBUG][logstash.outputs.file    ] Flushing file {:path=>"/home/iniuser/gelfoutput/gelf_output.txt", :fd=>#<IOWriter:0x6707d533 @active=false, @io=#<File:/home/iniuser/gelfoutput/gelf_output.txt>>}
[2018-11-12T14:45:48,134][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-11-12T14:45:48,134][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

I don't see anything about ES in there, but I'm not sure why that would be. No errors, though.

You do not seem to have shown your complete config. Might there be an issue in the parts we have not seen, e.g. conditionals?
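For example, an output wrapped in a conditional like this (purely hypothetical, not taken from your config) would silently keep gelf events out of Elasticsearch without logging any error:

output {
  # hypothetical example: only events with type "beats" reach Elasticsearch,
  # gelf events (type => gelf) are skipped by this output without any warning
  if [type] == "beats" {
    elasticsearch {
      hosts => ["https://mycloud:9243"]
      user => "xxxx"
      password => "xxxx"
    }
  }
}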

Thank you for bearing with me :slight_smile:

My complete config. Config file 1:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

filter {
  grok {
    match => { "message" => [
      "\A%{TIMESTAMP_ISO8601}\s-\s[a-zA-Z0-9]+\s-\s%{LOGLEVEL:log-level}\s-\sBestelltnummer:\s(?<bestellnummer>[0-9]{9}),\sILN:\s(?<iln>[0-9]{13}),\sKundenNr\. (?<kundennr>[0-9]{6}),\s(?<stueckzahl>[0-9]{1,3})\sST,\sArtNr.:\s(?<artikelnr>[0-9]{13,14})",
      "\A%{TIMESTAMP_ISO8601}\s-\s[a-zA-Z0-9]+\s-\s%{LOGLEVEL:log-level}\s-\s%{DATA}:\s(?<ris-docid>%{GREEDYDATA})",
      "\A%{TIMESTAMP_ISO8601}\s-\s[a-zA-Z0-9]+\s-\s%{LOGLEVEL:log-level}\s-\s"
    ] }
  }
  mutate {
    convert => { "stueckzahl" => "integer" }
  }
  #fingerprint {
  #    source => ["@timestamp","message"]
  #    target => "[@metadata][fingerprint]"
  #    method => "SHA256"
  #    key => "xxx"
  #}
  #ruby {
  #    code => "event.set('@metadata[tsprefix]', event.get('@timestamp').to_i.to_s(16))"
  #}
}
filter {
  fingerprint {
    source => ["@timestamp","message"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "xxxxx"
  }
}

output {
  elasticsearch {
    hosts => ["https://mycloud:9243"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "xxx"
    password => "xxx"
    document_id => "%{[@metadata][fingerprint]}"
  }
}

In a second config file:

input {
  gelf {
    port => 12201
    type => gelf
    host => "0.0.0.0"
  }
}

output {
    file {
        path => "/home/myuser/gelfoutput/gelf_output.txt"
    }
}

Any ideas? I'm kinda lost here.

Okay... I've decided to try putting the messages from gelf into a different index. It turns out this works, which probably means ES is rejecting the messages because of some field mismatch.
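For reference, a sketch of what that separate-index output might look like (the index name is just an example). A brand-new index gets a fresh mapping, which would explain why it works if a gelf field's type conflicts with the mapping of the existing index:

output {
  if [type] == "gelf" {
    elasticsearch {
      hosts => ["https://mycloud:9243"]
      user => "xxxx"
      password => "xxxx"
      # example index name; gelf events get their own mapping here
      index => "gelf-%{+YYYY.MM.dd}"
      document_id => "%{[@metadata][fingerprint]}"
    }
  }
}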

How do I find out what exactly is causing this, without getting any errors? The ES logs (in the cloud interface) show nothing other than INFO messages about snapshots.
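One way to see exactly why documents are rejected is to enable the dead letter queue, so events the elasticsearch output cannot index (e.g. mapping conflicts returning a 400) are written to disk together with the rejection reason instead of being dropped. Mapping rejections should normally also appear in the Logstash log as WARN entries containing "Could not index event to Elasticsearch", so grepping for that string is worth a try as well. A sketch (paths are assumptions for a package install; adjust to your setup):

# logstash.yml
dead_letter_queue.enable: true

A small pipeline can then read the queue back and print the reason:

input {
  dead_letter_queue {
    # default location is <path.data>/dead_letter_queue; /var/lib/logstash is an assumption
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
  }
}

output {
  # the rejection reason is recorded under [@metadata][dead_letter_queue]
  stdout { codec => rubydebug { metadata => true } }
}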
