Moving from Logstash to Filebeat

We're moving from Logstash to Filebeat as part of our transition to containers.

Our old Logstash configuration was:

input {
  file {
    path => "/servicename/_logs/servicename.log"
    codec => multiline {
      pattern => "(^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)"
      what => "previous"
    }
  }
}

filter {
  if "multiline" not in [tags] {
    json {
      source => "message"
      remove_field => ["[request][body]","[response][body][response][items]"]
    }
  }
  else {
    grok {
      pattern_definitions => { "APPJSON" => "{.*}" }
      match => { "message" => "%{APPJSON:appjson} %{GREEDYDATA:stack_trace}" }
      remove_field => ["message"]
    }
    json {
      source => "appjson"
      remove_field => ["appjson"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-servicename-%{+YYYY.MM.dd}"
    document_type => "logs"
  }
}

To overcome the JSON issue we used the decode_json_fields processor:

  - decode_json_fields:
      fields: ["log"]
      target: ""

which worked fine.
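For context, that processor fragment sits under the top-level `processors` section of filebeat.yml; a minimal sketch of the placement (everything beyond the fragment above is an assumption):

```yaml
processors:
  - decode_json_fields:
      fields: ["log"]  # field holding the raw JSON string
      target: ""       # decode the keys into the event root
```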

We now have a problem with Java exceptions: they are not digested well by Elasticsearch, many events fail to index, and we get these errors in Filebeat:

2019-11-12T07:57:29.815Z ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure

2019-11-12T07:57:29.826Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.3.2

2019-11-12T07:57:29.863Z INFO template/load.go:73 Template already exists and will not be overwritten.

2019-11-12T07:57:30.552Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s

and these weird delays:

We need a way to handle these like we did before. We've tried multiline processing:

          pattern: (^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)
          negate: false
          match: after
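For reference, Filebeat applies multiline at the input level, before any processors such as decode_json_fields run; a sketch of where the options above would sit, with the input type and path assumed from the old Logstash config:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /servicename/_logs/servicename.log
    # Same continuation pattern as the old Logstash multiline codec;
    # what => "previous" corresponds to negate: false + match: after
    multiline.pattern: '(^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)'
    multiline.negate: false
    multiline.match: after
```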

but without success. (Is there a need for an ingest node here? If so, what would its configuration be?)

Any help will be much appreciated!

Hi @yavidor,

You cannot do multiline after using decode_json_fields. Perhaps you can try using the json options in the log input instead? See the log input documentation for more details.
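A minimal sketch of what those json options could look like on the log input, combined with multiline (the path and the `message` field name are assumptions taken from the question; in Filebeat, when both are set, JSON decoding runs first and the multiline settings are then applied to the contents of `json.message_key`):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /servicename/_logs/servicename.log
    # Decode each line as JSON before line filtering
    json.keys_under_root: true
    json.add_error_key: true
    # Multiline is applied to this decoded field
    json.message_key: message
    multiline.pattern: '(^[a-zA-Z.]+(?:Error|Exception).+)|(^\s+at .+)|(^\s+... \d+ more)|(^\t+)|(^\s*Caused by:.+)'
    multiline.negate: false
    multiline.match: after
```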

Best regards