Fingerprint for json does not get resolved

The fingerprint filter is not working for my json inputs.

input {
file {
        path => "/shared/logs/logi2/stats.*"
        start_position => "beginning"
        sincedb_path => "/shared/logs/.sincedb"
        type => "logi2-stats"
        codec => json
      }
}

filter {
grok {
else if [type] =~ "logi2-messages" {
            mutate {
               add_field => { "[@metadata][subtype]" => "all" }
            }
            mutate {
              lowercase => [ "[@metadata][subtype]" ]
            }
      } 
      }
      }
output {
if [type] == "logi-messages" or [type] == "logi-stats" or  [type] == "logi2-messages" or [type] == "logi2-stats"  {
        elastic {
          hosts => ["******"]
          template_overwrite => true
          ssl => true
          index => "%{$example}-%{+YYYY.MM.dd}"
          document_id => "%{fingerprint}"
        }
        }
        }
       
cat duplicate.conf
    filter {
      fingerprint {
        method => "SHA1"
        target => "fingerprint"
      }
    }

The value it prints is shown in the attached screenshot.

Hi @ranjini,

Is your fingerprint filter in a different file from the rest of your configuration? If so, what happens when you add it to the filter section above the output section?
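As a side note, the fingerprint filter hashes the event's message field by default. Events decoded by a json codec may not have a message field at all, which would leave %{fingerprint} unresolved in the output. A minimal sketch of an explicit configuration (the source field names here are assumptions; use fields that actually exist on your events):

```
filter {
  fingerprint {
    # Default source is "message"; json-decoded events may not have it.
    # List the fields that uniquely identify an event instead
    # (these names are examples, not taken from your config).
    source => ["[labels][EventClassID]", "msg"]
    concatenate_sources => true
    method => "SHA1"
    target => "fingerprint"
  }
}
```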

All my configuration is split across different files.
I start Logstash this way, so it should be fine, shouldn't it?

 /usr/share/logstash/bin/logstash --path.settings=/usr/share/logstash/config -f /usr/share/logstash/conf.d/ -w 2

It depends on the names of your other files and what configuration they contain. You need to share the configuration of all the files in that path, in the order they appear when you run ls -lh.

Logstash will merge the files in that order, and it doesn't seem that you shared your complete configuration.
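As a quick illustration (the demo directory and file names below are made up): a shell glob expands in lexicographic order, which matches how Logstash loads files when -f points at a directory, so you can preview the merge order with ls:

```shell
# Create dummy files mirroring the names in this thread (demo paths only)
mkdir -p /tmp/confdemo
touch /tmp/confdemo/duplicate.conf /tmp/confdemo/filter.conf \
      /tmp/confdemo/input.conf /tmp/confdemo/output.conf

# Glob expansion is lexicographic: duplicate, filter, input, output —
# the order in which the pieces end up concatenated into one pipeline
ls /tmp/confdemo/*.conf
```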


These are the files I have


cat input.conf

input {
file {
        path => "/shared/logs/logi2/stats.*"
        start_position => "beginning"
        sincedb_path => "/shared/logs/.sincedb"
        type => "logi2-stats"
        codec => json
      }
}
}

cat filter.conf

filter {
grok {
if [type] =~ "logi2-messages" {
            mutate {
               add_field => { "[@metadata][subtype]" => "all" }
            }
            mutate {
              lowercase => [ "[@metadata][subtype]" ]
            }
      } 
      }
      }


cat duplicate.conf
    filter {
      fingerprint {
        method => "SHA1"
        target => "fingerprint"
      }
    }
    
cat output.conf

output {
if [type] == "logi-messages" or [type] == "logi-stats" or  [type] == "logi2-messages" or [type] == "logi2-stats"  {
        elastic {
          hosts => ["{{.Values.esearch_seeds}}:{{.Values.logstash_service.properties.port}}"]
          template_overwrite => true
          ssl => true
          index => "%{type}-%{[@metadata][subtype]}-%{+YYYY.MM.dd}"
          document_id => "%{fingerprint}"
        }
        }
        }

bash-4.4$ cd conf.d/
bash-4.4$ ls -ltra
total 44
-rwxr-xr-x 1 1242 Nov 9 11:39 output.conf
-rwxr-xr-x 1 17161 Nov 9 11:39 input.conf
-rwxr-xr-x 1 80 Nov 9 11:39 duplicate.conf
-rwxr-xr-x 1 15882 Nov 9 11:39 filter.conf

Are you sure that Logstash is running those files? There are a couple of errors in your configuration that would prevent Logstash from running.

For example:

You have an if inside a grok filter; this is not supported, and Logstash will not load this configuration file.
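For reference, the conditional has to wrap the filter blocks rather than sit inside one; restructured, that fragment would look something like this:

```
filter {
  if [type] =~ "logi2-messages" {
    mutate {
      add_field => { "[@metadata][subtype]" => "all" }
    }
    mutate {
      lowercase => [ "[@metadata][subtype]" ]
    }
  }
}
```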

Also here, you have an extra closing bracket, which would also make Logstash not run.

It is not clear whether this was a copy-paste typo or the configuration is really wrong and Logstash is not running at all.

Can you concatenate your files into just one file and share this file?

Just run cat *.conf > /tmp/pipeline.conf, then copy the contents of pipeline.conf and share them.

bash-4.4$ cat /tmp/pipeline.conf 

filter {
  if [type] =~ "boot" {
    # The boot log does not follow any specific format
  } else if [type] =~ "default" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\|%{DATA:class}\|%{DATA:tenant}\|%{IP:ip}\|%{GREEDYDATA:msg}"}
    }
    mutate{
      lowercase=>["application"]
    }
  } else if [type] =~ "eval" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:ip}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "zoo" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "bridge" {
    grok {
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:thread}\] %{GREEDYDATA:msg}"}
    }        
  } else if [type] =~ "kafka" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "iam" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "cloud" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "mock" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "service" {
    grok {
      match => {"message" => "%{TIMESTAMP_ISO8601}\|%{LOGLEVEL:level}\|%{DATA:application}\|%{DATA:env}\|%{DATA:dc}\|%{HOSTNAME:host}\/%{IP:ip}\|%{DATA:class}\|%{DATA:tenant}\|%{DATA:transaction}\|%{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "web" {
    grok {
      match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
      overwrite => [ "message" ]
    }

    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }

    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "nginx-geoip" ]
    }

    date {
      match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }

    useragent {
      source => "agent"
    }
  } else if [type] =~ "enginx" {
    grok {
      match => [ "message" , "(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: (%{IPORHOST:server}|_))(?:, request: %{QS:req})?(?:, upstream: \"%{URI:upstream}\")?(?:, host: %{QS:host})?(?:, referrer: \"%{URI:referrer}\")?"]
      overwrite => [ "message" ]
    }

    geoip {
      source => "client"
      target => "geoip"
      add_tag => [ "nginx-geoip" ]
    }

    date {
      match => [ "timestamp" , "YYYY/MM/dd HH:mm:ss" ]
      remove_field => [ "timestamp" ]
    }
  } else if [type] =~ "json" {
    mutate {
      remove_field => [ "[request][headers][authorization]" ]
    }
  } else if [type] =~ "logi-stats" {
        grok {
          match => {"message" => "^\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{LOGLEVEL:level}.*\]\[%{DATA:containertype}]\[%{DATA:containername}]\[%{DATA:stattype}] %{GREEDYDATA:msg}"}
        }
        date {
          locale => en
          match => [ "timestamp", "ISO8601" ]
          target => "@timestamp"
          remove_field => [ "timestamp" ]
        }
        kv{}
        mutate {
        convert => {
            "process.bad.count" => "integer"
            "process.bad.totalduration" => "integer"
            "process.bad.maxduration" => "integer"
            "process.bad.minduration" => "integer"
            "process.discarded.count" => "integer"
            "process.discarded.maxduration" => "integer"
            "process.discarded.minduration" => "integer"
            "process.discarded.totalduration" => "integer"
            "process.good.count" => "integer"
            "process.good.maxduration" => "integer"
            "process.good.minduration" => "integer"
            "process.good.totalduration" => "integer"
            "read.bad.count" => "integer"
            "read.bad.totalduration" => "integer"
            "read.bad.maxduration" => "integer"
            "read.bad.minduration" => "integer"
            "read.discarded.count" => "integer"
            "read.discarded.totalduration" => "integer"
            "read.discarded.maxduration" => "integer"
            "read.discarded.minduration" => "integer"
            "read.good.count" => "integer"
            "read.good.maxduration" => "integer"
            "read.good.minduration" => "integer"
            "read.good.totalduration" => "integer"
            "transaction.bad.count" => "integer"
            "transaction.bad.totalduration" => "integer"
            "transaction.bad.maxduration" => "integer"
            "transaction.bad.minduration" => "integer"
            "transaction.discarded.count" => "integer"
            "transaction.discarded.maxduration" => "integer"
            "transaction.discarded.minduration" => "integer"
            "transaction.discarded.totalduration" => "integer"
            "transaction.good.count" => "integer"
            "transaction.good.maxduration" => "integer"
            "transaction.good.minduration" => "integer"
            "transaction.good.totalduration" => "integer"
            "write.bad.count" => "integer"
            "write.bad.totalduration" => "integer"
            "write.bad.maxduration" => "integer"
            "write.bad.minduration" => "integer"
            "write.discarded.count" => "integer"
            "write.discarded.totalduration" => "integer"
            "write.discarded.maxduration" => "integer"
            "write.discarded.minduration" => "integer"
            "write.good.count" => "integer"
            "write.good.maxduration" => "integer"
            "write.good.minduration" => "integer"
            "write.good.totalduration" => "integer"
            "purge_deleted.bad.count" => "integer"
            "purge_deleted.bad.totalduration" => "integer"
            "purge_deleted.discarded.count" => "integer"
            "purge_deleted.discarded.totalduration" => "integer"
            "purge_deleted.good.count" => "integer"
            "purge_deleted.good.totalduration" => "integer"
            "purge_found_pending.bad.count" => "integer"
            "purge_found_pending.bad.totalduration" => "integer"
            "purge_found_pending.discarded.count" => "integer"
            "purge_found_pending.discarded.totalduration" => "integer"
            "purge_found_pending.good.count" => "integer"
            "purge_found_pending.good.totalduration" => "integer"
            "purge_found_pristine.bad.count" => "integer"
            "purge_found_pristine.bad.totalduration" => "integer"
            "purge_found_pristine.discarded.count" => "integer"
            "purge_found_pristine.discarded.totalduration" => "integer"
            "purge_found_pristine.good.count" => "integer"
            "purge_found_pristine.good.totalduration" => "integer"
            "purge_made_pending.bad.count" => "integer"
            "purge_made_pending.bad.totalduration" => "integer"
            "purge_made_pending.discarded.count" => "integer"
            "purge_made_pending.discarded.totalduration" => "integer"
            "purge_made_pending.good.count" => "integer"
            "purge_made_pending.good.totalduration" => "integer"
            "purge_recovered_pending.bad.count" => "integer"
            "purge_recovered_pending.bad.totalduration" => "integer"
            "purge_recovered_pending.discarded.count" => "integer"
            "purge_recovered_pending.discarded.totalduration" => "integer"
            "purge_recovered_pending.good.count" => "integer"
            "purge_recovered_pending.good.totalduration" => "integer"
            "foundPristine" => "integer"
            "foundPending" => "integer"
            "recoveredPendingGood" => "integer"
            "madePendingGood" => "integer"
            "deletedGood" => "integer"
            "recoveredPendingBad" => "integer"
            "madePendingBad" => "integer"
            "deletedBad" => "integer"
            "ldap_cache_hit.count" => "integer"
            "ldap_cache_miss.count" => "integer"
            "ldap_lookup_hit.count" => "integer"
            "ldap_lookup_miss.count" => "integer"
            "process_ldap.bad.count" => "integer"
            "process_ldap.bad.totalduration" => "integer"
            "process_ldap.discarded.count" => "integer"
            "process_ldap.discarded.maxduration" => "integer"
            "process_ldap.discarded.minduration" => "integer"
            "process_ldap.discarded.totalduration" => "integer"
            "process_ldap.good.count" => "integer"
            "process_ldap.good.maxduration" => "integer"
            "process_ldap.good.minduration" => "integer"
            "process_ldap.good.totalduration" => "integer"
            "d2c_list.bad.count" => "integer"
            "d2c_list.bad.totalduration" => "integer"
            "d2c_list.good.count" => "integer"
            "d2c_list.good.maxduration" => "integer"
            "d2c_list.good.minduration" => "integer"
            "d2c_list.good.totalduration" => "integer"
            "d2c_list.discarded.count" => "integer"
            "d2c_list.discarded.totalduration" => "integer"
            "d2c_delete.bad.count" => "integer"
            "d2c_delete.bad.totalduration" => "integer"
            "d2c_delete.good.count" => "integer"
            "d2c_delete.good.maxduration" => "integer"
            "d2c_delete.good.minduration" => "integer"
            "d2c_delete.good.totalduration" => "integer"
            "d2c_delete.discarded.count" => "integer"
            "d2c_delete.discarded.totalduration" => "integer"
            "d2c_rename.bad.count" => "integer"
            "d2c_rename.bad.totalduration" => "integer"
            "d2c_rename.good.count" => "integer"
            "d2c_rename.good.maxduration" => "integer"
            "d2c_rename.good.minduration" => "integer"
            "d2c_rename.good.totalduration" => "integer"
            "d2c_rename.discarded.count" => "integer"
            "d2c_rename.discarded.totalduration" => "integer"
            "wt_incident_insert.bad.count" => "integer"
            "wt_incident_insert.bad.totalduration" => "integer"
            "wt_incident_insert.good.count" => "integer"
            "wt_incident_insert.good.totalduration" => "integer"
            "wt_incident_insert.discarded.count" => "integer"
            "wt_incident_insert.discarded.totalduration" => "integer"
            "wt_incident_update.bad.count" => "integer"
            "wt_incident_update.bad.totalduration" => "integer"
            "wt_incident_update.good.count" => "integer"
            "wt_incident_update.good.totalduration" => "integer"
            "wt_incident_update.discarded.count" => "integer"
            "wt_incident_update.discarded.totalduration" => "integer"
            "wt_queries.bad.count" => "integer"
            "wt_queries.bad.totalduration" => "integer"
            "wt_queries.good.count" => "integer"
            "wt_queries.good.maxduration" => "integer"
            "wt_queries.good.minduration" => "integer"
            "wt_queries.good.totalduration" => "integer"
            "wt_queries.discarded.count" => "integer"
            "wt_queries.discarded.totalduration" => "integer"
        }
        add_field => { "[@metadata][subtype]" => "%{stattype}" }
        }
        mutate {
           lowercase => [ "[@metadata][subtype]" ]
        }
   } else if [type] =~ "logi-messages" {
        grok {
          match => {"message" => "^\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{LOGLEVEL:level}.*\]\[%{DATA:containertype}\]\[%{DATA:containername}\]\[%{DATA:tenant}\] %{GREEDYDATA:msg}"}
        }
        date {
          locale => en
          match => [ "timestamp", "ISO8601" ]
          target => "@timestamp"
          remove_field => [ "timestamp" ]
        }
 
        mutate {
           add_field => { "[@metadata][subtype]" => "all" }
        }
        mutate {
          lowercase => [ "[@metadata][subtype]" ]
        }
   } else if [type] =~ "logi2-stats" {
        mutate {
             add_field => { "[@metadata][subtype]" => "%{[labels][EventClassID]}" }
        }
        mutate {
          lowercase => [ "[@metadata][subtype]" ]
        }
   } else if [type] =~ "logi2-messages" {
        mutate {
           add_field => { "[@metadata][subtype]" => "all" }
        }
        mutate {
          lowercase => [ "[@metadata][subtype]" ]
        }
  } else if [type] =~ "ndata-pi-developer" or [type] =~ "ndata-pi-admin" or [type] =~ "ndata-pi-event" or [type] =~ "ndata-pi-awsplace" or [type] =~ "epo-pi" or [type] =~ "ndata-pi-webhook" or [type] =~ "ndata-pi-application" or [type] =~ "ndata-vault" or [type] =~ "ndata-vault" or [type] =~ "ndata-acr" or [type] =~ "ndata-d2c" {
    grok{
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:traceId}\] \[%{DATA:tenantId}\] \[%{DATA:thread}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "ndata-event-engine" {
    grok{
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:traceId}\] \[%{DATA:tenantId}\] \[%{DATA:eventId}\] \[%{DATA:thread}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "nector" {
    grok{
      match => {"message" => "^\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "jwt-authorizer" {
    grok {
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] \[%{DATA:tenantId}\] \[%{DATA:tokenId}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "ndata-cdc-application" {
    grok{
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:tenantId}\] \[%{DATA:thread}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "ndata-cds-application" {
    grok{
      match => {"message" =>  "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:thread}\] *\[%{DATA:tenantid}\] *\[%{DATA:clientid}\] *\[%{DATA:request-uri}\] *\[%{DATA:configid}\] %{GREEDYDATA:class} *\[%{DATA}\] [-] %{GREEDYDATA:actual_msg}"}
     }
  } else if [type] =~ "ndata-access-log" {
    grok {
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{DATA:application} %{IPV4:ipv4} %{DATA:user} %{DATA:auth} %{DATA:httpmethod} %{DATA:requestpath} %{DATA:protocol} %{DATA:status} %{DATA:size}"}
    }
  } else if [type] =~ "parser" or [type] =~ "parser-server-tasks-application" or [type] =~ "parser-reporting-application" or [type] =~ "parser-recovery-application" or [type] =~ "parser-key-application" {
    grok{
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} *\[%{DATA:tenantId}\] \[%{DATA:thread}\] %{GREEDYDATA:msg}"}
    }
  } else if [type] =~ "parser-logs" {
    grok {
      match => {"message" => "^%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}%{SPACE}\[%{DATA:class}\] %{GREEDYDATA:msg}"}
    }
  }
}
filter {
  fingerprint {
    method => "SHA1"
    target => "fingerprint"
  }
}

input {

  file {
    path => "/shared/logs/parser-server-tasks-application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "parser-server-tasks-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/parser-reporting-application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "parser-reporting-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/parser-recovery-application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "parser-recovery-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/parser-key-application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "parser-key-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/parser/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "parser"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }

  file {
    path => "/shared/logs/bridge/**/bridge.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "bridge"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    # boot log
    path => "/shared/*.log"
    start_position => "beginning"
    sincedb_path => "/shared/.sincedb"
    type => "boot"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }

  file {
    path => "/shared/logs/**/access_log*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-access-log"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }

  file {
    # JSON-based log
    path => "/shared/logs/**/*.json"
    exclude => ["kong-statistics.json"]
    start_position => "beginning"
    sincedb_path => "/shared/.sincedb_json"
    type => "json"
    codec => json {
    }
  }

  file {
    # nginx error log
    path => "/shared/logs/**/*error.nginx.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_enginx"
    type => "enginx"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME})"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  
  file {
    path => "/shared/logs/eval/workers-artifacts/*/worker.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/eval/worker.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/eval/nimbus.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/eval/*supervisor.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000

    }
  }
  file {
    path => "/shared/logs/mockapp/mockapp.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "mock"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000

    }
  }
  file {
    path => "/shared/logs/eval/access-web-*.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/eval/access-web-supervisor.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "eval"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/kafka/server.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "kafka-kfk"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/zoo/zoo.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "zoo"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/iam-ngnix-access.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "iam"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/cloud/cloud.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "cloud"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/c-application/c-application.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "service"
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601}\|"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/logi/stats.{wt,enricher,ar,evpurge,disc}+*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "logi-stats"
    codec => multiline {
      pattern => "^\[%{TIMESTAMP_ISO8601}\]"
      negate => true
      what => previous
      max_lines => 1000
    }
  }

  file {
    path => "/shared/logs/logi/messages.{wt,enricher,ar,evpurge,disc}+*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "logi-messages"
    codec => multiline {
      pattern => "^\[%{TIMESTAMP_ISO8601}\]"
      negate => true
      what => previous
      max_lines => 1000
    }
  }

  file {
    path => "/shared/logs/logi2/stats.*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "logi2-stats"
    codec => json
  }

  file {
    path => "/shared/logs/logi2/messages.*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "logi2-messages"
    codec => json
  }

  file {
    path => "/shared/logs/dp-im-mw/*.jsonl"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_json"
    type => "dp-im-mw"
    codec => json {
    }
  }

  file {
    path => "/shared/logs/dp-cms-mw/*.jsonl"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_json"
    type => "dp-cms-mw"
    codec => json {
    }
  }

  file {
    path => "/shared/logs/dp-cmmw/*.jsonl"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_json"
    type => "dp-cmmw"
    codec => json {
    }
  }

  file {
    path => "/shared/logs/dp-logigsmw-mw/*.jsonl"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_json"
    type => "dp-logigsmw-mw"
    codec => json {
    }
  }

  file {
    path => "/shared/logs/dp-sm-mw/*.jsonl"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb_json"
    type => "dp-sm-mw"
    codec => json {
    }
  }

  file {
    path => "/shared/logs/ndata-vault-application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-vault"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs//app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/ndata-jwt-authorizer/archived/app/app-*log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "jwt-authorizer"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000 # To handle error multiline_codec_max_lines_reached
    }
  }
  file {
    path => "/shared/logs/engine/event-engine.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-event-engine"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/nector/nector.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "nector"
    codec => multiline {
      pattern => "^\[%{TIMESTAMP_ISO8601}\]"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/webhook/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-webhook"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/webhook/archived/app/app-*.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-webhook"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/developer/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-developer"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/event/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-event"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/awsplace/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-awsplace"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/admin/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-admin"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/application/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/application/archived/app/access_log*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-pi-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/ndata-cdc-application/*/cdc*"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-cdc-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/ndata-cds-application/ds.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-cds-application"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/ndata-acr/acr.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-acr"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
  file {
    path => "/shared/logs/ndata-d2c/app.log"
    start_position => "beginning"
    sincedb_path => "/shared/logs/.sincedb"
    type => "ndata-d2c"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => previous
      max_lines => 1000
    }
  }
 
  }
}

output {
  if [type] =~ "default" {
    elastic {
      hosts => ["elastichost:443"]
      ssl => true
      template_overwrite => true
      index => "%{application}-%{+YYYY.MM.dd}"
      document_id => "%{fingerprint}"
    }
  } else if [type] =~ "json" {
    elastic {
      hosts => ["elastichost:443"]
      template_overwrite => true
      ssl => true
      index => "kong-%{+YYYY.MM.dd}"
      document_id => "%{fingerprint}"
    }
  } else if [type] == "logi-messages" or [type] == "logi-stats" or  [type] == "logi2-messages" or [type] == "logi2-stats"  {
    elastic {
      hosts => ["elastichost:443"]
      template_overwrite => true
      ssl => true
      index => "%{type}-%{[@metadata][subtype]}-%{+YYYY.MM.dd}"
      document_id => "%{fingerprint}"
    }
  } else {
   elastic {
      hosts => ["elastichost:443"]
      template_overwrite => true
      ssl => true
      index => "%{type}-%{+YYYY.MM.dd}"
      document_id => "%{fingerprint}"
    }
  }
}

OpenSearch/OpenDistro are AWS run products and differ from the original Elasticsearch and Kibana products that Elastic builds and maintains. You may need to contact them directly for further assistance.


By default the fingerprint filter generates a fingerprint from the [message] field. A file input with a json codec does not produce a [message] field unless the JSON itself contains one, so the fingerprint filter is usually a no-op here: the sprintf reference never gets substituted, and _id ends up set to the literal string %{fingerprint}.

Perhaps you need the concatenate_all_fields option, or set the source to [event][original].
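As a sketch of those two options (hedged — field availability depends on your Logstash version and inputs, and [event][original] is only populated on recent releases):

```
filter {
  # Option 1: hash the raw line as it arrived, before the json codec parsed it
  fingerprint {
    method => "SHA1"
    source => ["[event][original]"]
    target => "fingerprint"
  }

  # Option 2: hash every field on the event instead of a single source
  # fingerprint {
  #   method => "SHA1"
  #   concatenate_all_fields => true
  #   target => "fingerprint"
  # }
}
```

With either option the filter has something to hash on JSON events, so %{fingerprint} in the output section resolves instead of being passed through literally.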

Thank you @badger for the update.

filter {
  fingerprint {
    method => "SHA1"
    target => "fingerprint"
    source => ["user_id", "siblings", "birthday"]
  }
}
If user_id is listed in source, does that field have to be present on the event?
What I mean is: in my case, must the JSON-parsed field named user_id exist for the hash value to be calculated?
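To my understanding, source fields that are absent from the event are simply skipped, and if none of them exist the target is never set at all, which is exactly when %{fingerprint} shows up literally in _id. A rough Python sketch of that idea (plain hashlib standing in for the plugin; the "|" join is an invented concatenation, not the filter's exact byte-level behaviour):

```python
import hashlib

def sketch_fingerprint(event, sources):
    """Hash only the source fields that actually exist on the event;
    set nothing if none of them are present (mimicking the filter's
    skip-missing-fields behaviour, not its exact encoding)."""
    present = [str(event[f]) for f in sources if f in event]
    if not present:
        return None  # no source field present -> no fingerprint is set
    return hashlib.sha1("|".join(present).encode("utf-8")).hexdigest()

# An event missing user_id still gets a fingerprint from the other fields:
partial = {"siblings": 2, "birthday": "1990-01-01"}
print(sketch_fingerprint(partial, ["user_id", "siblings", "birthday"]))

# An event with none of the source fields gets no fingerprint at all:
print(sketch_fingerprint({}, ["user_id", "siblings", "birthday"]))
```

So yes: for user_id to contribute to the hash, the JSON parse has to have produced that field; events without it would be fingerprinted from whatever source fields remain.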

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.