How to identify source of the PCF logs in elastic stack

Hi, I am trying to implement the Elastic Stack for a PCF environment where I have two datacenters, SCC and GCC.
How can I identify whether a log came from SCC or from GCC?

What is PCF? If you saw a log event today, how would you tell which datacenter it came from?

Hi @gpvikas145, welcome to the community. Assuming you want to use Filebeat to ship app and component logs from PCF to Elasticsearch... is that what you want to do?

Simply add a field (or fields) in the filebeat.yml that you deploy to each foundation:

processors:
  - add_fields:
      target: 'cloudfoundry'
      fields:
        foundation.name: 'elastic-dev-foundation-v1'
        foundation.datacenter: 'data-center-west'
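With that processor in place, every event shipped from a foundation carries its datacenter, so you can tell SCC and GCC apart at query time. A sketch of what the enriched event would look like (the values are the samples above; the rest of the document is elided):

```json
{
  "@timestamp": "2023-01-01T00:00:00.000Z",
  "message": "...",
  "cloudfoundry": {
    "foundation": {
      "name": "elastic-dev-foundation-v1",
      "datacenter": "data-center-west"
    }
  }
}
```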

@rugenl PCF is Pivotal Cloud Foundry

@gpvikas145 here is a sample filebeat.yml... it uses Elastic Cloud, but you can swap in a normal Elasticsearch endpoint.

You should also run filebeat setup -e from a local dev box to set up the template, index, etc.
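For example (assuming filebeat.yml is in the current directory; adjust the config path for your install):

```
# One-time setup from a box that can reach the cluster:
# loads the index template and other setup assets into Elasticsearch.
filebeat setup -e -c filebeat.yml
```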

#=========================== Filebeat inputs =============================

# Configure the input to access loggregator to forward the log events.
filebeat.inputs:
  - &cloudfoundry
    type: cloudfoundry
    api_address: "https://api.system.my-pcf-domain.net"
    client_id: "beatsclient"
    client_secret: "mysecret"
    # IMPORTANT if you run multiple filebeats... 
    shard_id: "filebeat-7-17.4-v1-subscription-id-99999"
    ssl:
      verification_mode: none

#================================ Outputs =====================================

# Configure the Elasticsearch output either to a specific host or using
# Elastic Cloud.

#-------------------------- Elasticsearch output ------------------------------
# NOTE: when cloud.id is set it overrides the hosts below,
# so configure either a host list or Elastic Cloud, not both.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["${ELASTICSEARCH_OUTPUT}:9200"]

cloud.id: "my-cluster:asdfsadfsadfasdfasdfasdfMDViOTRjNDA4NGMzNmNiM2JkNzRjNDY3JGIxZTUyOWEwNTBkNjRkODZhMzIxZTBhMjU3YjRlODhh"
cloud.auth: "elastic:asdfsadfasdfasdf"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  # DO NOT USE add_cloudfoundry_metadata ON NEW 2.8+ TAS
  # - add_cloudfoundry_metadata:
  #     <<: *cloudfoundry
  #     cache_duration: 600
  - add_fields:
      target: 'cloudfoundry'
      fields:
        test.name: 'filebeat-v1-wo-mp'
        foundation.name: 'elastic-dev-foundation-v1'
        foundation.datacenter: 'datacenter-west'
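Once those fields are indexed, a term filter on them separates the foundations. For example, in Kibana Dev Tools (the index pattern and value are the samples from this config):

```
GET filebeat-*/_search
{
  "query": {
    "term": {
      "cloudfoundry.foundation.datacenter": "datacenter-west"
    }
  }
}
```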

monitoring:
  enabled: true

  cloud.id: "metrics-default:dsfgsdfgsdfgmVzLmlvJDQxNDFlZTVhNDMwYzQwYjFiMTg4ZGQzYzhjZGMxZjlmJDE5MmJmOGJiZjA2OTQxN2ViZWRmYTA3OTc5Zjc5MmZh"
  cloud.auth: "elastic:sdgdsfgsdfg8UX6D9o"

# Approach A: Tuning Parameters
queue.mem:
  events: 4096
  flush.min_events: 2048
  flush.timeout: 1s

# NOTE: fold these settings into the output.elasticsearch section above;
# a second top-level output.elasticsearch key will conflict.
output.elasticsearch:
  bulk_max_size: 200
  worker: 4
  # Optionally route events through an ingest pipeline:
  # pipeline: cloudfoundry

# Approach B: Tuning Parameters
# queue.mem:
#   flush.timeout: 0s

# output.elasticsearch:
#   bulk_max_size: 0

# For when we want more than 1 shard
# setup.template.enabled: false