Need to ship Confluent Cloud logs and metrics

Hi team,

I need to ship Confluent Cloud logs/metrics to Beats instances installed on a Kubernetes cluster, and eventually to Elasticsearch.

I have read about the Elasticsearch sink connector, but can we first ship the data to Beats and then to Elasticsearch? I want the metrics and logs to be in ECS. Is this possible? If so, what module should I use, and can someone point me to some documentation for reference?

Hi @sidharth_vijayakumar, how are those Confluent Cloud logs available? Are they in a log file a Beat can read? Can they be fetched via HTTP? Can they be sent via HTTP or TCP somewhere else?

Filebeat supports ingesting logs in many different ways; I need to better understand how those logs are exposed in order to guide you.

Have you looked at our existing inputs and modules?

Some of them allow Filebeat to run on a different host than the one producing the logs.


We were thinking of using the link below to send Confluent metrics and logs to Beats.
But Confluent is a cloud SaaS, and Beats would run as a deployment in an OpenShift cluster.

I think it's possible to get the data via HTTP, but can the HTTP module help? The documentation says the host must have Beats installed, but it's not possible to install Beats on Confluent Cloud. We also thought about using Confluent's sink connector, but I'm not sure it could send Confluent's own metrics; I believe it can only send the data present in the Kafka topic.

If the logs you want are available through the API, that looks compatible with the HTTP JSON input, which also supports a number of transformations on requests/responses.

Give it a try.
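For illustration, a minimal sketch of what such a Filebeat `httpjson` input could look like. The URL, interval, and credentials here are placeholders, not Confluent Cloud's actual API details:

```yaml
filebeat.inputs:
- type: httpjson
  # How often to poll the endpoint.
  interval: 1m
  # Placeholder endpoint; substitute the real Confluent Cloud API URL.
  request.url: https://api.example.com/logs
  request.method: GET
  request.transforms:
    # Placeholder auth header; check which scheme the API expects.
    - set:
        target: header.Authorization
        value: 'Basic ${API_CREDENTIALS}'

output.elasticsearch:
  hosts: ["${ELASTIC_HOST}"]
```

The `request.transforms` section is where headers or query parameters can be injected per request.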

If the logs you want are available in Kafka, the Elasticsearch Service Sink Connector also looks feasible.

Hey @TiagoQueiroz ,
I cannot find an equivalent of the HTTP JSON input in the Metricbeat documentation.

Is it possible to send logs and metrics from a SaaS to Beats deployed on Kubernetes and then to Elastic?

Indeed, Metricbeat does not support it :confused: . I'll look into whether there is a solution for metrics, but there probably isn't one at the moment.


Am I missing something? (Probably.) I'm certainly not a Confluent Cloud expert.

It also looks like Confluent Cloud supports a Prometheus endpoint.

You should be able to use that; it may make things easier.
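As a rough sketch, a Metricbeat `prometheus` module configuration for scraping such an endpoint might look like this. The export path, query parameter, and cluster ID are illustrative assumptions and should be checked against the Confluent documentation:

```yaml
metricbeat.modules:
- module: prometheus
  metricsets: ["collector"]
  period: 60s
  # Illustrative host and path; verify the real export URL in Confluent's docs.
  hosts: ["https://api.telemetry.confluent.cloud"]
  metrics_path: /v2/metrics/cloud/export
  query:
    # Placeholder cluster ID to scope the export.
    resource.kafka.id: ["lkc-xxxxx"]
  # Confluent Cloud API key/secret as basic auth.
  username: "${CONFLUENT_API_KEY}"
  password: "${CONFLUENT_API_SECRET}"
```

Since the endpoint is scraped over HTTP, Metricbeat only needs outbound connectivity to it, not an agent inside Confluent Cloud.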


Thank you @stephenb !! For some reason I missed those modules.

@sidharth_vijayakumar would that work for you? Judging by the docs, they seem to work well for your case.


Thanks, @stephenb and @TiagoQueiroz. The documentation says it's recommended to install Beats on the server, but in my case that's not possible as it's a SaaS. Let me give this a try.

Spin up a small VM anywhere, put Metricbeat on it, and run it. Metricbeat does not need to be on the same server that's producing the metrics; just think of it as an agent.

It will need connectivity to that Confluent endpoint and to the Elasticsearch cluster you are sending the metrics to.

@stephenb yes. Instead of a VM, I will deploy Metricbeat on the OpenShift cluster as a deployment and then try to capture the Confluent metrics. Once that is done, I can send them to Elasticsearch. That's my plan for tackling this.

When I tried to do this, I got the error below. Is there anything else I need to enable to ensure this works properly?

2022-03-11T03:46:25.067Z INFO instance/beat.go:442 metricbeat stopped.
2022-03-11T03:46:25.067Z ERROR instance/beat.go:989 Exiting: no metricsets configured for module 'http'
Exiting: no metricsets configured for module 'http'

At this point you need to share your configs; otherwise we'll just be guessing. Please share your Metricbeat configs.

Plus, I often test configs on a simple VM before trying to deploy them to K8s, just to reduce the variables, check connectivity, etc.

Either way, share your configs

I am using the Elastic Helm charts to deploy Metricbeat on K8s.

Please find the configs below :

kind: ConfigMap
apiVersion: v1
metadata:
  name: elastic-metricbeat-deployment-config
  namespace: dev
  labels:
    type: component
data:
  metricbeat.yml: |
    metricbeat.modules:
    - module: http
      enabled: true
      period: 10s
      hosts: ["${CONFLUENT_HTTP_ENDPOINT}"]
    fields_under_root: true
    fields:
      cluster_name: ${OS_ENVIRONMENT}
      dataset_name: ${OS_DATASET}
    setup.ilm.rollover_alias: "test"
    setup.template.settings:
      index.number_of_shards: 2
    output.elasticsearch:
      hosts: ["${ELASTIC_HOST}"]
      # This can be used for service account based authorization:
      api_key: ${API_KEY_METRIC}
      protocol: https

As the error says, you're missing the metricset. You need to add:

metricsets: ["json"]

And / or

metricsets: ["server"]
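Applied to the `http` module section above, that would look something like the sketch below. The `json` metricset is for polling the endpoint, `server` is for receiving pushed data; the `namespace` value here is an arbitrary name of my choosing:

```yaml
metricbeat.modules:
- module: http
  metricsets: ["json"]
  period: 10s
  hosts: ["${CONFLUENT_HTTP_ENDPOINT}"]
  # Required by the json metricset: the key under which
  # the response JSON will be stored in the event.
  namespace: "confluent"
  method: "GET"
```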

Also, if you're just getting started, I would not change anything like the rollover alias. You're just going to cause yourself trouble until you understand how all that works. I would use as many of the defaults as possible.

Also, I still think the Confluent Prometheus endpoint is probably a better way to go.


Hey @stephenb & @TiagoQueiroz

Can someone tell me what `namespace` is in this documentation? I am not able to figure out what it means or how to find the right value for it.

The JSON structure returned by the HTTP endpoint will be added to the provided namespace field as shown in the following example:
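Per that description, `namespace` is simply a label you pick; Metricbeat nests the endpoint's JSON response under `http.<namespace>` in the resulting event. For example, with the arbitrary value `"confluent"`, a response like `{"status": "up"}` would land roughly as shown in the comment below:

```yaml
# In metricbeat.yml:
- module: http
  metricsets: ["json"]
  namespace: "confluent"
  hosts: ["${CONFLUENT_HTTP_ENDPOINT}"]

# Resulting event fields (any valid name works as the namespace):
# "http": {
#   "confluent": {
#     "status": "up"
#   }
# }
```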


Thanks, I was finally able to configure it.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.