CouchDB module not generating metrics

Hi, I'm having trouble getting metrics from CouchDB into Elastic Cloud. System metrics are flowing fine. I have /etc/metricbeat/modules.d/couchdb.yml as follows:

- module: couchdb
  metricsets: ["server"]
  period: 10s
  hosts: ["127.0.0.1:5984/_node/_local/_stats"]
  username: admin
  password: "topsecret"

But when I run metricbeat test modules couchdb server I get:

couchdb...
  server...OK

without any actual JSON/values. Testing the system module produces JSON fine.
I have tried amending the couchdb.yml file, and when I remove the credentials the test command shows:

couchdb...
  server...
    error... ERROR error in http fetch: HTTP error 401 in : 401 Unauthorized

So I believe the credentials and endpoint are correct. I am also able to curl the endpoint:

curl http://admin:topsecret@127.0.0.1:5984/_node/_local/_stats

And that gives me the stats that metricbeat should need.
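(For context, the payload is a big nested JSON document where each leaf metric carries a value plus type/desc metadata. An illustrative excerpt, with made-up values:)

{
  "couchdb": {
    "httpd": {
      "requests": {
        "value": 1234,
        "type": "counter",
        "desc": "number of HTTP requests"
      }
    }
  }
}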
I'm stumped as to what I'm doing wrong, though.

Hi @jonnymccullagh, welcome to the community.

Did you run the command
filebeat setup -e

Did you look at the filebeat logs?

You did not share your filebeat.yml, so it's hard to help with that part.

The next thing I would do is run filebeat with the -d "*" option, which will show what is being published, or even set
output.console

and the events will be printed to the console...
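A minimal sketch of that change in the beat's YAML (only one output can be enabled at a time, so comment out output.elasticsearch while testing):

#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.console:
  pretty: true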

Thanks for replying, Stephen. I had not installed filebeat on the server yet, as it was metricbeat I was trying to use. I have now installed filebeat (via Ansible) and ran filebeat setup -e:

Exiting: error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/filebeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: %!w(<nil>). Response: {"statusCode":403,"error":"Forbidden","message":"Unable to bulk_create index-pattern"}

I tried:
filebeat -d "*"

and it gave:

Exiting: data path already locked by another beat

So I stopped the service and tried again but it just hung.

Is there any way I can run metricbeat test modules couchdb server with more verbose output?

Sorry, Filebeat / Metricbeat, same general process :slight_smile:

Sorry for that distraction... I think you want metrics... so let's stick with metricbeat.

metricbeat is for metrics
filebeat is for logs

You cannot run metricbeat setup when metricbeat is already running... you cannot run more than one metricbeat process at a time.

You will need to:

1. Stop metricbeat
2. Clean up the current indices
3. Run setup
4. Start metricbeat again

That is when I would run:

metricbeat -e -d "*"
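To pick the couchdb events out of that firehose, a rough sketch (the narrower "publish" debug selector limits the output to published events):

metricbeat -e -d "publish" 2>&1 | grep -i couchdb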

Or use the console output.

No, I think that is the most verbose test...

Also, sharing your metricbeat.yml will help.

I stopped the metricbeat service. I don't know what 'clean up the current indices' means.
I ran metricbeat setup -e; the output looked OK except for:

{"log.level":"error","@timestamp":"2022-12-20T14:59:41.197Z","log.origin":{"file.name":"instance/beat.go","file.line":1051},"message":"Exiting: error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/metricbeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: %!w(<nil>). Response: {\"statusCode\":403,\"error\":\"Forbidden\",\"message\":\"Unable to bulk_create index-pattern\"}","service.name":"metricbeat","ecs.version":"1.6.0"}
Exiting: error importing Kibana dashboards: fail to import the dashboards in Kibana: Error importing directory /usr/share/metricbeat/kibana: failed to import Kibana index pattern: 1 error: error loading index pattern: returned 403 to import file: Unable to bulk_create index-pattern: %!w(<nil>). Response: {"statusCode":403,"error":"Forbidden","message":"Unable to bulk_create index-pattern"}

I ran metricbeat -e -d "*" and could see debug output, but I don't know what I am supposed to be looking for in it.

The metricbeat.yml is:

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
name: some-couchdb-server
fields_under_root: true
fields:
  env: legacy
  role: couchdb
setup.kibana:
cloud.id: REDACTED
cloud.auth: REDACTED
output.elasticsearch:
  hosts: ["localhost:9200"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - add_labels:
      labels:
        env: qa
        role: couchdb

In modules.d, the couchdb.yml is:

- module: couchdb
  metricsets: ["server"]
  period: 10s
  hosts: ["10.0.0.1:5984/_node/_local/_stats"]
  username: admin
  password: REDACTED

Upon further digging, I think I might be running into this bug: PR 26950.
The lack of support for CouchDB version 3 was first reported back in July 2021, and the PR is being ignored.

@jonnymccullagh Yes, I think you are correct! I see you pinged the issue... good.

I tried using metricbeat 8.5.3 (latest) and I get a pretty bad error... different from yours.
BTW, I did not see which version of metricbeat you are using.

CouchDB 3.2.2 on macOS

- module: couchdb
  metricsets: ["server"]
  period: 10s
  hosts: ["localhost:5984/_node/_local/_stats"]
  username: admin
  password: admin

Error

{"log.level":"error","@timestamp":"2022-12-20T12:56:45.434-0800","log.origin":{"file.name":"runtime/panic.go","file.line":220},"message":"recovered from panic while fetching 'couchdb/server' for host 'localhost:5984'. Recovering, but please report this.","service.name":"metricbeat","error":{"message":"runtime error: invalid memory address or nil pointer dereference"},"stack":"github.com/elastic/elastic-agent-libs/logp.Recover\n\tgithub.com/elastic/elastic-agent-libs@v0.2.11/logp/global.go:102\nruntime.gopanic\n\truntime/panic.go:838\nruntime.panicmem\n\truntime/panic.go:220\nruntime.sigpanic\n\truntime/signal_unix.go:818\ngithub.com/elastic/beats/v7/metricbeat/module/couchdb/server.(*MetricSet).Fetch\n\tgithub.com/elastic/beats/v7/metricbeat/module/couchdb/server/server.go:100\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*metricSetWrapper).fetch\n\tgithub.com/elastic/beats/v7/metricbeat/mb/module/wrapper.go:253\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*metricSetWrapper).startPeriodicFetching\n\tgithub.com/elastic/beats/v7/metricbeat/mb/module/wrapper.go:225\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*metricSetWrapper).run\n\tgithub.com/elastic/beats/v7/metricbeat/mb/module/wrapper.go:209\ngithub.com/elastic/beats/v7/metricbeat/mb/module.(*Wrapper).Start.func1\n\tgithub.com/elastic/beats/v7/metricbeat/mb/module/wrapper.go:149","ecs.version":"1.6.0"}

In the meantime, I tried two other things.

What seemed most promising was to enable the Prometheus endpoint and then use the metricbeat prometheus module. I got it connected, but it looks to me (on a Mac, and a complete newbie to CouchDB) like the Prometheus output is malformed...

Here is my config

- module: prometheus
  period: 10s
  hosts: ["localhost:17986"]
  metrics_path: "/_node/_local/_prometheus"
  username: admin
  password: admin
  use_types: true
  rate_counters: true
  metrics_filters:
    exclude: ["couchdb_erlang_memory*"]

and so metricbeat complains / fails (note that the metrics_filters exclude doesn't help here, because decoding fails before any filtering is applied):

{"log.level":"error","@timestamp":"2022-12-20T12:37:41.144-0800","log.origin":{"file.name":"module/wrapper.go","file.line":256},"message":"Error fetching data for metricset prometheus.collector: unable to decode response from prometheus endpoint: decoding of metric family failed: text format parsing error in line 451: second TYPE line for metric name \"couchdb_erlang_memory_bytes\", or TYPE reported after samples","service.name":"metricbeat","ecs.version":"1.6.0"}

If you hit the Prometheus endpoint yourself, you can see a bunch of duplicated data at the end.
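For example, something like this shows the tail of the scrape (same port and credentials as the config above):

curl -s http://admin:admin@localhost:17986/_node/_local/_prometheus | tail -n 40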

# TYPE couchdb_erlang_memory_bytes gauge
couchdb_erlang_memory_bytes{memory_type="total"} 44529392
couchdb_erlang_memory_bytes{memory_type="processes"} 12211832
couchdb_erlang_memory_bytes{memory_type="processes_used"} 12209520
couchdb_erlang_memory_bytes{memory_type="system"} 32317560
couchdb_erlang_memory_bytes{memory_type="atom"} 631001
couchdb_erlang_memory_bytes{memory_type="atom_used"} 600131
couchdb_erlang_memory_bytes{memory_type="binary"} 294224
couchdb_erlang_memory_bytes{memory_type="code"} 10420983
couchdb_erlang_memory_bytes{memory_type="ets"} 2291256
# TYPE couchdb_erlang_gc_collections_total counter
couchdb_erlang_gc_collections_total 71603
# TYPE couchdb_erlang_gc_words_reclaimed_total counter
couchdb_erlang_gc_words_reclaimed_total 293462864
# TYPE couchdb_erlang_context_switches_total counter
couchdb_erlang_context_switches_total 240808
# TYPE couchdb_erlang_reductions_total counter
couchdb_erlang_reductions_total 381817103
# TYPE couchdb_erlang_processes gauge
couchdb_erlang_processes 381
# TYPE couchdb_erlang_process_limit gauge
couchdb_erlang_process_limit 262144
# TYPE couchdb_erlang_io_recv_bytes_total counter
couchdb_erlang_io_recv_bytes_total 28667
# TYPE couchdb_erlang_io_sent_bytes_total counter
couchdb_erlang_io_sent_bytes_total 1582972
# TYPE couchdb_erlang_message_queues gauge
couchdb_erlang_message_queues 0
# TYPE couchdb_erlang_message_queue_min gauge
couchdb_erlang_message_queue_min 0
# TYPE couchdb_erlang_message_queue_max gauge
couchdb_erlang_message_queue_max 0
# TYPE couchdb_erlang_scheduler_queues gauge
couchdb_erlang_scheduler_queues 0
# TYPE couchdb_erlang_dirty_cpu_scheduler_queues gauge
couchdb_erlang_dirty_cpu_scheduler_queues 0
# TYPE couchdb_erlang_memory_bytes gauge    <---- DUPLICATED from here down
couchdb_erlang_memory_bytes{memory_type="total"} 44532664
couchdb_erlang_memory_bytes{memory_type="processes"} 12214920
couchdb_erlang_memory_bytes{memory_type="processes_used"} 12212608
couchdb_erlang_memory_bytes{memory_type="system"} 32317744
couchdb_erlang_memory_bytes{memory_type="atom"} 631001
couchdb_erlang_memory_bytes{memory_type="atom_used"} 600131
couchdb_erlang_memory_bytes{memory_type="binary"} 294344
couchdb_erlang_memory_bytes{memory_type="code"} 10420983
couchdb_erlang_memory_bytes{memory_type="ets"} 2291256
# TYPE couchdb_erlang_gc_collections_total counter
couchdb_erlang_gc_collections_total 71605
# TYPE couchdb_erlang_gc_words_reclaimed_total counter
couchdb_erlang_gc_words_reclaimed_total 293516612
# TYPE couchdb_erlang_context_switches_total counter
couchdb_erlang_context_switches_total 240817
# TYPE couchdb_erlang_reductions_total counter
couchdb_erlang_reductions_total 381844280
# TYPE couchdb_erlang_processes gauge
couchdb_erlang_processes 381
# TYPE couchdb_erlang_process_limit gauge
couchdb_erlang_process_limit 262144
# TYPE couchdb_erlang_ets_table gauge
couchdb_erlang_ets_table 171

I also tried the HTTP endpoint, which creates VERY verbose output; I had to increase the max field limit in the template, but it did work in the sense that I could get the data in... you could probably filter some of the data out with a processor.

- module: http
  metricsets: ["json"]
  enabled: true
  period: 10s
  hosts: ["localhost:5984"]
  path: "/_node/_local/_stats"
  namespace: "couchdb_namespace"
  method: "GET"
  username: admin
  password: admin
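With the json metricset, the whole response lands under http.<namespace>, so documents carry field paths like these (illustrative, following the CouchDB stats tree):

http.couchdb_namespace.couchdb.httpd.requests.value
http.couchdb_namespace.couchdb.httpd.requests.type
http.couchdb_namespace.couchdb.httpd.requests.desc

which is why the field count explodes: every metric contributes value/type/desc leaves.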

I thought I would just report back my findings... and perhaps a workaround

Thanks, Stephen; I'm currently using 8.4.2. Your http workaround sounds like it would work fine.
Regarding "I had to increase the max field limit in the template", could I ask where you make that change?
I have an index template named 'metricbeat-8.4.2' where I upped the limit from 10000 to 20000, so it now reads:

{
  "index": {
    "lifecycle": {
      "name": "metricbeat"
    },
    "codec": "best_compression",
    "mapping": {
      "total_fields": {
        "limit": "20000"
      }
    },

On the CouchDB server I was still getting metricbeat logs with:

{\"type\":\"mapper_parsing_exception\",\"reason\":\"failed to parse\",\"caused_by\":
{\"type\":\"illegal_argument_exception\",\"reason\":\"Limit of total fields [10000] 
has been exceeded while adding new fields [495]\"}}, dropping 
event!","service.name":"metricbeat","ecs.version":"1.6.0"}

I then also amended .ds-metricbeat-8.4.2-2022.12.21-000001 to 20000 but am still getting the same error in metricbeat.
So I suppose I have two questions:

  • Are the fields from CouchDB (possibly a few hundred) really pushing beyond this 10000 limit?
  • How can I increase the total fields limit?

Hi @jonnymccullagh, glad you / we have a workaround...

Perhaps a better idea... I tried this and it worked: get rid of the cruft. You could also thin it down more; there are so many fields, I suspect you are using maybe 5% of them, if that.

I used the drop_fields processor to drop all the desc and type fields; this worked fine without modifying the field limit and makes the docs much more manageable... the field names are the description anyway :slight_smile:

For some reason, dropping just the descriptions did not lower it enough; not sure why...

So this works... without editing the field limits:

- module: http
  metricsets: ["json"]
  enabled: true
  period: 10s
  hosts: ["localhost:5984"]
  path: "/_node/_local/_stats"
  namespace: "couchdb_namespace"
  method: "GET"
  username: admin
  password: admin
  processors:
    - drop_fields:
        fields: ["/desc$/", "/type$/"]

Here I dropped the desc / type fields and the dreyfus stuff as an example:

  processors:
    - drop_fields:
        fields: ["/desc$/", "/type$/", "/^http.couchdb_namespace.dreyfus/"]

Just to answer your question, I ran this:

PUT .ds-metricbeat-8.5.3-2022.12.21-000001/_settings
{
  "index": {
    "mapping": {
      "total_fields": {
        "limit": "20000"
      }
    }
  }
}

and, without adding the drops etc., that worked fine... and yes, you would need to do it in the template as well. I think cleaning up fields is probably better, though.
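If you'd rather bake the limit into the template instead of patching it by hand, a sketch using the setup.template.settings block already in your metricbeat.yml (then re-run metricbeat setup --index-management; setup.template.overwrite is needed so the existing template gets replaced):

setup.template.overwrite: true
setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  index.mapping.total_fields.limit: 20000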

BTW CouchDB is not easy to kill on the Mac LOL!

Hey @jonnymccullagh, besides the above I found perhaps an even bettererer workaround... I think... (not sure why I can't drop this... interesting :slight_smile:)

So what I did is run a Prometheus exporter for CouchDB (gesellix/couchdb-prometheus-exporter).

I started it like this (I am sure you know a better way):

docker run --rm -p 9984:9984 gesellix/couchdb-prometheus-exporter --couchdb.uri=http://admin:admin@host.docker.internal:5984 --logtostderr
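To sanity-check the exporter before pointing metricbeat at it, something like:

curl -s http://localhost:9984/metrics | head -n 20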

Then I used the metricbeat prometheus module and got very nicely formatted metrics!

- module: prometheus
  period: 10s
  hosts: ["localhost:9984"]
  metrics_path: "/metrics"
  use_types: true
  rate_counters: true
  # username: admin
  # password: admin

Thanks for your persistence, Stephen. I'm now able to graph those metrics.


I learned a lot going through this so thanks again.

