Could not create connection to db

Hi there

I was trying to connect to an MSSQL database from Metricbeat, but I get 2 errors.

Can you help me?

Greetings

Hi @Jose_Campos -

  • Is this a new problem?
  • Does this error happen when you start Metricbeat, or is it an intermittent issue?
  • Is Metricbeat running on the same host as your database?
  • Which version of Metricbeat are you using?
  • Provide the debug logs, e.g. metricbeat -e -d "*"
  • Provide the Metricbeat configuration file.

Thank you.

Hi there

Answering all your questions.

  • It is the first time I have installed it.
  • This error happens when I start Metricbeat.
  • Yes, in fact I am using localhost as the SQL Server host.
  • The version is 7.6.0.
  • My metricbeat.yml file (further below, after the logs).
  • My mssql module file:
- module: mssql
  metricsets:
    - "transaction_log"
    - "performance"
  hosts: ["sqlserver//:userdb:passworddb@localhost"]


2020-03-04T16:20:06.842-0600    INFO    [beat]  instance/beat.go:958    Beat info       {"system_info": {"beat": {"path": {"config": "C:\\Program Files\\Metricbeat", "data": "C:\\Program Files\\Metricbeat\\data", "home": "C:\\Program Files\\Metricbeat", "logs": "C:\\Program Files\\Metricbeat\\logs"}, "type": "metricbeat", "uuid": "a815636d-341c-4606-a15d-16ee15c0d9b"}}}
2020-03-04T16:20:06.854-0600    INFO    [beat]  instance/beat.go:967    Build info      {"system_info": {"build": {"commit": "6a23e8f8f30f5001ba344e4e54d8d9cb82cb107c", "libbeat": "7.6.0", "time": "2020-02-05T23:13:00.000Z", "version": "7.6.0"}}}
2020-03-04T16:20:06.863-0600    INFO    [beat]  instance/beat.go:970    Go runtime info {"system_info": {"go": {"os":"windows","arch":"386","max_procs":2,"version":"go1.13.7"}}}
2020-03-04T16:20:06.934-0600    INFO    [beat]  instance/beat.go:974    Host info       {"system_info": {"host": {"architecture":"x86","boot_time":"2020-01-15T03:19:18.29-06:00","name":"cs-adelantos","ip":["/64","192.x.x.x/24","::1/128","127.0.0.1/8","fe80::5efe:c0a8:1d3/128","fe80::100:7f:fffe/64"],"kernel_version":"6.1.7601.24 45 (win7sp1_ldr_escrow.200102-1707)","mac":["00:0c:29:36:bd:15","00:00:00:00:00:00:00:e0","00:00:00:00:00:00:00:e0"],"os":{"family":"windows","platform":"windows","name":"Windows 7 Professional","version":"6.1","major":1,"minor":0,"patch":,"build":"7601.24544"},"timezone":"CST","timezone_offset_sec":-21600,"id":"05c89904-84b6-4242-a599-e0c564c19904"}}}
2020-03-04T16:20:06.952-0600    INFO    [beat]  instance/beat.go:1003   Process info    {"system_info": {"process": {"cwd": "C:\\Program Files\\Metricbeat", "exe": "C:\\Program Files\\Metricbeat\\metricbeat.exe", "name": "metricbeat.exe", "pid": 3960, "ppid": 3920, "start_time": "2020-03-04T16:20:02.787-0600"}}}
2020-03-04T16:20:06.959-0600    INFO    instance/beat.go:298    Setup Beat: metricbeat; Version: 7.6.0
2020-03-04T16:20:06.965-0600    DEBUG   [beat]  instance/beat.go:324    Initializing output plugins
2020-03-04T16:20:06.971-0600    INFO    [index-management]      idxmgmt/std.go:182      Set output.elasticsearch.index to 'metricbeat-7.6.0' as ILM is enabled.
2020-03-04T16:20:06.986-0600    INFO    elasticsearch/client.go:174     Elasticsearch url: http://192.x.x.x:9200
2020-03-04T16:20:06.992-0600    DEBUG   [publisher]     pipeline/consumer.go:137        start pipeline event consumer
2020-03-04T16:20:06.997-0600    INFO    [publisher]     pipeline/module.go:110  Beat name: cs-adelantos
2020-03-04T16:20:07.051-0600    DEBUG   [modules]       beater/metricbeat.go:155        Available modules and metricsets: Register [ModuleFactory:[aws, azure, beat, docker, elasticsearch, kibana, logstash, mongodb, mssql, mysql, oracle, postgresql, system, uwsgi, windows], MetricSetFactory:[aerospike/namespace, apache/status, appsearch/stats, aws/cloudwatch, aws/ec2, aws/rds, aws/s3_daily_storage, aws/s3_request, aws/sqs, azure/compute_vm, azure/compute_vm_scaleset, azure/monitor, azure/storage, beat/state, beat/stats, ceph/cluster_disk, ceph/cluster_health, ceph/cluster_status, ceph/monitor_health, ceph/osd_df, ceph/osd_tree, ceph/pool_disk, consul/agent, coredns/stats, couchbase/bucket, couchbase/cluster, couchbase/node, couchdb/server, docker/container, docker/cpu, docker/diskio, docker/event, docker/healthcheck, docker/image, docker/info, docker/memory, docker/network, dropwizard/collector, elasticsearch/ccr, elasticsearch/cluster_stats, elasticsearch/enrich, elasticsearch/index, elasticsearch/index_recovery, elasticsearch/index_summary, elasticsearch/ml_job, elasticsearch/node, elasticsearch/node_stats, elasticsearch/pending_tasks, elasticsearch/shard, envoyproxy/server, etcd/leader, etcd/metrics, etcd/self, etcd/store, golang/expvar, golang/heap, googlecloud/stackdriver, graphite/server, haproxy/info, haproxy/stat, http/json, http/server, jolokia/jmx, kafka/consumergroup, kafka/partition, kibana/stats, kibana/status, kubernetes/apiserver, kubernetes/container, kubernetes/controllermanager, kubernetes/event, kubernetes/node, kubernetes/pod, kubernetes/proxy, kubernetes/scheduler, kubernetes/state_container, kubernetes/state_cronjob, kubernetes/state_deployment, kubernetes/state_node, kubernetes/state_persistentvolume, kubernetes/state_persistentvolumeclaim, kubernetes/state_pod, kubernetes/state_replicaset, kubernetes/state_resourcequota, kubernetes/state_service, kubernetes/state_statefulset, kubernetes/system, kubernetes/volume, kvm/dommemstat, logstash/node, logstash/node_stats, memcached/stats, mongodb/collstats, mongodb/dbstats, mongodb/metrics, mongodb/replstatus, mongodb/status, mssql/performance, mssql/transaction_log, munin/node, mysql/galera_status, mysql/status, nats/connections, nats/routes, nats/stats, nats/subscriptions, nginx/stubstatus, oracle/performance, oracle/tablespace, php_fpm/pool, php_fpm/process, postgresql/activity, postgresql/bgwriter, postgresql/database, postgresql/statement, prometheus/collector, rabbitmq/connection, rabbitmq/exchange, rabbitmq/node, rabbitmq/queue, redis/info, redis/key, redis/keyspace, sql/query, stan/channels, stan/stats, stan/subscriptions, statsd/server, system/core, system/cpu, system/diskio, system/filesystem, system/fsstat, system/memory, system/network, system/network_summary, system/process, system/process_summary, system/raid, system/service, system/socket_summary, system/uptime, traefik/health, uwsgi/status, vsphere/datastore, vsphere/host, vsphere/virtualmachine, windows/perfmon, windows/service, zookeeper/connection, zookeeper/mntr, zookeeper/server], LightModules:[LightModules:[aws/elb, aws/ebs, aws/usage, aws/billing, aws/sns, cockroachdb/status, googlecloud/compute, activemq/queue, activemq/topic, activemq/broker, kafka/broker, kafka/producer, kafka/consumer, tomcat/threading, tomcat/memory, tomcat/requests, tomcat/cache]]]
2020-03-04T16:20:07.113-0600    INFO    instance/beat.go:439    metricbeat start running.
2020-03-04T16:20:07.119-0600    DEBUG   [cfgfile]       cfgfile/reload.go:133   Checking module configs from: C:\Program Files\Metricbeat/modules.d/*.yml
2020-03-04T16:20:07.113-0600    DEBUG   [service]       service/service_windows.go:72   Windows is interactive: true
2020-03-04T16:20:07.113-0600    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2020-03-04T16:20:07.129-0600    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: C:\Program Files\Metricbeat\modules.d\mssql.yml
2020-03-04T16:20:07.152-0600    DEBUG   [cfgfile]       cfgfile/cfgfile.go:193  Load config from file: C:\Program Files\Metricbeat\modules.d\system.yml
2020-03-04T16:20:07.162-0600    DEBUG   [cfgfile]       cfgfile/reload.go:147   Number of module configs found: 4
2020-03-04T16:20:07.165-0600    WARN    [cfgwarn]       transaction_log/transaction_log.go:58   BETA: The mssql transaction_log metricset is beta.
2020-03-04T16:20:09.759-0600    WARN    [cfgwarn]       performance/performance.go:52   BETA: The mssql performance metricset is beta.
2020-03-04T16:20:12.369-0600    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":249,"time":{"ms":249}},"total":{"ticks":467,"time":{"ms":467},"value":467},"user":{"ticks":218,"time":{"ms":218}}},"handles":{"open":149},"info":{"ephemeral_id":"ceaf4a2d-8919-46b4-a666-9e66a7dc8bcd","uptime":{"ms":9478}},"memstats":{"gc_next":11046160,"memory_alloc":8086064,"memory_total":15032608,"rss":35946496},"runtime":{"goroutines":23}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":2}}}}}
2020-03-04T16:20:12.386-0600    INFO    [monitoring]    log/log.go:154  Uptime: 9.5005434s
2020-03-04T16:20:12.391-0600    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2020-03-04T16:20:12.399-0600    INFO    instance/beat.go:445    metricbeat stopped.
2020-03-04T16:20:12.406-0600    ERROR   instance/beat.go:933    Exiting: 2 errors: could not create connection to db: error doing ping to db: Unable to get instances from Sql Server Browser on host sqlserver: dial udp: lookup sqlserver: no such host; could not create connection to db: error doing ping to db: Unable to get instances from Sql Server Browser on host sqlserver: dial udp: lookup sqlserver: no such host
Exiting: 2 errors: could not create connection to db: error doing ping to db: Unable to get instances from Sql Server Browser on host sqlserver: dial udp: lookup sqlserver: no such host; could not create connection to db: error doing ping to db: Unable to get instances from Sql Server Browser on host sqlserver: dial udp: lookup sqlserver: no such host

#==========================  Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.x.x.x:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.x.x.x:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Do the username and password have to be those of the master database?

If so, is there any way to encrypt that password so that it is not visible in the configuration file?

Thanks for everything.

Greetings

Not sure if this will help, but could you try separating the username and password into separate parameters? Like:

- module: mssql
  period: 10s
  metricsets:
    - "transaction_log"
    - "performance"
  hosts: ["sqlserver//localhost"]
  username: userdb
  password: passworddb
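
(For reference, and untested on my side: I believe the module also accepts a full connection URI in hosts, with the credentials embedded after the sqlserver:// scheme. Something like the sketch below, reusing your userdb/passworddb placeholders.)

- module: mssql
  period: 10s
  metricsets:
    - "transaction_log"
    - "performance"
  # sketch: credentials embedded in the URI instead of separate username/password options
  hosts: ["sqlserver://userdb:passworddb@localhost"]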

I believe you can use environment variables for the username and password so they won't show up in the config file. For example:

- module: mssql
  period: 10s
  metricsets:
    - "transaction_log"
    - "performance"
  hosts: ["sqlserver//localhost"]
  username: '${USERNAME:""}'
  password: '${PASSWORD:""}'
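
(If you would rather not depend on environment variables, I think the Metricbeat keystore gives the same effect: create it with metricbeat keystore create, add a secret with metricbeat keystore add DB_PASSWORD, and reference it in the config exactly like an environment variable. Sketch below; DB_PASSWORD is just an example key name.)

- module: mssql
  period: 10s
  metricsets:
    - "transaction_log"
    - "performance"
  hosts: ["sqlserver://localhost"]
  username: userdb
  # resolved from the Metricbeat keystore (or the environment) at startup
  password: '${DB_PASSWORD}'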

Hi there Kaiyan

It didn't work. Do you think it is a DB issue?

I tried with telnet and got a response.

Greetings

Hi there Kaiyan

Is this a keystore?

Greetings

@Mario_Castro Do you know more details on this?

Hi @Jose_Campos 🙂

Can you try setting the host to sqlservertest, to see if it's taking the host from the wrong place?
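
Something like this is what I have in mind (my reading of the suggestion; sqlservertest is just a deliberately bogus name, and userdb/passworddb are your placeholders):

- module: mssql
  metricsets:
    - "transaction_log"
    - "performance"
  # if the error changes to "lookup sqlservertest: no such host", this hosts value is
  # the one being used; if it still says "sqlserver", the host comes from somewhere else
  hosts: ["sqlservertest"]
  username: userdb
  password: passworddb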
