MSSQL monitoring with Metricbeat?

Hi,

Note:
- Using ELK 7.9.2 in Debian 10
- MS SQL Server 2016 SP2 Standard in Windows Server 2012

I am trying to monitor my MS SQL Server with Elasticsearch and visualize the data in Kibana. After some research I found and tried Metricbeat, enabling the mssql module. The pre-loaded dashboards look great. The issue is that the module only offers metrics/rate data, but I am looking for more detailed monitoring: queries, users, timing, logs, etc. I need to be able to check the transaction log entries for INSERT, UPDATE, DELETE, and DROP operations rather than just the rate.

Since the mssql module only gives me rates, I found out that I can query the MS SQL server directly using the sql module. I enabled it and, after spending some time struggling with the config parameters, I resolved the errors that were preventing the Metricbeat process from starting. However, I am still unable to query my DB. Instead, it looks like Metricbeat always queries the same DB, mssqlsystemresource, ignoring the DB I set in the config file.

Below is the sql module config file. I uploaded a sample DB and, for testing, this is just a very simple query to verify the connection and DB access.

sql.yml

- module: sql
  metricsets:
    - query
  period: 10s
  #hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"]
  hosts: ["sqlserver://localhost, user='server_name\User1', password='SECRET', dbname='BikeStores', sslmode='disable'"]
  driver: "mssql"
  sql_query: "SELECT COUNT(*) FROM fn_dblog(null,null)"
  sql_response_format: table

What could I be doing wrong?
Please help!

Thank you

Looking at this comment in the code for the sql Metricbeat module, I suspect it doesn't yet understand a SQL Server DSN. That said, I couldn't find a GitHub issue requesting such an enhancement either so I'm not sure if there are plans for the sql Metricbeat module to support SQL Server or not.

@Mario_Castro, perhaps you can shed some light here?

Yeah exactly, it doesn't support DSNs yet. I think you can achieve that if you use a full URL host with username, password and the database.
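For reference, a full URL DSN of the shape accepted by the go-mssqldb driver (which the sql module uses under the hood) would look something like the sketch below. The port, credentials, and database name are placeholders, and the `?database=` query parameter is the documented way that driver selects a database from a URL:

```yaml
- module: sql
  metricsets:
    - query
  period: 10s
  # Full URL DSN: credentials and the target database go into the URL itself.
  # Host, credentials, and "BikeStores" here are placeholders.
  hosts: ["sqlserver://myuser:mypassword@localhost:1433?database=BikeStores"]
  driver: "mssql"
  sql_query: "SELECT COUNT(*) AS log_records FROM fn_dblog(NULL, NULL)"
  sql_response_format: table
```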

Hi @shaunak and @Mario_Castro,

I tried what you proposed. I changed the configuration file and restarted the Metricbeat service. The connection error is now gone, but I have a new one related to the sql module:

2020-10-21T10:12:12.594-0400 WARN [elasticsearch] elasticsearch/client.go:407 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbfdc2e4eab553298, ext:11136227801, loc:(*time.Location)(0x635de00)}, Meta:null, Fields:{"agent":{"ephemeral_id":"e4158c5c-7e42-4fda-b8ad-6680b39ff525","hostname":"DL86GCR5","id":"4c7ef22a-9c71-4951-b0bd-c5509042c6cc","name":"DL86GCR5","type":"metricbeat","version":"7.9.2"},"ecs":{"version":"1.5.0"},"event":{"dataset":"sql.query","duration":56037300,"module":"sql"},"host":{"architecture":"x86_64","hostname":"DL86GCR5","id":"64c7d252-9b2b-401d-9213-6f3a098cf4d3","ip":["192.168.1.9","fe80::202e:6406:fac:fd03","169.254.253.3","fe80::a0a2:829f:1015:e2cc","169.254.226.204","fe80::f4d8:4c3f:57a6:b5a9","169.254.181.169","fe80::ac5e:bb15:27e8:3d0a","169.254.61.10","fe80::7c96:7d34:d340:4803","169.254.72.3"],"mac":["6c:3b:e5:90:aa:94","68:17:29:65:e3:2f","68:17:29:65:e3:30","6a:17:29:65:e3:30","6a:17:29:65:e3:2f","68:17:29:65:e3:33"],"name":"DL86GCR5","os":{"build":"18362.1082","family":"windows","kernel":"10.0.18362.1082 (WinBuild.160101.0800)","name":"Windows 10 Pro for Workstations","platform":"windows","version":"10.0"}},"metricset":{"name":"query","period":10000},"service":{"address":"localhost","type":"sql"},"sql":{"driver":"mssql","metrics":{"numeric":{"":658.000000}},"query":"SELECT COUNT(*) FROM fn_dblog(null,null)"}}, Private:interface {}(nil), TimeSeries:true}, Flags:0x0, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_argument_exception","reason":"field name cannot be an empty string"}}

Error summary

"sql":{"driver":"mssql","metrics":{"numeric":{"":658.000000}},"query":"SELECT COUNT(*) FROM fn_dblog(null,null)"}}, Private:interface {}(nil), TimeSeries:true}, Flags:0x0, Cache:publisher.EventCache{m:common.MapStr(nil)}} (status=400): {"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"illegal_argument_exception","reason":"field name cannot be an empty string"
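The `"numeric":{"":658.000000}` fragment in the rejected event suggests the empty field name comes from the query itself: `COUNT(*)` returns an unnamed column, so Metricbeat emits a metric with an empty key, which Elasticsearch refuses to map. A hedged fix is to give the aggregate an explicit alias (the alias name below is just an example):

```sql
-- Naming the aggregate column should avoid the
-- "field name cannot be an empty string" mapping error.
SELECT COUNT(*) AS log_record_count
FROM fn_dblog(NULL, NULL);
```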

sql.yml (updated)

- module: sql
  metricsets:
    - query
  period: 10s
  #hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"]
  hosts: ["sqlserver://SA:123456aA@localhost"]
  dbname: "BikeStores"
  sslmode: "disable"
  driver: "mssql"
  sql_query: "SELECT COUNT(*) FROM fn_dblog(null,null)"
  sql_response_format: table

Update

I spent some time researching and trying different combinations. It now looks like the SQL query is finally being executed without errors, as I don't see any in the Metricbeat logs. The issue now is that not all the fields returned by the query are included in the index when I check from Discover. I was expecting at least a - to show when a field has no value.

sql.yml

- module: sql
  metricsets:
    - query
  period: 15s
  #hosts: ["user=myuser password=mypassword dbname=mydb sslmode=disable"]
  hosts: ["sqlserver://sa:123456aA@localhost"]
  dbname: "BikeStores"
  sslmode: "disable"
  driver: "mssql"
  sql_query: "
SELECT
 [Current LSN],
 [Transaction ID],
 [Operation],
 [Transaction Name],
 [CONTEXT],
 [AllocUnitName],
 [Page ID],
 [Slot ID],
 [Begin Time],
 [End Time],
 [Number of Locks],
 [Lock Information]
FROM sys.fn_dblog(NULL,NULL)
WHERE Operation IN 
   ('LOP_INSERT_ROWS','LOP_MODIFY_ROW',
    'LOP_DELETE_ROWS','LOP_BEGIN_XACT','LOP_COMMIT_XACT')"
  sql_response_format: table

Discover (Kibana)

Original query in MSSQL server

This is by design. When a field is null, it is omitted from that event. This saves space in Elasticsearch and avoids having to "guess" the field type without maintaining state. You don't want to store a - in a timestamp field or a null in a keyword field, which might skew your analytics.
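If you do want every event to carry a field regardless, one option is to replace the NULLs in the query itself rather than in Elasticsearch. A minimal sketch, assuming the columns you coalesce are string-like (don't put a `-` into a date or numeric field):

```sql
-- Sketch: substitute a placeholder for NULLs in text columns so the
-- fields are present in every emitted event.
SELECT
  [Current LSN],
  [Operation],
  COALESCE([Transaction Name], '-') AS [Transaction Name],
  COALESCE([AllocUnitName], '-')    AS [AllocUnitName]
FROM sys.fn_dblog(NULL, NULL);
```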

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.