MySQL Filebeat module [mysql.error]: no results found

Hi,

I'm using Filebeat to send MySQL data to Elasticsearch, but I noticed there are no results for mysql.error data in Elasticsearch even though mysql.slowlog data is coming in correctly. I checked the data using Kibana Discover.

I'm using Filebeat 7.4 and Elasticsearch 7.4.

Filebeat module config:

# cat /etc/filebeat/modules.d/mysql.yml
# Module: mysql
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.4/filebeat-module-mysql.html

- module: mysql
  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/mariadb/mariadb.log"]

  # Slow logs
  slowlog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/mariadb/mariadb-slow.log"]

I have checked the Filebeat stdout using the command filebeat -e -d "*" ... no errors found. The stdout shows that both the mysql.error and mysql.slowlog data have been posted to Elasticsearch, but the mysql.error data still does not appear in Elasticsearch. The only data that appears in Elasticsearch is mysql.slowlog.
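For reference, one way to spot those events in the debug output is to filter it for the dataset name, e.g.:

# filebeat -e -d "*" 2>&1 | grep 'mysql.error'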

filebeat -e -d "*" output:

2019-10-14T02:19:23.785+0800    DEBUG   [modules]       fileset/pipelines.go:67 Required processors: []
2019-10-14T02:19:23.785+0800    DEBUG   [elasticsearch] elasticsearch/client.go:761     GET http://localhost:9200/_ingest/pipeline/filebeat-7.4.0-mysql-error-pipeline  <nil>
2019-10-14T02:19:23.786+0800    DEBUG   [modules]       fileset/pipelines.go:120        Pipeline filebeat-7.4.0-mysql-error-pipeline already loaded
2019-10-14T02:19:23.786+0800    DEBUG   [modules]       fileset/pipelines.go:67 Required processors: []
2019-10-14T02:19:23.786+0800    DEBUG   [elasticsearch] elasticsearch/client.go:761     GET http://localhost:9200/_ingest/pipeline/filebeat-7.4.0-mysql-slowlog-pipeline  <nil>
2019-10-14T02:19:23.788+0800    DEBUG   [modules]       fileset/pipelines.go:120        Pipeline filebeat-7.4.0-mysql-slowlog-pipeline already loaded
2019-10-14T02:19:23.788+0800    INFO    input/input.go:114      Starting input of type: log; ID: 1124530994825458434
2019-10-14T02:19:23.788+0800    INFO    input/input.go:114      Starting input of type: log; ID: 7616194281951100578
2019-10-14T02:19:23.788+0800    DEBUG   [input] log/input.go:191        Start next scan
2019-10-14T02:19:23.788+0800    DEBUG   [input] log/input.go:191        Start next scan
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:421        Check file for harvesting: /var/log/mariadb/mariadb.log
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:511        Update existing file for harvesting: /var/log/mariadb/mariadb.log, offset: 411223
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:520        Resuming harvesting of file: /var/log/mariadb/mariadb.log, offset: 411223, new size: 412093
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:421        Check file for harvesting: /var/log/mariadb/mariadb-slow.log
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:511        Update existing file for harvesting: /var/log/mariadb/mariadb-slow.log, offset: 4600
2019-10-14T02:19:23.789+0800    DEBUG   [input] log/input.go:520        Resuming harvesting of file: /var/log/mariadb/mariadb-slow.log, offset: 4600, new size: 4861
2019-10-14T02:19:23.789+0800    DEBUG   [harvester]     log/harvester.go:489    Set previous offset for file: /var/log/mariadb/mariadb.log. Offset: 411223
2019-10-14T02:19:23.789+0800    DEBUG   [harvester]     log/harvester.go:480    Setting offset for file: /var/log/mariadb/mariadb.log. Offset: 411223
2019-10-14T02:19:23.789+0800    DEBUG   [harvester]     log/harvester.go:182    Harvester setup successful. Line terminator: 1
2019-10-14T02:19:23.790+0800    DEBUG   [publisher]     pipeline/client.go:220  Pipeline client receives callback 'onFilteredOut' for event: {Timestamp:0001-01-01 00:00:00 +0000 UTC Meta:null Fields:null Private:{Id: Finished:false Fileinfo:0xc0007564e0 Source:/var/log/mariadb/mariadb.log Offset:411223 Timestamp:2019-10-14 02:19:23.78908156 +0800 +08 m=+3.366934399 TTL:-1ns Type:log Meta:map[] FileStateOS:415273452-64768} TimeSeries:false}    

I also tried to post the mysql.error data manually to Elasticsearch using curl, and found that the data comes through correctly:

# curl -u elastic:elasticpwd -H 'Content-Type: application/json' -X POST "localhost:9200/filebeat-7.4.0-2019.10.11-000001/_doc/" -d '{"@timestamp":"2019-10-13T00:18:35.510Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"input":{"type":"log"},"message":"191013  8:18:32\t     8 Query\tSHOW /*!50002 GLOBAL */ STATUS","event":{"dataset":"mysql.error","module":"mysql"},"fileset":{"name":"error"},"agent":{"type":"filebeat","ephemeral_id":"16835ee0-9321-476a-802b-c61f0640c370","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0"},"ecs":{"version":"1.1.0"},"log":{"file":{"path":"/var/log/mariadb/mariadb.log"},"offset":11778},"service":{"type":"mysql"},"host":{"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","name":"nttdisbdb02","containerized":false,"hostname":"nttdisbdb02","architecture":"x86_64","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"}}}'
{"_index":"filebeat-7.4.0-2019.10.11-000001","_type":"_doc","_id":"BDTTyW0BJGi3-aJ4nxKF","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":7348502,"_primary_term":1}

When I check that mysql.error document in Elasticsearch, the data comes out normally, as expected:

# curl -u elastic:elasticpwd -X GET "nttdisblb01:9200/filebeat-7.4.0-2019.10.11-000001/_source/BDTTyW0BJGi3-aJ4nxKF"
{"@timestamp":"2019-10-13T00:18:35.510Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"input":{"type":"log"},"message":"191013  8:18:32\t     8 Query\tSHOW /*!50002 GLOBAL */ STATUS","event":{"dataset":"mysql.error","module":"mysql"},"fileset":{"name":"error"},"agent":{"type":"filebeat","ephemeral_id":"16835ee0-9321-476a-802b-c61f0640c370","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0"},"ecs":{"version":"1.1.0"},"log":{"file":{"path":"/var/log/mariadb/mariadb.log"},"offset":11778},"service":{"type":"mysql"},"host":{"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","name":"nttdisbdb02","containerized":false,"hostname":"nttdisbdb02","architecture":"x86_64","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"}}}

Hi @Admiraludon, welcome to the Elastic community forums!

I guess the first thing to check would be whether there are any error logs in /var/log/mariadb/mariadb.log. I'm assuming there are but I figured it doesn't hurt to ask. :slight_smile:

For debugging this issue and narrowing down the problem a bit more, could you temporarily make the following changes?

  1. In your modules.d/mysql.yml could you temporarily set enabled: false for the slowlog fileset? This will let us focus exclusively on mysql.error logs.

  2. If you have any other modules enabled, please disable them as well.

  3. In your filebeat.yml, could you temporarily set enabled: false for output.elasticsearch and enable output.console instead? This will cause Filebeat to print the events to the console, so we can inspect them to see whether any mysql.error events are being generated (a minimal sketch of the relevant filebeat.yml changes is shown after this list).
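A minimal sketch of those filebeat.yml changes, assuming the default Elasticsearch output pointing at localhost:9200 (adjust to your existing settings):

# Temporarily disable the Elasticsearch output...
output.elasticsearch:
  hosts: ["localhost:9200"]
  enabled: false

# ...and print events to the console instead
output.console:
  pretty: true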

Then please re-run filebeat -e.

Thanks,

Shaunak

Hi @shaunak, :slight_smile:

As you guessed, I have checked /var/log/mariadb/mariadb.log. I'm pretty sure that Filebeat has read data from both mariadb.log and mariadb-slow.log. In fact, the curl body (in the curl sample I posted before) was taken from the filebeat -e -d "*" output.

I have done as you suggested; below is the Filebeat output:

2019-10-15T09:51:04.282+0800	INFO	instance/beat.go:607	Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2019-10-15T09:51:04.282+0800	INFO	instance/beat.go:615	Beat ID: 97520869-259a-4d60-bb46-5eecf412f37d
2019-10-15T09:51:04.290+0800	INFO	[seccomp]	seccomp/seccomp.go:124	Syscall filter successfully installed
2019-10-15T09:51:04.290+0800	INFO	[beat]	instance/beat.go:903	Beat info	{"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "97520869-259a-4d60-bb46-5eecf412f37d"}}}
2019-10-15T09:51:04.290+0800	INFO	[beat]	instance/beat.go:912	Build info	{"system_info": {"build": {"commit": "f940c36884d3749901a9c99bea5463a6030cdd9c", "libbeat": "7.4.0", "time": "2019-09-27T07:45:44.000Z", "version": "7.4.0"}}}
2019-10-15T09:51:04.290+0800	INFO	[beat]	instance/beat.go:915	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":32,"version":"go1.12.9"}}}
2019-10-15T09:51:04.297+0800	INFO	[beat]	instance/beat.go:919	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-05-30T11:44:24+08:00","containerized":false,"name":"nttdisbdb02","ip":["127.0.0.1/8","::1/128","11.0.0.4/24","fe80::ef9:50d9:4bb:c8c/64","192.168.122.1/24","172.17.0.1/16","10.244.4.0/32","fe80::cce5:afff:feba:3de8/64"],"kernel_version":"3.10.0-862.3.2.el7.x86_64","mac":["80:18:44:f1:7c:84","80:18:44:f1:7c:85","80:18:44:f1:7c:86","80:18:44:f1:7c:87","52:54:00:8b:f7:da","52:54:00:8b:f7:da","02:42:48:7a:f4:dd","ce:e5:af:ba:3d:e8"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"+08","timezone_offset_sec":28800,"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3"}}}
2019-10-15T09:51:04.299+0800	INFO	[beat]	instance/beat.go:948	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/home/udocker", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 123289, "ppid": 96343, "seccomp": {"mode":"filter"}, "start_time": "2019-10-15T09:51:04.020+0800"}}}
2019-10-15T09:51:04.299+0800	INFO	instance/beat.go:292	Setup Beat: filebeat; Version: 7.4.0
2019-10-15T09:51:04.300+0800	INFO	[publisher]	pipeline/module.go:97	Beat name: nttdisbdb02
2019-10-15T09:51:04.301+0800	WARN	beater/filebeat.go:152	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-10-15T09:51:04.302+0800	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2019-10-15T09:51:04.302+0800	INFO	instance/beat.go:422	filebeat start running.
2019-10-15T09:51:04.303+0800	INFO	registrar/registrar.go:145	Loading registrar data from /var/lib/filebeat/registry/filebeat/data.json
2019-10-15T09:51:04.304+0800	INFO	registrar/registrar.go:152	States Loaded from registrar: 56
2019-10-15T09:51:04.304+0800	WARN	beater/filebeat.go:368	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-10-15T09:51:04.305+0800	INFO	crawler/crawler.go:72	Loading Inputs: 1
2019-10-15T09:51:04.308+0800	INFO	log/input.go:152	Configured paths: [/var/log/mariadb/mariadb.log]
2019-10-15T09:51:04.309+0800	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 0
2019-10-15T09:51:04.309+0800	INFO	cfgfile/reload.go:171	Config reloader started
2019-10-15T09:51:04.313+0800	INFO	log/input.go:152	Configured paths: [/var/log/mariadb/mariadb.log]
2019-10-15T09:51:04.313+0800	INFO	input/input.go:114	Starting input of type: log; ID: 1124530994825458434
2019-10-15T09:51:04.313+0800	INFO	cfgfile/reload.go:226	Loading of config files completed.
2019-10-15T09:51:04.315+0800	INFO	log/harvester.go:251	Harvester started for file: /var/log/mariadb/mariadb.log
2019-10-15T09:51:07.292+0800	INFO	add_cloud_metadata/add_cloud_metadata.go:87	add_cloud_metadata: hosting provider type not detected.

Continuing from the last log:

{"@timestamp":"2019-10-15T01:46:59.329Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-elasticsearch-gc-pipeline"},"message":"[2019-10-15T01:46:42.908+0000][44923][safepoint      ] Safepoint \"RevokeBias\", Time since last: 2105573 ns, Reaching safepoint: 684073 ns, At safepoint: 72230 ns, Total: 756303 ns","agent":{"hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0","type":"filebeat","ephemeral_id":"aa855b7a-cb54-4d07-9987-0e2361c34fbc"},"host":{"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false,"hostname":"nttdisbdb02","name":"nttdisbdb02","architecture":"x86_64","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"}},"ecs":{"version":"1.1.0"},"log":{"offset":43826113,"file":{"path":"/var/log/elasticsearch/gc.log.16"}},"fileset":{"name":"gc"},"service":{"type":"elasticsearch"},"input":{"type":"log"},"event":{"dataset":"elasticsearch.gc","module":"elasticsearch"}}
{"@timestamp":"2019-10-15T01:46:59.331Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"event":{"module":"mysql","dataset":"mysql.error"},"fileset":{"name":"error"},"agent":{"ephemeral_id":"aa855b7a-cb54-4d07-9987-0e2361c34fbc","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0","type":"filebeat"},"message":"191015  9:46:45\t    13 Query\tSHOW /*!50002 GLOBAL */ STATUS","log":{"offset":1092298,"file":{"path":"/var/log/mariadb/mariadb.log"}},"service":{"type":"mysql"},"input":{"type":"log"},"ecs":{"version":"1.1.0"},"host":{"hostname":"nttdisbdb02","architecture":"x86_64","name":"nttdisbdb02","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"},"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false}}
{"@timestamp":"2019-10-15T01:46:59.329Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-elasticsearch-gc-pipeline"},"message":"[2019-10-15T01:46:43.909+0000][44923][safepoint      ] Safepoint \"Cleanup\", Time since last: 1000239008 ns, Reaching safepoint: 847218 ns, At safepoint: 48527 ns, Total: 895745 ns","service":{"type":"elasticsearch"},"input":{"type":"log"},"event":{"dataset":"elasticsearch.gc","module":"elasticsearch"},"ecs":{"version":"1.1.0"},"log":{"offset":43826293,"file":{"path":"/var/log/elasticsearch/gc.log.16"}},"fileset":{"name":"gc"},"host":{"architecture":"x86_64","os":{"platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core"},"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false,"hostname":"nttdisbdb02","name":"nttdisbdb02"},"agent":{"ephemeral_id":"aa855b7a-cb54-4d07-9987-0e2361c34fbc","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0","type":"filebeat"}}
{"@timestamp":"2019-10-15T01:46:59.329Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-elasticsearch-gc-pipeline"},"message":"[2019-10-15T01:46:44.474+0000][44923][safepoint      ] Safepoint \"RevokeBias\", Time since last: 564554853 ns, Reaching safepoint: 687039 ns, At safepoint: 73769 ns, Total: 760808 ns","fileset":{"name":"gc"},"service":{"type":"elasticsearch"},"input":{"type":"log"},"event":{"module":"elasticsearch","dataset":"elasticsearch.gc"},"agent":{"type":"filebeat","ephemeral_id":"aa855b7a-cb54-4d07-9987-0e2361c34fbc","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0"},"log":{"file":{"path":"/var/log/elasticsearch/gc.log.16"},"offset":43826473},"ecs":{"version":"1.1.0"},"host":{"name":"nttdisbdb02","hostname":"nttdisbdb02","architecture":"x86_64","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"},"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false}}

...
...
...

{"@timestamp":"2019-10-15T02:02:43.598Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"fileset":{"name":"error"},"host":{"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false,"name":"nttdisbdb02","hostname":"nttdisbdb02","architecture":"x86_64","os":{"platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core"}},"event":{"module":"mysql","dataset":"mysql.error"},"ecs":{"version":"1.1.0"},"agent":{"version":"7.4.0","type":"filebeat","ephemeral_id":"63c540f6-fc7f-4395-a773-191167472208","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d"},"log":{"offset":1099471,"file":{"path":"/var/log/mariadb/mariadb.log"}},"message":"\t\t    35 Query\tSIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'hi from the error log'","service":{"type":"mysql"},"input":{"type":"log"}}
{"@timestamp":"2019-10-15T02:02:43.598Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"ecs":{"version":"1.1.0"},"host":{"containerized":false,"hostname":"nttdisbdb02","architecture":"x86_64","os":{"kernel":"3.10.0-862.3.2.el7.x86_64","codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux"},"name":"nttdisbdb02","id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3"},"log":{"offset":1099553,"file":{"path":"/var/log/mariadb/mariadb.log"}},"fileset":{"name":"error"},"input":{"type":"log"},"agent":{"ephemeral_id":"63c540f6-fc7f-4395-a773-191167472208","hostname":"nttdisbdb02","id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0","type":"filebeat"},"message":"2019-10-15 10:02:40 35 [Note] Event Scheduler: Dropping test.test_error_log","service":{"type":"mysql"},"event":{"module":"mysql","dataset":"mysql.error"}}
{"@timestamp":"2019-10-15T02:02:43.598Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.4.0","pipeline":"filebeat-7.4.0-mysql-error-pipeline"},"message":"2019-10-15 10:02:40 35 [ERROR] Event Scheduler: [root@localhost][test.test_error_log] hi from the error log","service":{"type":"mysql"},"input":{"type":"log"},"event":{"module":"mysql","dataset":"mysql.error"},"fileset":{"name":"error"},"host":{"name":"nttdisbdb02","architecture":"x86_64","os":{"codename":"Core","platform":"centos","version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-862.3.2.el7.x86_64"},"id":"9d4ba8aa77aa49ccba4f96e8d89eb7c3","containerized":false,"hostname":"nttdisbdb02"},"agent":{"id":"97520869-259a-4d60-bb46-5eecf412f37d","version":"7.4.0","type":"filebeat","ephemeral_id":"63c540f6-fc7f-4395-a773-191167472208","hostname":"nttdisbdb02"},"log":{"offset":1099629,"file":{"path":"/var/log/mariadb/mariadb.log"}},"ecs":{"version":"1.1.0"}}

Thanks @Admiraludon. I think I see the problem. Your MySQL error log messages look like this:

191015  9:46:45...

Note that there are two spaces between the initial integer (which is supposed to signify the date in yymmdd format) and the time.

However, the mysql.error fileset's ingest pipeline is expecting one space between these fields: https://github.com/elastic/beats/blob/master/filebeat/module/mysql/error/ingest/pipeline.json#L13
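As a rough illustration (not the actual pipeline content, and the field names below are made up), a grok pattern that accepts any run of whitespace instead of a single literal space does match your line. You can try it against one of your events with the ingest simulate API (add authentication if needed):

# curl -H 'Content-Type: application/json' -X POST "localhost:9200/_ingest/pipeline/_simulate?pretty" -d '{"pipeline":{"processors":[{"grok":{"field":"message","patterns":["%{NUMBER:example_date}\\s+%{TIME:example_time}\\s+%{GREEDYDATA:example_msg}"]}}]},"docs":[{"_source":{"message":"191015  9:46:45\t    13 Query\tSHOW /*!50002 GLOBAL */ STATUS"}}]}'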

I think we need to make the ingest pipeline be a bit more lenient by accepting any number of whitespace characters. If you have a GitHub account would you mind filing an issue for this at https://github.com/elastic/beats/issues/new/choose? That way you can participate in any discussion and follow along on the issue's progress. If not, let me know and I'll make the issue for you.

cc: @jsoriano

Shaunak


Dear @shaunak,

I have filed an issue as per your instructions.
Please refer to: https://github.com/elastic/beats/issues/14076

Thank you very much for your help :slight_smile:

Hi @Admiraludon,

Thanks for reporting this! Could you share some log lines of your MySQL/MariaDB error logs so we can use them as test cases?

Dear @jsoriano,

Please find the MariaDB error.log sample below :slight_smile:

# cat /var/log/mariadb/mariadb.log
2019-10-16 17:24:15 0 [Note] InnoDB: Using Linux native AIO
2019-10-16 17:24:15 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2019-10-16 17:24:15 0 [Note] InnoDB: Uses event mutexes
2019-10-16 17:24:15 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2019-10-16 17:24:15 0 [Note] InnoDB: Number of pools: 1
2019-10-16 17:24:15 0 [Note] InnoDB: Using SSE2 crc32 instructions
2019-10-16 17:24:15 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2019-10-16 17:24:15 0 [Note] InnoDB: Completed initialization of buffer pool
2019-10-16 17:24:15 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2019-10-16 17:24:15 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2019-10-16 17:24:15 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2019-10-16 17:24:15 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2019-10-16 17:24:15 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2019-10-16 17:24:15 0 [Note] InnoDB: Waiting for purge to start
2019-10-16 17:24:15 0 [Note] InnoDB: 10.4.8 started; log sequence number 1631101; transaction id 791
2019-10-16 17:24:15 0 [Note] InnoDB: Loading buffer pool(s) from /data/mysqldata/mysql/ib_buffer_pool
2019-10-16 17:24:15 0 [Note] Plugin 'FEEDBACK' is disabled.
2019-10-16 17:24:15 0 [Note] InnoDB: Buffer pool(s) load completed at 191016 17:24:15
2019-10-16 17:24:15 0 [Note] Server socket created on IP: '::'.
2019-10-16 17:24:15 0 [Note] Reading of all Master_info entries succeeded
2019-10-16 17:24:15 0 [Note] Added new Master_info '' to hash table
2019-10-16 17:24:15 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.4.8-MariaDB-log'  socket: '/data/mysqldata/mysql.sock'  port: 3306  MariaDB Server
2019-10-16 17:25:43 11 [Note] Event Scheduler: scheduler thread started with id 11
2019-10-16 17:25:43 11 [Note] Event Scheduler: Last execution of test.test_error_log. Dropping.
2019-10-16 17:25:43 12 [Note] Event Scheduler: Dropping test.test_error_log
2019-10-16 17:25:43 12 [ERROR] Event Scheduler: [root@localhost][test.test_error_log] hi from the error log
2019-10-16 17:25:43 12 [Note] Event Scheduler: [root@localhost][test.test_error_log] At line 1 in test.test_error_log

Below is the MySQL query I used to generate a MySQL error log entry
(I tested both the slow log and the error log, but only the slowlog module is working):

MariaDB [test]> SET GLOBAL event_scheduler = ON; use test; CREATE EVENT test_error_log ON SCHEDULE AT NOW() ON COMPLETION NOT PRESERVE DO SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'hi from the error log';select sleep(10);

Hi @jsoriano,

Any updates? Are you seeing the same problem I described?

Hi @Admiraludon,

I have an update. I think that your log files are fine and that they should actually be collected. Even the line with the different pattern should be captured, because we have a fallback pattern that captures any line.

Please check if this query returns any document:

GET /filebeat-*/_search?q=event.dataset:mysql.error

I think that the problem may be in the date parsing. Your logs are probably there, but not at the time that you expect. If the previous query returns any documents, please check their timestamps, and then select that time range in Kibana Discover. You should be able to see your documents there.
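For example, with curl (adjust the host and credentials to your setup), sorting by @timestamp makes it easy to see when the most recent mysql.error document was indexed:

# curl -u elastic:elasticpwd -X GET "localhost:9200/filebeat-*/_search?q=event.dataset:mysql.error&size=1&sort=@timestamp:desc&pretty"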

I have prepared some fixes to improve timestamp handling in this pull request: https://github.com/elastic/beats/pull/14130

Dear @jsoriano ,

Yep, you were right... your fixes do the trick. Thanks! :smile::smile:
