Filebeat mysql module slowlog - error.message: Provided Grok expressions do not match field value

Hi, all!

I want to use the Filebeat MySQL module to collect slow logs into Elasticsearch and view them in a Kibana dashboard.

I would like to resolve this without using Logstash, only the Filebeat module.

Can anyone help me create a grok pattern for this case?

My versions:

Elasticsearch version: 6.3.0
MariaDB Versions: 5.5.56-MariaDB and 10.1.21-MariaDB
Filebeat version: 6.3.0
Kibana version: 6.3.0

This is my slow log format:

# Time: 180613 11:04:36
# User@Host: root[root] @ localhost [ ]
# Thread_id: 5  Schema:   QC_hit: No
# Query_time: 2.000652  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0
SET timestamp=1528898676;
select sleep(2);

Error Message:

error.message Provided Grok expressions do not match field value: [# User@Host: root[root] @ localhost []\n# Thread_id: 313237 Schema: QC_hit: No\n# Query_time: 35.000261 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0\n# Rows_affected: 0\nSET timestamp=1528825570;\nSELECT SLEEP(35);]

Default "pipeline.json" file

cat /usr/share/filebeat/module/mysql/slowlog/ingest/pipeline.json

{
  "description": "Pipeline for parsing MySQL slow logs.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns":[
        "^# User@Host: %{USER:mysql.slowlog.user}(\\[[^\\]]+\\])? @ (%{HOSTNAME:mysql.slowlog.host})? \\[(%{IP:mysql.slowlog.ip})?\\](\\s*Id:\\s* %{NUMBER:mysql.slowlog.id})?\n# Query_time: %{NUMBER:mysql.slowlog.query_time.sec}\\s* Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}\\s* Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}\\s* Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}\n(SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n)?%{GREEDYMULTILINE:mysql.slowlog.query}"
        ],
      "pattern_definitions" : {
        "GREEDYMULTILINE" : "(.|\n)*"
      },
      "ignore_missing": true
    }
  }, {
    "remove":{
      "field": "message"
    }
  }, {
    "date": {
      "field": "mysql.slowlog.timestamp",
      "target_field": "@timestamp",
      "formats": ["UNIX"],
      "ignore_failure": true
    }
  }, {
    "gsub": {
      "field": "mysql.slowlog.query",
       "pattern": "\n# Time: [0-9]+ [0-9][0-9]:[0-9][0-9]:[0-9][0-9](\\.[0-9]+)?$",
       "replacement": "",
       "ignore_failure": true
    }
  }],
  "on_failure" : [{
    "set" : {
      "field" : "error.message",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  }]
}

Default "slowlog.yml" file

cat /usr/share/filebeat/module/mysql/slowlog/config/slowlog.yml

type: log
paths:
{{ range $i, $path := .paths }}
 - {{$path}}
{{ end }}
exclude_files: ['.gz$']
multiline:
  pattern: '^# User@Host: '
  negate: true
  match: after
exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*']   # Exclude the header
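
For context: with this multiline setting, every line that does not start with "# User@Host: " is appended to the previous event, so each slow query arrives in Elasticsearch as a single message field. For my first sample query above, the assembled event looks roughly like this (newlines shown as \n, as they appear in the error message):

# User@Host: root[root] @ localhost [ ]\n# Thread_id: 5  Schema:   QC_hit: No\n# Query_time: 2.000652  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0\nSET timestamp=1528898676;\nselect sleep(2);

Note that the "# Time:" line of the next query also gets glued onto the end of the previous event, which is what the gsub processor in the pipeline strips off.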

Thanks!

Hi @Rodrigo_Floriano,

You can use Filebeat with just Elasticsearch and without Logstash; Filebeat module pipelines run on Elasticsearch ingest nodes.
Regarding your problem, we have received reports about incompatibilities with recent MySQL/MariaDB versions, like this one: https://github.com/elastic/beats/issues/6665. Your problem is probably related to these issues.
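
For example, a minimal filebeat.yml for this setup could look like the sketch below (the slow log path is an assumption, adjust var.paths to your installation):

filebeat.modules:
  - module: mysql
    slowlog:
      enabled: true
      var.paths: ["/var/log/mysql/mysql-slow.log"]

output.elasticsearch:
  hosts: ["localhost:9200"]

The module then loads its ingest pipeline into Elasticsearch and the parsing happens on the ingest node, no Logstash involved.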

Thanks for your reply @jsoriano!

I read that issue before opening mine, but the log format is different; I don't use Percona Server.

I identified 3 different formats for MySQL slow logs, and I believe the grok pattern needs adjusting for these log types. In my case the default pattern expects the "# Query_time:" line right after the "# User@Host:" line, but MariaDB writes a "# Thread_id: ... Schema: ... QC_hit:" line in between. I tried many combinations without success :confused:

Default output for 5.5.56-MariaDB and 10.1.21-MariaDB (my case):

# Time: 180613 11:04:36
# User@Host: root[root] @ localhost [ ]
# Thread_id: 5  Schema:   QC_hit: No
# Query_time: 2.000652  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0
SET timestamp=1528898676;
select sleep(2);

Another output format that I found for other versions:

# Time: 171011 11:55:48
# User@Host: root[root] @  [10.254.254.91]  Id: 54014516
# Query_time: 43.426563  Lock_time: 0.000093 Rows_sent: 63264  Rows_examined: 63264
SET timestamp=1507694148;
select sleep(2);

Default Percona Server log:

# Time: 2018-03-26T08:03:59.598547Z
# User@Host: root[root] @ localhost []  Id: 37045034
# Schema:   Last_errno: 0  Killed: 0
# Query_time: 10.000204  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0  Rows_affected: 0
# Bytes_sent: 57  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# QC_Hit: No  Full_scan: No  Full_join: No  Tmp_table: No  Tmp_table_on_disk: No
# Filesort: No  Filesort_on_disk: No  Merge_passes: 0
# No InnoDB statistics available for this query
# Log_slow_rate_type: session  Log_slow_rate_limit: 100
SET timestamp=1522051439;
select sleep(10);

Anyone? Any idea how to parse the log?

I tested many patterns without success... still testing.

My last failed pattern attempt:

"^# User@Host: %{USER:mysql.slowlog.user}\\[%{USER:mysql.slowlog.current_user}\\] @ %{HOSTNAME:mysql.slowlog.host}? \\[(%{IP:mysql.slowlog.ip})?/n# Thread_id: %{NUMBER:mysql.slowlog.thread_id} /s*Schema: [A-z0-9]? /s*QC_hit: %{WORD:mysql.slowlog.qc_hit}/n# Query_time: %{NUMBER:mysql.slowlog.query_time} /s*Lock_time: %{NUMBER:mysql.slowlog.lock_time} /s*Rows_sent: %{NUMBER:mysql.slowlog.rows_sent} /s*Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}/nSET timestamp=%{NUMBER:timestamp};%{GREEDYMULTILINE:mysql.slowlog.query}"

I was able to parse it in the Grok Debugger, but I still get an error in my pipeline. :confused:

Grok Debugger:

https://grokdebug.herokuapp.com/

LOG

# User@Host: root[root] @ localhost [ ]\n# Thread_id: 5 Schema: QC_hit: No\n# Query_time: 2.000652 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0\nSET timestamp=1529175026;\nselect sleep(2);

Pattern

^# User@Host:%{SPACE}%{USER:user}\[%{USER:current_user}\]%{SPACE}@%{SPACE}%{HOSTNAME:host}?%{SPACE}\[(%{IP:ip})?..+\#%{SPACE}Thread_id:%{SPACE}%{NUMBER:thread_id}%{SPACE}%{SPACE}Schema:%{SPACE}%{WORD:schema}?%{SPACE}%{SPACE}QC_hit:%{SPACE}%{WORD:qc_hit}..+\# Query_time: %{NUMBER:query_time}%{SPACE}%{SPACE}Lock_time: %{NUMBER:lock_time}%{SPACE}%{SPACE}Rows_sent: %{NUMBER:rows_sent}%{SPACE}%{SPACE}Rows_examined: %{NUMBER:rows_examined}..+SET timestamp=%{NUMBER:timestamp};..%{GREEDYDATA:query}

Result:

{
  "user": [
    [
      "root"
    ]
  ],
  "current_user": [
    [
      "root"
    ]
  ],
  "host": [
    [
      "localhost"
    ]
  ],
  "ip": [
    [
      null
    ]
  ],
  "thread_id": [
    [
      "5"
    ]
  ],
  "schema": [
    [
      null
    ]
  ],
  "qc_hit": [
    [
      "No"
    ]
  ],
  "query_time": [
    [
      "2.000652"
    ]
  ],
  "lock_time": [
    [
      "0.000000"
    ]
  ],
  "rows_sent": [
    [
      "1"
    ]
  ],
  "rows_examined": [
    [
      "0"
    ]
  ],
  "timestamp": [
    [
      "1529175026"
    ]
  ],
  "query": [
    [
      "select sleep(2);"
    ]
  ]
}
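
The catch when moving a working expression from the web debugger into pipeline.json is that every backslash has to be escaped a second time for JSON (\[ becomes \\[). To test a pattern through the real ingest machinery instead of the debugger, the Elasticsearch Simulate Pipeline API can be used; a minimal sketch, with the pattern shortened to the first log line only (substitute the full expression and real field names):

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [{
      "grok": {
        "field": "message",
        "patterns": [
          "^# User@Host: %{USER:user}\\[%{USER:current_user}\\] @ %{HOSTNAME:host}? \\[%{IP:ip}?\\s*\\]"
        ]
      }
    }]
  },
  "docs": [{
    "_source": {
      "message": "# User@Host: root[root] @ localhost [ ]\n# Thread_id: 5  Schema:   QC_hit: No\n# Query_time: 2.000652  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 0\nSET timestamp=1529175026;\nselect sleep(2);"
    }
  }]
}

If the pattern matches, the response shows the extracted fields; if not, it returns the same "Provided Grok expressions do not match field value" error as the pipeline.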

I finally solved it! :sunglasses: :clap::clap::raised_hands:

I wrote an article with the solution on my blog (in Portuguese):

https://churrops.io/2018/06/18/elastic-modulo-mysql-do-filebeat-para-capturar-slowlogs-slow-queries/
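
For readers who do not read Portuguese, the essence of the fix is teaching the pattern about the extra "# Thread_id: ... Schema: ... QC_hit:" line that MariaDB writes. A sketch of such a pattern as it would appear in pipeline.json (double-escaped for JSON, keeping the GREEDYMULTILINE definition from the default pipeline; see the blog post and the pull request below for the authoritative version):

"^# User@Host: %{USER:mysql.slowlog.user}\\[%{USER:mysql.slowlog.current_user}\\] @ %{HOSTNAME:mysql.slowlog.host}? \\[%{IP:mysql.slowlog.ip}?\\s*\\]\n# Thread_id: %{NUMBER:mysql.slowlog.thread_id}\\s+Schema: %{WORD:mysql.slowlog.schema}?\\s+QC_hit: %{WORD:mysql.slowlog.qc_hit}\n# Query_time: %{NUMBER:mysql.slowlog.query_time.sec}\\s+Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}\\s+Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}\\s+Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}\n(SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n)?%{GREEDYMULTILINE:mysql.slowlog.query}"

As written this matches only the MariaDB format above; to also cover the other two formats, the MariaDB-specific line has to be made optional.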

Thanks!

@Rodrigo_Floriano great! Your post looks nice :slightly_smiling_face:

Would you like to contribute your grok expressions to the current mysql module so we can support these recent MariaDB versions?


Sure!!! ...

I would be very happy with that!

Hi,

I ran into the same issue on a Docker setup with Filebeat 6.3.0, Elasticsearch 6.3.0 and MySQL 5.7.22.
I got slow logs like this:

# Time: 2018-06-26T09:26:23.578890Z
# User@Host: toto[toto] @  [172.23.0.9]  Id:     5
# Query_time: 0.530174  Lock_time: 0.000000 Rows_sent: 0  Rows_examined: 0
use foo_test;
SET timestamp=1530005183;
CREATE TABLE bar (baz VARCHAR(255) NOT NULL, PRIMARY KEY(baz)) DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci ENGINE = InnoDB;

I solved it like this (inspired by @Rodrigo_Floriano):

/usr/share/filebeat/module/mysql/slowlog/config/slowlog.yml

type: log
paths:
{{ range $i, $path := .paths }}
 - {{$path}}
{{ end }}
exclude_files: ['.gz$']
multiline:
  pattern: '^# Time:'
  negate: true
  match: after
exclude_lines: ['^[\/\w\.]+, Version: .* started with:.*'] # Exclude the header

/usr/share/filebeat/module/mysql/slowlog/ingest/pipeline.json

{
  "description": "Pipeline for parsing MySQL slow logs.",
  "processors": [{
    "grok": {
      "field": "message",
      "patterns":[
        "^# Time: %{TIMESTAMP_ISO8601:mysql.slowlog.time}\n# User@Host: %{USER:mysql.slowlog.user}\\[%{USER:mysql.slowlog.current_user}\\] @ %{HOSTNAME:mysql.slowlog.host}? \\[%{IP:mysql.slowlog.ip}?\\]%{SPACE}Id:%{SPACE}%{NUMBER:mysql.slowlog.id}\n# Query_time: %{NUMBER:mysql.slowlog.query_time.sec}%{SPACE}Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}%{SPACE}Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}%{SPACE}Rows_examined: %{NUMBER:mysql.slowlog.rows_examined}\n((use|USE) .*;\n)?SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n%{GREEDYDATA:mysql.slowlog.query}"
      ],
      "pattern_definitions" : {
        "GREEDYMULTILINE" : "(.|\n)*"
      },
      "ignore_missing": false
    }
  }, {
    "remove":{
      "field": "message"
    }
  }, {
    "date": {
      "field": "mysql.slowlog.time",
      "target_field": "@timestamp",
      "formats": ["ISO8601"],
      "ignore_failure": true
    }
  }],
  "on_failure" : [{
    "set" : {
      "field" : "error.message",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  }]
}

The main differences are:

  • an optional "use ...;" line in the grok pattern
  • ISO8601 timestamp
  • . instead of _ in grok pattern named captures

Don't forget to mount these files in your docker-compose.yml:

filebeat:
        image: docker.elastic.co/beats/filebeat:6.3.0
        ...
        volumes:
            - ...
            - ./your/path/to/mysql/slowlog/ingest/pipeline.json:/usr/share/filebeat/module/mysql/slowlog/ingest/pipeline.json:ro
            - ./your/path/to/mysql/slowlog/config/slowlog.yml:/usr/share/filebeat/module/mysql/slowlog/config/slowlog.yml:ro
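
One caveat when editing a module pipeline in place: Filebeat only loads the pipeline into Elasticsearch if it does not already exist, so after changing pipeline.json the stored pipeline has to be deleted (or, in versions that support it, set filebeat.overwrite_pipelines: true in filebeat.yml) and Filebeat restarted. The pipeline ID below assumes the filebeat-<version>-<module>-<fileset>-pipeline naming convention; adjust it to your version:

DELETE _ingest/pipeline/filebeat-6.3.0-mysql-slowlog-pipeline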

@jsoriano, see my pull request: https://github.com/elastic/beats/pull/7422

Thanks!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.