I can't see dashboards for MySQL


I'm a new user of the Elastic Stack.
I've configured Kibana with Elasticsearch, Logstash and Filebeat (on the host).

I have an Nginx and a MariaDB server whose logs I would like to see in Kibana.

I see all the beats from the server in Discover, but I have a problem with the Dashboards:
the dashboard for Nginx works well, but the MySQL one does not.

In "Discover" I see the log entries from MySQL, but only in the message field; there are no mysql.xxxx fields like there are with Nginx.

I've got information that there was a grok parse failure.

This is my ingest pipeline for MySQL:

  {
    "description": "Pipeline for parsing MySQL slow logs.",
    "processors": [{
      "grok": {
        "field": "message",
        "patterns": [
          "^# User@Host: %{USER:mysql.slowlog.user}(\\[[^\\]]+\\])? @ %{HOSTNAME:mysql.slowlog.host} \\[(%{IP:mysql.slowlog.ip})?\\](\\s*Id:\\s* %{NUMBER:mysql.slowlog.id})?\n# Query_time: %{
  NUMBER:mysql.slowlog.query_time.sec}\\s* Lock_time: %{NUMBER:mysql.slowlog.lock_time.sec}\\s* Rows_sent: %{NUMBER:mysql.slowlog.rows_sent}\\s* Rows_examined: %{NUMBER:mysql.slowlog.rows_exa
  mined}\n(SET timestamp=%{NUMBER:mysql.slowlog.timestamp};\n)?%{GREEDYMULTILINE:mysql.slowlog.query}"
        ],
        "pattern_definitions" : {
          "GREEDYMULTILINE" : "(.|\n)*"
        },
        "ignore_missing": true
      }
    }, {
      "remove": {
        "field": "message"
      }
    }, {
      "date": {
        "field": "mysql.slowlog.timestamp",
        "target_field": "@timestamp",
        "formats": ["UNIX"],
        "ignore_failure": true
      }
    }, {
      "gsub": {
        "field": "mysql.slowlog.query",
        "pattern": "\n# Time: [0-9]+ [0-9][0-9]:[0-9][0-9]:[0-9][0-9](\\.[0-9]+)?$",
        "replacement": "",
        "ignore_failure": true
      }
    }],
    "on_failure" : [{
      "set" : {
        "field" : "error.message",
        "value" : "{{ _ingest.on_failure_message }}"
      }
    }]
  }
This is a sample entry from my MySQL slow log:

  # Time: 180309 12:58:07
  # User@Host: user[user] @ HOST [IP]
  # Thread_id: 91244 Schema: database QC_hit: No
  # Query_time: 0.000603 Lock_time: 0.000201 Rows_sent: 1 Rows_examined: 0
  # Rows_affected: 0
  SET timestamp=1520596687;
  SELECT ................................FROM table;

What is the problem with it?

(Andrew Cholakian) #2

Have you tried debugging the pattern with https://grokdebug.herokuapp.com/ ?
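You can also exercise the pipeline directly in Elasticsearch with the Simulate Pipeline API. As a sketch, something along these lines (with the pattern trimmed down to just the first capture, and the message shortened) sent to POST _ingest/pipeline/_simulate shows exactly which processor fails and why:

```json
{
  "pipeline": {
    "description": "Trimmed-down test of the slowlog grok",
    "processors": [{
      "grok": {
        "field": "message",
        "patterns": ["^# User@Host: %{USER:mysql.slowlog.user}"]
      }
    }]
  },
  "docs": [{
    "_source": {
      "message": "# Time: 180309 12:58:07\n# User@Host: user[user] @ HOST [IP]"
    }
  }]
}
```

The response contains either the parsed document or the grok failure message, without touching your real indices.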

I'm not familiar with slowlogs, but if they're greater than one line you'll need to read https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html to learn how to handle them correctly.
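For reference, a minimal sketch of what such a multiline section might look like in filebeat.yml (the log path and the exact header pattern here are assumptions; the Filebeat MySQL module normally sets this up for you):

```yaml
filebeat.prospectors:
- type: log
  paths:
    - /var/log/mysql/mysql-slow.log   # assumed path
  # Treat every line that does not start a new "# Time:" or
  # "# User@Host:" header as a continuation of the previous event.
  multiline.pattern: '^# (Time|User@Host): '
  multiline.negate: true
  multiline.match: after
```

With this in place, the whole slow-log entry reaches the ingest pipeline as a single message field.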

(Ry Biesemeyer) #3

Assuming that you have already configured Filebeat to treat the entirety of a multiline message as a single event, I've noticed a couple problems with the grok pattern against that input:

  • Perhaps just a side-effect of pasting the configuration into Discuss, but the literal newline characters in the pattern may be throwing things off (e.g., between Query_time: %{ and NUMBER:my; between rows_exa and mined}).
  • The pattern expects your slowlog entry to start with the literal string # User@Host: (it is anchored by ^, which in regular expressions and grok patterns is a beginning-of-input anchor), but it starts with # Time:; this could cause the message not to match the pattern.
  • The pattern also doesn't seem to have a way of capturing the Thread_id line; it looks like it expects the User@Host line to be immediately followed by a Query_time line.
  • The pattern doesn't account for the Rows_affected line, expecting the Query_time line to be immediately followed by an optional SET timestamp line and the greedy multiline query capture.
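The anchoring point can be illustrated with a small self-contained sketch, using plain Python re rather than grok and a deliberately simplified stand-in for the start of the pattern:

```python
import re

# Simplified stand-in for the start of the grok pattern: anchored with ^
# and expecting the entry to begin with "# User@Host:".
pattern = re.compile(r"^# User@Host: (?P<user>\S+)")

# The actual slow-log entry begins with a "# Time:" line instead.
entry = (
    "# Time: 180309 12:58:07\n"
    "# User@Host: user[user] @ HOST [IP]\n"
)

# Anchored at the very start of the input, the pattern does not match.
print(pattern.match(entry))

# With MULTILINE, ^ also matches after each newline, so the
# User@Host line on the second line of the entry is found.
m = re.search(r"^# User@Host: (?P<user>\S+)", entry, re.MULTILINE)
print(m.group("user"))
```

The equivalent fix on the ingest side would be either to make the pattern tolerate an optional leading # Time: line, or to keep that line out of the event before it reaches the pipeline.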

(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.