MySQL fields not visible in Kibana

I have loaded data into Elasticsearch using Logstash and the JDBC MySQL driver.

However, in Kibana, I see stuff like:

cluster_uuid:xxxxxxxxxxxxxxxxx timestamp:April 11th 2017, 05:37:39.580 source_node.uuid:yyyyyyyyyyyyyyyy source_node.host:12.18.169.0 source_node.transport_address:13.0.18.169:12346 source_node.ip:10.18.169.0 source_node.name:instance-0000000000 source_node.attributes.logical_availability_zone:zone-0 source_node.attributes.availability_zone:us-west-4d-2048 source_node.attributes.region:us-west-2 kibana_stats.kibana.uuid:uuid-xxxxx-yyyyy-zzzzzzz

... and I cannot see any of the fields from my MySQL table.
I get the feeling I'm missing something obvious.

What is the simplest config that I can use to load MySQL data and configure things like count, average and sum for different columns?

Hi Ferdi,

Based on what you are seeing in Kibana, it seems to me that Kibana might be pointing at a different index than the one containing your MySQL data.

What is the name of the index (or indices) containing your MySQL data? If you didn't customize it in your Logstash config's elasticsearch output section, the name will be logstash-<YYYY.MM.DD> (that is, one index per date).
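For reference, a minimal elasticsearch output section that writes to a custom index looks roughly like the sketch below (the endpoint and index name here are placeholders, not values from your setup):

output {
        elasticsearch {
                hosts => "https://your_elastic_endpoint:9243/"
                index => "your_custom_index"
        }
}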

Thank you for your reply @shaunak
This is my logstash config:

input {
        jdbc {
                jdbc_connection_string => "jdbc:mysql://localhost:3306/pone_test"
                jdbc_user => "mysql_user"
                jdbc_password => "mysql_password"
                jdbc_driver_library => "/usr/lib/jvm/java-8-oracle/jre/lib/mysql-connector-java-5.1.41-bin.jar"
                jdbc_driver_class => "com.mysql.jdbc.Driver"
                statement => "SELECT * FROM estimate WHERE bill_no > 0"
                add_field => {
                        "field1" => "estimate_id"
                        "field2" => "estimate_uuid"
                        "field3" => "order_no"
                        "field4" => "order_dt"
                        "field5" => "bill_dt"
                        "field6" => "bill_code"
                        "field7" => "net_amount"
                }
        }
}

output {
        stdout { codec => json_lines }
        elasticsearch {
                hosts => "https://elastic_endpoint:9243/"
                user => "elastic_user"
                password => "elastic_password"
                index => "pone_test"
        }

}

In Kibana's Management section, the index pattern field is filled in by default with logstash-*.
If I fill in pone_test, I get the message 'Unable to fetch mapping. Do you have indices matching the pattern?' and am unable to save the setting. So I used * instead.

I'm still confused and going around in circles. I just want to see my MySQL table fields in Kibana. All I currently see is all sorts of JSON data with IPs and cluster IDs and whatnot, but none of my MySQL fields.

If you are getting the 'Unable to fetch mapping. Do you have indices matching the pattern?' error in Kibana, it's quite possible the pone_test index hasn't been created in Elasticsearch. What does the output of https://elastic_endpoint:9243/_cat/indices/pone* show you?
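From the command line, that would be something like the following (assuming your cluster uses basic auth; the ?v parameter just adds column headers):

curl -u elastic_user 'https://elastic_endpoint:9243/_cat/indices/pone*?v'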

@shaunak

_cat/indices/pone* is blank

_cat/indices/* returns:
yellow open .kibana UIAlyHvJSVG7nVKgtgH-pw 1 1 2 0 13.5kb 13.5kb
yellow open .monitoring-kibana-2-2017.04.18 p1AgpC6aT0qEF9Ws0sUmBw 1 1 1000 0 538.3kb 538.3kb
yellow open .monitoring-data-2 XLlJSvmOQ_ixIaXeagVglw 1 1 1 0 5.9kb 5.9kb
green open .security dUp7IUwnQO2bNPND8j8rKw 1 0

The closest thing I found to setting an index in Elasticsearch is the automatic index creation enable/disable setting.
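If I understand correctly, that's the action.auto_create_index setting in elasticsearch.yml, i.e. something like:

action.auto_create_index: true

But since automatic index creation is enabled by default, I don't think that's what is blocking me.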

So it looks like the pone_test index isn't even being created. When you run Logstash with that config, do you see any errors? Also, since you are using the stdout output plugin, do you see any of your MySQL data printed to STDOUT by Logstash?

Thanks for your patience @shaunak
Ok, looks like now we'll get somewhere :slight_smile:

I ran logstash again and this was the output:

$ /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf
Settings: Default pipeline workers: 4
Exception when executing JDBC query {:exception=>#<Sequel::DatabaseError: Java::JavaSql::SQLException: Value '0000-00-00 00:00:00' can not be represented as java.sql.Timestamp>, :level=>:warn}
Logstash startup completed
Logstash shutdown completed

I assumed this was a warning that only affected records with the 0000-00-00 00:00:00 date.
Now what do I do?
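From what I can find, the MySQL Connector/J 5.1 driver has a zeroDateTimeBehavior connection parameter that might handle this. Would appending it to my connection string be the right fix? Something like:

input {
        jdbc {
                # assumption: convertToNull makes Connector/J return NULL for '0000-00-00' dates
                jdbc_connection_string => "jdbc:mysql://localhost:3306/pone_test?zeroDateTimeBehavior=convertToNull"
                # ... rest of the jdbc settings unchanged ...
        }
}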

Hi Ferdi,

I don't know enough about the JDBC input plugin to help, but someone in the Logstash category might. I'll move this post over there now.

Shaunak
