Logstash jdbc input plugin multiple tables on elasticsearch output plugin

Hello everyone,
I am delighted to be part of this community.

I am getting started with the Elastic Stack and I am confronted with an issue.

I use the Logstash jdbc input plugin to access a MySQL database containing seven tables. I put the query for each table in its own file: "accounts.conf", "Connections.conf", "Credits.conf", "Customers.conf", "Inboxs.conf", "Outboxs.conf", "Prefixs.conf".

The problem is that documents of type connection, which should go to the connections index, end up in the accounts index with the type account.

This is the content of accounts.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.17.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://my_ip_server:3306/my_server"
    jdbc_user => "my_user"
    jdbc_password => "my_password"
    schedule => "*/5 * * * *"
    use_column_value => true
    tracking_column => "account_id"
    statement => "SELECT AccID AS account_id, AccName AS name, AccType AS type, AccStatus AS status, AccNote AS note, AccDisabled AS disabled FROM accounts"
    type => "account"
  }
}

output {
  elasticsearch {
    hosts => "http://my_elasticsearch:9200"
    user => "my_user"
    password => "my_password"
    index => "accounts"
    document_type => "account"
    document_id => "%{account_id}"
  }
}

This is the content of connections.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.17.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://my_ip_server:3306/my_server"
    jdbc_user => "my_user"
    jdbc_password => "my_password"
    schedule => "*/10 * * * *"
    use_column_value => true
    tracking_column => "connection_id"
    statement => "SELECT ConId AS connection_id, CustId AS customer_id, ConAccountid AS account_id, ConPort AS port, ConAccountname AS account_name, ConAccounttype AS account_type FROM connection"
    type => "connection"
  }
}

output {
  elasticsearch {
    hosts => "http://my_elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    index => "connections"
    document_type => "connection"
    document_id => "%{connection_id}"
  }
}

This is the content of credits.conf:

input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mysql-connector-java-5.1.17.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://my_ip_server:3306/my_server"
    jdbc_user => "my_user"
    jdbc_password => "my_password"
    schedule => "*/15 * * * *"
    use_column_value => true
    tracking_column => "credit_id"
    statement => "SELECT AccID AS credit_id, AccName AS name, AccBalance AS balance, AccReserved AS reserved, AccOverdraft AS overDraft, AccLowBalance AS lowBalance, AccTimeInterval AS timeInterval, AccContact AS contact, AccNote AS note, AccDisabled AS disabled FROM credits"
    type => "credit"
  }
}

output {
  elasticsearch {
    hosts => "http://my_elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    index => "credits"
    document_type => "credit"
    document_id => "%{credit_id}"
  }
}

Thank you!

Logstash has a single event pipeline. It does not care whether you split your configuration into multiple files: the events from all inputs in all configuration files will reach all filters and all outputs unless you use conditionals. For example, if you say

output {
  if [type] == "credit" {
    elasticsearch {
      hosts => "http://my_elasticsearch:9200"
      user => "elastic"
      password => "changeme"
      index => "credits"
      document_type => "credit"
      document_id => "%{credit_id}"
    }
  }
}

only credit events will reach the credits index.
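The same conditional pattern applies to each of the other outputs, keyed on the `type` set by the corresponding jdbc input. As a sketch, reusing the settings from the accounts.conf shown above, its output block would become:

```conf
output {
  # Route only events produced by the jdbc input that sets type => "account"
  if [type] == "account" {
    elasticsearch {
      hosts => "http://my_elasticsearch:9200"
      user => "my_user"
      password => "my_password"
      index => "accounts"
      document_type => "account"
      document_id => "%{account_id}"
    }
  }
}
```

Repeating this wrapper in every .conf file keeps each index receiving only its own table's events, even though Logstash merges all the files into one pipeline.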

Hello Magnus Bäck,

That's it, it works now.

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.