Hello,
I am experimenting with Logstash 6.
I have two *.conf files. One sends its results to the index "sshd_fail-%{+YYYY.MM}", the other to "idx_md-descriptions".
I have installed Kibana and am trying to create an index pattern. When I choose 'sshd_fail-*' as my index, it shows all the available fields. The problem is that the list also contains all the fields from my other index.
Elasticsearch shows my sshd_fail index to be huge:
[root@svr-h000386 incomingdata]# curl 10.11.2.11:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana eBNdoaSzTXm2OqZJ8vDvgw 1 0 2 1 11kb 11kb
yellow open sshd_fail-2018.03 Lnwzz5BnTr2HjXokB6rCHA 5 1 2275996 0 721.9mb 721.9mb
yellow open idx_md-descriptions XNgMSo9gScewN7BCwkqslw 5 1 1321214 166304 554.5mb 554.5mb
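In case it helps with diagnosing this, these are the checks I plan to run to confirm exactly which fields ended up in each index (just the standard mapping API, nothing custom):
curl 10.11.2.11:9200/sshd_fail-2018.03/_mapping?pretty
curl 10.11.2.11:9200/idx_md-descriptions/_mapping?pretty
I would expect the first one to contain only the sshd_* fields, but based on what Kibana shows I suspect it won't.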
Why is Logstash apparently merging my two data sources into both indexes?
FILE1.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://svr-h003671.my-domain.local:3306/mdata"
    jdbc_user => "elastic"
    jdbc_password => "blah"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "5 * * * *"
    statement => "SELECT item_code,item_description,brand_name FROM tbl_products p LEFT JOIN tbl_brands b ON b.brand_id = p.brand_id"
  }
}
output {
  elasticsearch {
    hosts => ["10.11.2.11:9200"]
    index => "idx_md-descriptions"
    document_id => "%{item_code}"
  }
}
FILE2.conf
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}
filter {
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}
output {
  elasticsearch {
    hosts => ["10.11.2.11:9200"]
    index => "sshd_fail-%{+YYYY.MM}"
  }
}
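Could it be that Logstash is combining the two files into a single pipeline, so that every event from both inputs passes through both outputs? If so, I assume I would need to wrap the outputs in conditionals, something like the rough, untested sketch below that routes on the type field FILE2.conf already sets (the jdbc events would fall through to the else branch):
output {
  # events from the /var/log/secure file input
  if [type] == "secure_log" {
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "sshd_fail-%{+YYYY.MM}"
    }
  } else {
    # everything else, i.e. the jdbc rows
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "idx_md-descriptions"
      document_id => "%{item_code}"
    }
  }
}
Or would it be better in Logstash 6 to keep the two configurations as separate pipelines in pipelines.yml?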