Why is it merging my indexes?

Hello,

I am experimenting with Logstash 6.

I have two *.conf files. One sends its results to the index "sshd_fail-%{+YYYY.MM}"; the other sends to "idx_md-descriptions".

I have installed Kibana and am attempting to create an index pattern. I choose 'sshd_fail-*' as the pattern, and it shows all the available fields. The problem is that the field list also includes all the fields from my other index.

Elasticsearch shows my sshd_fail index to be huge:
[root@svr-h000386 incomingdata]# curl 10.11.2.11:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana             eBNdoaSzTXm2OqZJ8vDvgw   1   0          2            1       11kb           11kb
yellow open   sshd_fail-2018.03   Lnwzz5BnTr2HjXokB6rCHA   5   1    2275996            0    721.9mb        721.9mb
yellow open   idx_md-descriptions XNgMSo9gScewN7BCwkqslw   5   1    1321214       166304    554.5mb        554.5mb

Why is Logstash apparently merging my two data sources into both indexes?

FILE1.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://svr-h003671.my-domain.local:3306/mdata"
    jdbc_user => "elastic"
    jdbc_password => "blah"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "5 * * * *"
    statement => "SELECT item_code,item_description,brand_name FROM tbl_products p LEFT JOIN tbl_brands b ON b.brand_id = p.brand_id"
  }
}
output {
  elasticsearch {
    hosts => ["10.11.2.11:9200"]
    index => "idx_md-descriptions"
    document_id => "%{item_code}"
  }
}

FILE2.conf
input {
  file {
    type => "secure_log"
    path => "/var/log/secure"
  }
}
filter {
  grok {
    add_tag => [ "sshd_fail" ]
    match => { "message" => "Failed %{WORD:sshd_auth_type} for %{USERNAME:sshd_invalid_user} from %{IP:sshd_client_ip} port %{NUMBER:sshd_port} %{GREEDYDATA:sshd_protocol}" }
  }
}

output {
  elasticsearch {
    hosts => ["10.11.2.11:9200"]
    index => "sshd_fail-%{+YYYY.MM}"
  }
}
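
Logstash reads every file in its config directory (conf.d) into one combined pipeline, so events from both inputs pass through every filter and every output unless the outputs are wrapped in conditionals. A minimal sketch of that kind of guard, keyed on the type that FILE2.conf already sets (the jdbc events carry no type here, so they fall through to the else branch):

output {
  if [type] == "secure_log" {
    # syslog-derived events go to the monthly sshd index
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "sshd_fail-%{+YYYY.MM}"
    }
  } else {
    # everything else (the jdbc rows) goes to the descriptions index
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "idx_md-descriptions"
      document_id => "%{item_code}"
    }
  }
}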

Hello,
I have tried to split out my input and output using conditionals, but now my database index isn't working at all:

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://svr-h003671.hayley-group.local:3306/masdata"
    jdbc_user => "elastic"
    jdbc_password => "iGr0up!T"
    jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "5 * * * *"
    statement => "SELECT item_code,item_description,brand_name FROM tbl_products p LEFT JOIN tbl_brands b ON b.brand_id = p.brand_id"
    tags => "idx-md_descriptions"
  }
  file {
    tags => "secure_log"
    path => "/var/log/secure"
  }
}
output {
  if "idx-md_descriptions" in [tags] {
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "idx-md_descriptions"
      document_id => "%{item_code}"
    }
  }
  else if "secure_log" in [tags] {
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "sshd_fail-%{+YYYY.MM}"
    }
  }
}

Only one index is being reported by ES:

[root@svr-h000386 conf.d]# curl 10.11.2.11:9200/_cat/indices?v
health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana           eBNdoaSzTXm2OqZJ8vDvgw   1   0          2            1       11kb           11kb
yellow open   sshd_fail-2018.03 RGIrgbvxTaSnCOYG8yXJhA   5   1          6            0     27.5kb         27.5kb
[root@svr-h000386 conf.d]#

That looks like the right approach. Do you get any indexing errors in the logstash logs? Can you add a stdout { codec => rubydebug } output and see what one of those idx-md_descriptions events looks like?
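
For reference, that debug output can sit alongside the existing elasticsearch output inside the idx-md_descriptions conditional, roughly like this:

output {
  if "idx-md_descriptions" in [tags] {
    # dump each event to stdout (the Logstash log/console) so its fields can be inspected
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["10.11.2.11:9200"]
      index => "idx-md_descriptions"
      document_id => "%{item_code}"
    }
  }
}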

You did wait for the schedule of the jdbc input to trigger, right?

Hi Badger,

Thanks for your reply. I was working on this for hours yesterday, more than enough time for the 5-minute window... but nothing... until this morning! I got into the office today to find the DB index has now been built.

Weird.

It's not a 5-minute window; that cron schedule runs at 5 minutes past each hour.
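
For comparison, in the jdbc input's cron-style schedule the first field is the minute of the hour:

# runs once per hour, at five minutes past the hour (the current setting)
schedule => "5 * * * *"

# runs every five minutes, if that was the intent
schedule => "*/5 * * * *"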
