Hi,
I would like to connect my DB to ES through Logstash. My DB doesn't have a single unique key; instead, a combination of columns acts as the key. For example, if the DB has columns A, B, C, and D, the combination of A, B, and C is unique, even though the value of each individual column (A, B, or C) can be duplicated.
Similarly, a JSON file may have no single unique attribute, but a combination of multiple attributes acting as the key (same as the DB case).
How can I index these cases of DB data or JSON files into ES? If someone has any idea about this, please let me know as soon as possible because I need to finish my project...
Thanks,
You will need to combine (concatenate) the different fields using a filter in the Logstash pipeline. Be aware of a potential performance decrease depending on what your new IDs will look like.
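A minimal sketch of that approach, assuming the column names A, B, and C from the example above and a local Elasticsearch instance (adjust field names, hosts, and index to your setup). The fingerprint filter concatenates the key columns and hashes them into a stable ID, which the elasticsearch output then uses as the document `_id` so re-indexing the same row updates the existing document instead of creating a duplicate:

```
filter {
  fingerprint {
    # Field names A, B, C are from the example; replace with your columns.
    source               => ["A", "B", "C"]
    concatenate_sources  => true
    method               => "SHA256"
    target               => "[@metadata][generated_id]"
  }
}

output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "mydata"
    # Use the combined fingerprint as the document _id.
    document_id => "%{[@metadata][generated_id]}"
  }
}
```

Storing the fingerprint under `[@metadata]` keeps the generated ID out of the indexed document itself. Note that hashed IDs are longer than auto-generated ones, which is where the performance consideration above comes in.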
Thank you for your help. I have another question: can two documents of different types, but in the same index, have the same unique ID? For example,
Type A
Document: "id":"abc", ...
Type B
Document: "id":"abc", ...