My company has a large amount of XML data that we want to explore for its possible value. We can convert the data to JSON in order to get it into Elasticsearch, but we have been slowed by trying to learn enough about Logstash and Kibana to import it... hence my question here.
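For concreteness, here is roughly the sort of pipeline we have in mind, written as a Python sketch that bypasses Logstash entirely just to test the idea (the index name "xml-explore", the "data/" directory, and the use of the xmltodict and elasticsearch-py packages are all placeholders for illustration, not what we actually run):

```python
# Rough sketch: convert each XML file to a JSON-shaped dict and index it.
# Assumes the xmltodict and elasticsearch packages; "xml-explore" and the
# "data/" directory are made-up placeholders.
from pathlib import Path

import xmltodict
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

for path in Path("data").glob("*.xml"):
    doc = xmltodict.parse(path.read_text())  # XML -> nested dict
    es.index(index="xml-explore", document=doc)
```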
I am currently stuck at the point where I need to "Configure an index pattern" in Kibana...so this has been like pulling teeth!
So I thought I'd describe my situation and ask for advice...
The data are semi-structured, in that there are a lot of similar elements across the document schemas, but no two records are going to be the same. I was hoping to be able to dump this JSON data into Elasticsearch and use it to explore the JSON docs: how many have similar elements, the geographic distribution (the data include time and location), etc.
Can Elasticsearch be used for this? I would assume so... but how do I use it to explore JSON docs when I don't have a consistent schema? Is this possible?
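To show the kind of exploration I mean: if Elasticsearch's dynamic mapping really does let differently-shaped documents share one index, I'd hope to count how many documents carry a given element with something like an `exists` query. A rough sketch of that hope (index and field names are hypothetical):

```python
# Sketch: heterogeneous docs in one index, then count docs having a field.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Two records with different shapes land in the same index; dynamic
# mapping adds new fields as they appear.
es.index(index="xml-explore", document={"report": {"location": "Oslo", "temp": 3}})
es.index(index="xml-explore", document={"report": {"location": "Lima"}})
es.indices.refresh(index="xml-explore")

# How many docs contain a report.temp element at all?
resp = es.count(index="xml-explore", query={"exists": {"field": "report.temp"}})
print(resp["count"])  # -> 1
```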
Thanks for the reply. At this point, the exploration consists of discovering the types of XML elements in the data and the geographic distribution of the data (which is global). The exploration would get more refined as I learn to use Elasticsearch and Kibana better.
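One thing I've gathered so far (please correct me if I'm wrong): dynamic mapping apparently won't infer geo fields, so for the geographic exploration the location field would need an explicit geo_point mapping before Kibana can put it on a map. A sketch under that assumption (index and field names are hypothetical):

```python
# Sketch: explicit mapping for the location/time fields, which Kibana's
# map and time-based views rely on. Names here are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="xml-explore",
    mappings={
        "properties": {
            "coords": {"type": "geo_point"},   # "lat,lon" strings work
            "observed_at": {"type": "date"},   # candidate time field
        }
    },
)
es.index(
    index="xml-explore",
    document={"coords": "59.91,10.75", "observed_at": "2017-03-01T12:00:00Z"},
)
```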
I am still getting up to speed on these tools so any helpful hints are appreciated...
I have no idea how to write an "index pattern" for this, which the documentation describes as essential to getting data imported. I am hoping others have encountered this same problem and learned how to overcome it.