Attempt to use the add() function, which I seem to have access to from my tacked-on JavaScript file.
Attempt to use require('components/courier/data_source/search_source') to get the SearchSource object. In doing this, I wasn't able to figure out how to supply Private and the other parameters to ***SearchSourceFactory()***.
I'm attempting to create a button which will fire off multiple queries to Elasticsearch, each query feeding its result into the next.
More Specific Info: I am collecting logs from multiple machines. Each machine handles many orders, which travel from one machine to the next. Each machine also has a pointer to the previous machine and to the next machine in the chain. I want to trace an order. To do this, I get one order_id from one of the machines, then search the parent machine and the child machine for that order. If I find the order there and it too has a parent, I repeat the process.
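The chain walk described above can be sketched in plain JavaScript. Here `lookup(machine, orderId)` is a hypothetical stand-in for whatever actually queries Elasticsearch; it returns the matching log record (with `parent` and `next` machine pointers) or null:

```javascript
// Sketch of the order-tracing loop. `lookup` is a stand-in for the real
// Elasticsearch query; the record shape ({ machine, parent, next }) is an
// assumption for illustration.
function traceOrder(lookup, startMachine, orderId) {
  var start = lookup(startMachine, orderId);
  if (!start) return [];
  var chain = [start];
  var cur = start;
  // Walk backwards through parent pointers...
  while (cur.parent && (cur = lookup(cur.parent, orderId))) {
    chain.unshift(cur);
  }
  cur = start;
  // ...then forwards through next pointers.
  while (cur.next && (cur = lookup(cur.next, orderId))) {
    chain.push(cur);
  }
  return chain; // records ordered from first machine to last
}
```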
Another option for implementing the queries: I realize that sending multiple requests from the client side and doing the calculations there isn't the best approach, as round-trip time goes way up. Instead, I should send one request, which then fires off a series of queries on the backend. I know Kibana 4's backend uses Node.js; is extending that backend what I should be looking into?
My gut response is that this may be difficult, because to construct each entity I would have to build a long chain of order IDs. Each node in the chain represents one machine, each with a previous and a next node pointer. These chains can potentially be extended over a span of days. Because of this, I believe I would have to store each node in ES, and whenever I add a new one, check whether an existing entity contains that order ID. If there is one, I add the new node to that entity.
However, that solution has a problem. Say I have three nodes forming the chain A -> B -> C, and A and C were added to ES first. My preprocessing would see the data as ? -> A -> ? and ? -> C -> ?, which would cause me to store the two nodes in different indices even though they actually belong to the same chain.
Once B was added and linked the two, I would have to go back after the fact, grab C, delete its entity, and put both B and C into A's entity. This seems overly complex when my data will likely never grow to a size where I would hit a speed bottleneck from running all these queries at request time instead of preprocessing them.
After thinking about it: I think that entity-centric indices could be an OK solution. Is the implementation I described above (with the A/B/C example) accurate? Is it correct that I would have to continually shuffle data around inside ES to handle the case I described? Also, on a more technical note, he talks about grouping these entities by type or by index. Wouldn't that produce too many indices?
While this option could be good, I'm not sure it is the most time-efficient choice considering the added development time versus the benefits we would gain. Our data will only scale up to a few hundred GB at most, so I don't think the queries will be too slow, considering each will be made separately and against indexed fields.
## My Solution
I finally realized that I should be extending the Kibana 4 backend, which is written in Node.js. I then edited:
/kibana/src/routes/index.js
and registered a new route:
router.get('/trace-order')
From there I used the Node.js Elasticsearch client documented at https://github.com/elastic/elasticsearch-js to perform the queries I needed. When they completed and my calculations were done, I returned all the values as JSON.
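A rough sketch of how such a handler might chain the queries server-side. The index name (`logs`), the field names (`order_id`, `machine`, `next`), and the query shape are assumptions; `client` would be an elasticsearch-js Client (`new require('elasticsearch').Client({ host: 'localhost:9200' })`):

```javascript
// Follow an order from machine to machine with sequential searches,
// one query per hop, then hand the whole chain to the callback.
function traceOrder(client, orderId, machine, done) {
  var chain = [];
  function step(machine) {
    if (!machine) return done(null, chain); // end of the chain
    client.search({
      index: 'logs',
      body: { query: { bool: { must: [
        { term: { order_id: orderId } },
        { term: { machine: machine } }
      ] } } }
    }, function (err, resp) {
      if (err) return done(err);
      var hit = resp.hits.hits[0];
      if (!hit) return done(null, chain); // pointer led nowhere
      chain.push(hit._source);
      step(hit._source.next); // follow the pointer to the next machine
    });
  }
  step(machine);
}

// Wiring inside /kibana/src/routes/index.js would then look roughly like:
// router.get('/trace-order', function (req, res) {
//   traceOrder(client, req.query.order_id, req.query.machine, function (err, chain) {
//     if (err) return res.status(500).json({ error: err.message });
//     res.json(chain);
//   });
// });
```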