Hi,
I am trying to send headers in an HTTP request to Kibana, which is embedded in an iFrame on my UI. Ideally, Kibana should receive them and pass them on to the proxy I have created for the Elasticsearch query, which then returns the requested index accordingly. The challenge:
The headers are evidently not received, as the proxy gets null input and doesn't build a correct query. The request is being posted to http://localhost:5601 . Should it be posted to some other URL, or what is the correct way to do this?
So you are attempting to send an HTTP request to a Kibana instance located at localhost:5601 while passing in custom headers? How exactly are you doing this? Can you share some code?
Can you explain a little more what you mean by "the proxy takes null input and doesn't make a correct query"?
If you are sending the request directly to Kibana, where does the proxy you created come into play?
Maybe I could help better if you explained the high-level goal you are trying to achieve. Is it authentication headers you wish to pass in so you can bypass the login screen? Something else?
I have a webapp where the end user makes a request by selecting a couple of parameters.
The app stores those parameters, and when the user hits "Discover" (within the app), the app should send those parameters as headers to Kibana (localhost:5601), which loads in an iFrame on the same page.
Kibana, in turn, hits the elasticsearch.url, which I have replaced with my proxy URL; the proxy consumes the headers and, based on them, fetches the desired index from Elasticsearch.
That index is loaded into Kibana Discover straight away when the iFrame is rendered, giving the end user the feel that they made a request and got the exact result.
These are not authentication headers. I am not looking for login, and I am not using Shield or any other plugin for that matter.
I have changed Kibana's CORS settings in kibana.yml and setup_connection.js and tried to send a request through the iFrame.
Changes:
kibana.yml- server.cors: true
setup_connection.js- cors: { additionalHeaders: ['header1','header2'], origin: ['http://localhost:8080/'] },
where header1 and header2 are the names of my HTTP request headers, and localhost:8080 is the webapp.
Either I am making the wrong request to the wrong endpoint, or the endpoint (Kibana) is configured incorrectly.
Please also suggest whether there is another way of achieving this requirement.
As far as I understand, the problem is that you send some (custom) headers in a request to localhost:5601 (leaving aside any iFrame), and you expect Kibana to add those headers to the request that it makes against your configured Elasticsearch URL?
If so, you should look into the elasticsearch.requestHeadersWhitelist option in your kibana.yml. You need to specify a list of all header names that the Kibana server is allowed to forward to Elasticsearch when making requests.
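For illustration, a kibana.yml fragment along these lines would whitelist the two custom headers (the header names here are placeholders matching the ones mentioned earlier in the thread):

```yaml
# kibana.yml — headers the Kibana server may forward on to Elasticsearch.
# Note: setting this replaces the default list, so re-add any headers
# you still need (e.g. authorization, if you use it).
elasticsearch.requestHeadersWhitelist: [ "header1", "header2" ]
```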
Also, since these headers are only forwarded on the actual calls to Elasticsearch, you need to make sure they are present in the actual data calls the browser makes to the Kibana server (e.g. the _msearch, bulk_get, and run calls, not the initial page load).
Only headers present in those calls and whitelisted in the above setting will be forwarded to your Elasticsearch cluster. Usually, adding headers to these calls will require touching some of Kibana's source code.
If your current expectation is that you just add headers to the first call (i.e. the original GET request that loads Kibana) and those are forwarded to Elasticsearch: that will not and cannot happen, since those headers are simply not present when the calls against Elasticsearch are made, due to the statelessness of HTTP.
Thanks, Tim, for that elaborate response. This is something I was suspecting. I have whitelisted both of my expected headers in elasticsearch.requestHeadersWhitelist in kibana.yml, but as you mentioned, those headers can't be consumed or appended at the required place.
My headers are not supplied to bulk_get, run, or _msearch without the use of a browser plugin.
It would be really helpful if you could give me some pointers on where and how I should begin tweaking the Kibana code to make this work and let Kibana accept my headers for these requests.
There is not a single place I could point you to. You would basically need to look at every place in Kibana that requests data from Elasticsearch and add the headers there, meaning at least all the different visualization types, index patterns, etc.
You can search the source code for callWithRequest, which is called on the Kibana server to send requests to Elasticsearch. Then check, from those places, which client-side call actually triggered them, and add your headers to that call.
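To make the forwarding behavior concrete, here is a rough, self-contained sketch (illustrative only, not Kibana's actual source) of the filtering the server applies: only headers whose names appear in the whitelist are copied from the incoming browser request to the outgoing Elasticsearch request.

```javascript
// Illustrative sketch of Kibana-style header whitelisting.
// Given the headers of an incoming browser request and the configured
// whitelist, return the headers that would be forwarded to Elasticsearch.
function filterHeaders(requestHeaders, whitelist) {
  const allowed = new Set(whitelist.map((name) => name.toLowerCase()));
  const forwarded = {};
  for (const [name, value] of Object.entries(requestHeaders)) {
    if (allowed.has(name.toLowerCase())) {
      forwarded[name.toLowerCase()] = value;
    }
  }
  return forwarded;
}

// Example: only the whitelisted headers survive.
const incoming = { header1: 'index-a', header2: 'tenant-42', cookie: 'sid=abc' };
console.log(filterHeaders(incoming, ['header1', 'header2']));
// -> { header1: 'index-a', header2: 'tenant-42' }
```

This also shows why whitelisting alone is not enough: if `header1` never reaches the Kibana server in the data calls, there is nothing for the filter to pass through.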
To me, that sounds like you would be busy with this for the next few months, which is why I wouldn't consider it a viable solution (and basically every update would break your code again).
Looking at what you are trying to achieve, I would rather suggest another approach that might require far fewer code modifications.
I would set those parameters on the client side (where, as far as I understood, you need to calculate them) in a cookie for the Kibana domain. That way they are automatically sent with every call against the Kibana server in the Cookie header, and you don't need to attach them manually.
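As a sketch of that client side (the helper and cookie name are made up for illustration, not an existing API), the webapp could serialize its parameters into one cookie value before pointing the iFrame at Kibana:

```javascript
// Hypothetical helper: serialize the user's parameters into a single
// cookie string. path=/ so the cookie rides along on every call the
// iFrame makes to the Kibana server.
function buildKibanaCookie(params) {
  const value = encodeURIComponent(JSON.stringify(params));
  return `kbn_params=${value}; path=/`;
}

// In the browser, before rendering the iFrame:
//   document.cookie = buildKibanaCookie({ header1: 'index-a', header2: 'tenant-42' });
//   iframe.src = 'http://localhost:5601/app/kibana';
```

Note that cookies do not distinguish ports, so a cookie set by the app on localhost:8080 is also sent to localhost:5601; for two different hostnames you would need a suitable Domain attribute instead.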
Then I would just modify the callWithRequest method so that it extracts the params you need from the cookies of the req variable and attaches them to the headers of the actual request.
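The extraction step could look roughly like this (again illustrative; callWithRequest itself lives in Kibana's source, and the cookie name `kbn_params` is an assumption): parse the raw Cookie header off the incoming request and recover the params to merge into the outgoing Elasticsearch request's headers.

```javascript
// Illustrative only: parse a raw Cookie header string into a map.
function parseCookies(cookieHeader) {
  const cookies = {};
  for (const pair of (cookieHeader || '').split(';')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue;
    cookies[pair.slice(0, idx).trim()] = decodeURIComponent(pair.slice(idx + 1).trim());
  }
  return cookies;
}

// Inside a patched callWithRequest, something along these lines could
// recover the params and return them as extra headers for the
// Elasticsearch request:
function headersFromRequest(req) {
  const cookies = parseCookies(req.headers.cookie);
  if (!cookies.kbn_params) return {};
  // e.g. { header1: 'index-a', header2: 'tenant-42' }
  return JSON.parse(cookies.kbn_params);
}
```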
However, you should be aware that you are really digging deep into Kibana internals. This means that (a) most likely you won't survive a single update without new code modifications and retests, and (b) you will most likely see weird behavior, since not every call that Kibana makes against Elasticsearch is actually triggered by a call from the browser to the Kibana server. Some of the calls that hit your Elasticsearch (or, more specifically, your proxy) will still not carry any of those headers, since they were never triggered by a browser call and thus have no cookie information in them.