Kibana's Console is the easiest way to get started with Elasticsearch's REST API: syntax highlighting, autocompletion, formatting, and export to cURL, JavaScript, or Python. And you don't have to worry about the right endpoint, authentication, and so on. But sometimes you need (or want) to use the shell: Kibana is not available, you have to automate something, or you want to use the output as the input for another command-line tool. This article gives a quick overview of the most common questions, stumbling blocks, and some helpful tips for using cURL.
Endpoint
All examples in this post use https://localhost:9200 for Elasticsearch and https://localhost:5601 for Kibana. But for your installation you might have to adjust the protocol (http or https), host (localhost, a (sub)domain, or an IP address), or port (9200 and 5601 by default).
If you are using the wrong settings, you will either run into an error like curl: (7) Failed to connect to localhost port 9201 after 0 ms: Could not connect to server, or cURL will wait until the request times out.
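If you are unsure which combination is right, a quick check from the shell can save time. This is only a sketch; the endpoint and the timeout value are assumptions you should adjust to your deployment:

```shell
# Fail fast instead of waiting for the default timeout; discard the body
# (-o /dev/null) and print only the HTTP status code (-w '%{http_code}').
# A printed status of 000 means the connection itself failed.
curl --silent --connect-timeout 5 -o /dev/null -w '%{http_code}\n' "https://localhost:9200/"
```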
TLS
Depending on your deployment and configuration, you might have no TLS (plain-text HTTP only) or TLS (only HTTPS either self-signed or with a valid certificate).
The most common error messages you'll receive are:
- curl: (35) TLS connect error: error:0A0000C6:SSL routines::packet length too long if you use HTTPS for an HTTP configuration. Use http:// instead.
- curl: (52) Empty reply from server if you use HTTP for an HTTPS configuration. Use https:// instead.
- curl: (60) SSL peer certificate or SSH remote key was not OK if you use a self-signed certificate. Either ignore the certificate error by adding the -k parameter, like curl -k https://localhost:9200, or reference the self-generated certificate file, like curl --cacert certs/http_ca.crt https://localhost:9200.
Authentication
Authentication is one of the most common but also frustrating stumbling blocks in getting started. The usual error you'll get from a bad authentication looks something like this:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [elastic] for REST request [/?pretty]",
        "header" : {
          "WWW-Authenticate" : [
            "Basic realm=\"security\", charset=\"UTF-8\"",
            "Bearer realm=\"security\"",
            "ApiKey"
          ]
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [elastic] for REST request [/?pretty]",
    "header" : {
      "WWW-Authenticate" : [
        "Basic realm=\"security\", charset=\"UTF-8\"",
        "Bearer realm=\"security\"",
        "ApiKey"
      ]
    }
  },
  "status" : 401
}
Basic Auth
Basic Auth is the classic way to authenticate against a REST API. While it is the most widely used option today, it is not supported by Elasticsearch Serverless. But for either self-managed clusters or Hosted Elastic Cloud it is the default choice.
While you should configure custom users with the right set of permissions, this article skips over authorization and sticks to the default superuser elastic.
The simplest form of authentication is curl -XGET -u elastic:... "https://localhost:9200/", where you need to replace ... with the password for the elastic user. The side effect of running this command is that your password becomes part of your shell's history. If you want to avoid that, you can either:
- Run curl -XGET -u elastic "https://localhost:9200/" and enter the password when prompted for it.
- Add a leading space to the command, i.e. a space before curl, which excludes it from the history (note that this depends on your shell's history settings, e.g. HISTCONTROL=ignorespace in Bash).
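A third option is to keep the password in a shell variable, so it never appears on the command line at all. A minimal sketch, assuming a Bash-like shell (read -s is not POSIX) and an ES_PASSWORD variable name of our own choosing:

```shell
# Prompt for the password without echoing it, then pass it to cURL.
# The variable only lives in the current shell session.
printf 'Password for elastic: '
read -rs ES_PASSWORD
curl -u "elastic:${ES_PASSWORD}" "https://localhost:9200/"
```

This keeps the password out of your history, though it may still be briefly visible to other local users in the process list while the request runs.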
API Key
If you are using a Serverless project or have configured an API key, you can use it with curl -XGET -H "Authorization: ApiKey ..." "https://localhost:9200/", where you need to replace the ... with your API key. Be sure to have a single space between ApiKey and the key: either none or more than one will fail to authenticate you.
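To avoid pasting the key into every command, you can keep it in a variable; a sketch with an assumed ES_API_KEY variable name (the value below is a placeholder):

```shell
# Store the key once, then build the Authorization header from the
# variable. Note the single space after "ApiKey".
ES_API_KEY="your-api-key-here"
curl -XGET -H "Authorization: ApiKey ${ES_API_KEY}" "https://localhost:9200/"
```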
Another point that often adds confusion is that an Elastic Cloud account also has an API key for managing clusters (for example, when configuring them through Terraform). Don't mix up this management API key with the key used to access the data on an individual cluster, even though both are API keys.
cURL Parameters
If you want to dive deeper into all the available options, man curl is the best place, though it is very extensive. For a few quick examples, tldr (either on the website or as a local installation) is a great starting point with tldr curl.
We have already looked at the -k parameter in the TLS section.
Another handy and much newer option is --json (available since cURL 7.82.0). By default you need a couple of parameters when including a JSON body in the request:
curl -XGET -u elastic -H "Content-Type: application/json" -d '{"query":{"match_all":{}}}' "https://localhost:9200/_all/_search"
Remembering or typing -H "Content-Type: application/json" is not great. But you don't have to any more, since --json performs the same request:
curl -XGET -u elastic --json '{"query":{"match_all":{}}}' "https://localhost:9200/_all/_search"
Elasticsearch Parameters
At the same time, Elasticsearch also has a couple of helpful parameters that will make your shell life a lot easier, like adding ?pretty to your request to get pretty-printed output back. Or adding ?v (for verbose) to your _cat queries to include the column names. Turning
curl -XGET -u elastic "https://localhost:9200/_cat/nodes"
172.20.138.162 54 90 1 1.77 1.76 1.61 mv - tiebreaker-0000000003
172.20.140.25 75 100 0 0.85 1.18 1.39 himrst - instance-0000000001
172.20.143.14 12 48 0 0.71 0.85 1.07 lr - instance-0000000002
172.20.139.42 53 100 0 2.09 1.84 1.83 himrst * instance-0000000000
into
curl -XGET -u elastic "https://localhost:9200/_cat/nodes?v"
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.20.138.162 36 90 0 2.33 2.41 1.93 mv - tiebreaker-0000000003
172.20.140.25 82 100 0 2.20 1.77 1.59 himrst - instance-0000000001
172.20.143.14 16 48 0 0.59 0.81 1.01 lr - instance-0000000002
172.20.139.42 71 99 0 2.27 1.70 1.75 himrst * instance-0000000000
More Tools
Once the requests are working, the real power of the shell can start to shine: Combining multiple commands.
If your response is JSON (so excluding the _cat API from before), jq is probably the most useful tool. For example, looking at the long output of:
curl -XGET -u elastic "https://localhost:9200/_nodes/stats?pretty"
{
"_nodes" : {
"total" : 4,
"successful" : 4,
"failed" : 0
},
"cluster_name" : "21b3293a6efe45d289bed311a2213320",
"nodes" : {
...
If we are only interested in the total and successful nodes, renaming the latter to success in the process as well: jq gives us all the features needed (though it can be a monster to work with; be sure to consult the documentation, or ChatGPT, if in doubt):
curl -XGET --silent -u elastic "https://localhost:9200/_nodes/stats" | jq "{total: ._nodes.total, success: ._nodes.successful}"
{
"total": 4,
"success": 4
}
And if you are working with the _cat APIs, then awk is worth another look; or maybe sed, cut,... depending on task and preference. Starting with a list of indices:
curl -XGET --silent -u elastic "https://localhost:9200/_cat/indices"
green open starwars 4auEgBXQTHSrxlshmdCgHg 1 1 2 0 19.6kb 9.8kb 9.8kb
green open semantic-starwars X2Q2XhxQR9CUWiMbnmUgUg 1 1 4 0 67.5kb 33.7kb 33.7kb
If you only want to extract the index name and then sort it alphabetically:
curl -XGET --silent -u elastic "https://localhost:9200/_cat/indices" | awk '{ print $3 }' | sort
semantic-starwars
starwars
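awk can do more than pick columns. As a sketch of a small aggregation, here is a count of indices per health status; the sample lines below are made up, and in practice you would pipe in the live curl output instead of the heredoc:

```shell
# Count how many indices are in each health state (first column) and
# print the totals sorted by state name.
cat <<'EOF' | awk '{ count[$1]++ } END { for (state in count) print state, count[state] }' | sort
green open starwars 4auEgBXQTHSrxlshmdCgHg 1 1 2 0 19.6kb 9.8kb 9.8kb
yellow open logs-demo aBcDeFgHiJkLmNoPqRsTuV 1 1 0 0 230b 230b 230b
green open semantic-starwars X2Q2XhxQR9CUWiMbnmUgUg 1 1 4 0 67.5kb 33.7kb 33.7kb
EOF
```

For the sample input this prints green 2 and yellow 1.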
Conclusion
This was just a teaser to get you started. Once you've got the basics working, the possibilities for debugging and automation are almost limitless. Go forth to Shell!