Dec 20th, 2024: [EN] Using external map data and ES|QL in Kibana maps

Intro

Today I want to talk about Kibana Maps, as it’s sometimes overlooked as a visualization. In the context of dashboards we seem to be wired to think of graphs, numbers and tables. Other reasons the Map visualization doesn’t get used are that it might seem daunting to configure, or that it’s assumed the right data isn’t available. By providing examples we hope to give you ideas, lower the bar and encourage more use of it.

In addition to Maps being awesome, we also want to call out publicly available map services and datasets. We’ll demonstrate how to include them and hopefully inspire you to seek out datasets and sources yourself to further enrich your experience and provide new insights into the data.

Navigating to Maps

First we need to get to the Maps application. Navigating to it is a little different in the current latest version (8.17). In prior versions it can be found under:

Main menu -> Analytics -> Maps.

In 8.17 it depends on the solution:

  • Search:
    • Main navigation -> Other tools -> Maps
  • Observability:
    • Main navigation -> Other tools -> Maps
  • Security:
    • Main navigation -> Stack Management -> (Kibana section) Maps

Maps can also be created from dashboards:

Dashboards -> Create Dashboard (right top) -> Add panel -> Maps

Layers

After creating a map you are given a single layer called “Basemap”, which already contains some useful information. At the starting zoom level you’ll see country outlines and names, and perhaps even regions or city names. As you zoom in, however, more detail is added, all the way down to buildings and streets. Balancing the space on screen against the amount of detail displayed is an important way to keep things manageable. By showing or hiding layers at specific zoom levels you can recreate this experience for your own layers. To limit the scope of this post, however, we won't configure zoom levels.

Publicly available data sources

The Basemap is based on data from OpenStreetMap and OpenMapTiles, provided by the Elastic Maps Service. Notice the attribution in the bottom right corner. We’ll show how to set this for your own layers, which can be a requirement for the use of some public sources.

Attribution can be found in the bottom right corner

There is a lot of publicly available geo data, with varying levels of quality and reliability. Data published by the public sector tends to score well on both counts. Please observe any requirements for its use, such as attribution, be aware of rate limiting, and consider deploying a caching proxy when a dashboard is shared with a (larger) group of users.

Kibana can interact with these datasets in various ways; here we'll demonstrate the use of Web Map Service (WMS) as well as GeoJSON, CSV and NDJSON files. These also tend to be the keywords to use when looking for public geo datasets.

“I’m dreaming of a white Christmas” (Web Map Service)

Every year people wonder: is it going to snow on Christmas Day this year? Well, skip the weather report and let's fire up Kibana!

Create a new map, and click the colored 'add layer' button on the right:

Then select the WMS layer (you might need to scroll down):

Here we can then enter a WMS service URL. We'll use the following URL:

https://mapservices.weather.noaa.gov/vector/services/precip/wpc_prob_winter_precip/MapServer/WMSServer?request=GetCapabilities&service=WMS

This was retrieved from https://www.weather.gov/gis/cloudgiswebservices under Current Weather and Forecast Products and Services -> Probabilistic Winter Precipitation from the Weather Prediction Center. If you're ok with just walking in the snow, there's a different service that can be found there as well.

After we enter the URL, we can click on the Load capabilities button.

This should reveal the layers and styles dropdowns that Kibana loaded from the URL. Let's pick all "Day 1" layers and styles:

Then click Add and continue.

We can then name the layer and add the attribution:

Great, we should now have a forecast on the map and we can save the map to recall it later.

If you, like most of us, tried to add all the layers and styles, you might notice that it becomes a bit of a tangled mess. To combat this, and to be able to toggle each day individually, we recommend creating two additional layers instead and reducing the opacity a bit more for each later day:

At the time of writing, the forecast is that North Dakota and Maine will receive snow in two or three days.

Note: beyond styles - of which there is usually only one - WMS services don't allow much customization of how the data is displayed. Opacity can be useful to tone down layers. Additionally, WMS layers provide a snapshot in time, which makes it hard to interpret historical data. Finally, WMS layers don't allow for interaction with other datasets; we'll cover an alternative in the next section.

"Let it snow, Let it snow, Let it snow" (Loading geojson)

Up until now we have avoided loading data into Elasticsearch, as depending on your setup users might not be allowed to index data. Next we'll show how to load GeoJSON data, which will give us new options.

To grab the data we'll use a Web Feature Service (WFS). The dataset we used previously is also exposed through WFS:

https://mapservices.weather.noaa.gov/vector/services/precip/wpc_prob_winter_precip/MapServer/WFSServer?request=GetCapabilities&service=WFS

The GetCapabilities request type doesn't return geo data directly; instead it describes what the service exposes. To get the actual data we use a GetFeature request:

https://mapservices.weather.noaa.gov/vector/services/precip/wpc_prob_winter_precip/MapServer/WFSServer?request=GetFeature&service=WFS&outputFormat=GEOJSON&typeNames=wpc_prob_winter_precip:Day_1_probability_of_at_least_4_inches_of_snow&srsName=EPSG:4326

..or as a hyperlink to click on..
(for the link, the outputFormat is changed to GEOJSON+ZIP; this way we save some bandwidth and don't end up navigating away from this page)
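
If you'd rather copy the URL than click a link, it is the same as above with only the outputFormat swapped to GEOJSON+ZIP; note that the + most likely needs to be URL-encoded as %2B:

https://mapservices.weather.noaa.gov/vector/services/precip/wpc_prob_winter_precip/MapServer/WFSServer?request=GetFeature&service=WFS&outputFormat=GEOJSON%2BZIP&typeNames=wpc_prob_winter_precip:Day_1_probability_of_at_least_4_inches_of_snow&srsName=EPSG:4326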

You can skip the next bit if you just want to follow the examples, but if you want to work with other sources yourself the following section can be useful (to skip, jump ahead to the "skip until here" marker).


Let us have a look at the parameters used in the request:

  • service: WFS
  • request: GetFeature
  • outputFormat: GEOJSON
  • typeNames: wpc_prob_winter_precip:Day_1_probability_of_at_least_4_inches_of_snow
  • srsName: EPSG:4326

The service parameter should always be WFS (as in the GetCapabilities URL). If you're taking the GetCapabilities URL as a starting point, change the request parameter from GetCapabilities to GetFeature.

The outputFormat may be GEOJSON, application/geo+json, application/json; subtype=geojson or even just json or application/json. To get the right value, inspect the XML returned by the GetCapabilities URL and look for the <ows:Operation name="GetFeature"> element, which should contain an <ows:Parameter name="outputFormat"> element whose <ows:AllowedValues> lists the values the service understands. Make sure you URL-encode the parameter value, so application/json; subtype=geojson becomes application%2Fjson%3B%20subtype%3Dgeojson.

The typeNames options can, like the outputFormat, be found in the GetCapabilities response. Look for <wfs:FeatureTypeList> and take the Name element under each FeatureType element.

Last is the srsName. First, let us show you what might happen if it's incorrect:

We see polygons off the coast of Antarctica, which is unexpected when looking for a US weather forecast. This parameter specifies the Coordinate Reference System. Kibana always expects this to be EPSG:4326, which is the EPSG code for the commonly used WGS84.


skip until here

After downloading the file, unzip it if needed, then create a map again and click Add layer. This time we select Upload file:

We can then select the file and should see the polygons appear over the US:

Note: the preview only loads features up to a certain amount. If some features appear to be missing, it could just be that they weren't loaded into the preview; they will appear once you click Import file.

We can proceed to click Import file. The file will be uploaded and the data will be indexed into Elasticsearch. Kibana will show an overview of the operations:
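
Optionally, we can sanity-check what was indexed in Dev Tools; the index name below is hypothetical - use whatever name you entered or accepted during the import:

# hypothetical index name - replace with the one chosen during import
GET day-1-snow-probability/_mapping

For polygon data like this, the geometry field should come back mapped as geo_shape, which is what the document layer renders.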

We can proceed by clicking Add as document layer:

We can name the layer as before but now that Kibana is rendering the polygons we have control over how they look:

Additionally we can now interact with the polygons as users, for example by adding tooltips, but also with the data itself; we'll show an example of this in the next section. The difference with the WMS layer is that by exporting the data and importing it into Elasticsearch we have captured a moment in time. If we automate the process we can build up a history and recall any day.

Note: it is beyond the scope of this post, but as a practical example we could compare revenue on days with precipitation against days without, which might be useful for weather-dependent activities.

"Letters to Santa" (EMS Boundries)

For the next section we'll need to upload an NDJSON file using Kibana.

Note: for 8.17 and beyond, file upload is found under Management -> Integrations (bottom right); for the Search solution it's currently found on the 'Overview' page.

Please upload and index: letters_to_santa_10000.ndjson
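
For reference, each line in the file is a single JSON document. A made-up but representative line might look roughly like the one below; the only fields we rely on later are Address.ProvinceCode and Location (a geo point), and the actual file may have more fields or a slightly different shape:

{"Address": {"ProvinceCode": "CA-ON"}, "Location": {"lat": 43.7, "lon": -79.4}}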

In the index you'll find fictional letters to Santa from Canadians. Santa would like to see which provinces the most letters come from. We're going to draw this on a map using the 'EMS Boundaries' layer. Let us create a new map and click Add layer:

After that, we can select 'Canada Provinces' and click Add and continue. We'll see that the Canadian provinces have polygons drawn on them:

The data is provided by maps.elastic.co. To fulfill Santa's request, we have to join the data. Under 'Joins' click Add term join. 'Join with -- configure term join --' will be displayed. Click on it:

  • For 'Left field' select 'ISO 3166-2 code'.
  • For 'Right source' select the index that was created when you uploaded the file; in our case 'letters-to-santa'.
  • For 'Right field' select 'Address.ProvinceCode.keyword'.

The data is now joined and we can see there's a count when we hover over the provinces. Under 'Tooltip fields' you can add 'name (en)'

Hovering over each province is cumbersome and doesn't allow interpretation at a glance. So let's change the color of each province based on the count. Under 'Layer style' change the 'Fill color' from solid to 'By value', then change 'select a field' to 'count of letters-to-santa'.

Then as a last change, we'll configure 'Label' in the same section. Change 'Fixed' to 'By value' and select 'name (en)':

We can now click Keep changes.

"Where's Santa's workshop" (ES|QL)

Let's start simple by adding Santa's workshop as a point on the map. To do this, we'll make use of the recently added 'ES|QL' layer. Click Add layer and pick ES|QL:

This will then show the following input, which expects an ES|QL query returning at least one geo data field:

In this section we can paste the following ES|QL:

ROW point = TO_GEOPOINT("POINT (0 84.8)"), label = "Santa's workshop"

Here we're defining a ROW and not doing any further processing, simply returning the row. The row has two columns - or fields - point and label. The "POINT (0 84.8)" part is a string containing WKT (Well-Known Text). To interpret the string as geo data we pass it as an argument to the TO_GEOPOINT function.

Note: the second number isn't 90 (which would put it at the true North Pole) for display reasons.

We can now run the query, disable the remaining settings, and click 'add and continue >':

We set:

  • The name of the layer.
  • Icon to “Building 2”.
  • Label to 'by value' and for the value we use 'label'.
  • Label position to 'Bottom'.

See also the image below, with settings marked in red:

This should result in the following image:

Alternatively we could have indexed the data about Santa's workshop as a document in Elasticsearch and then used the ES|QL FROM command. This would also have allowed us to display multiple points. Still, showing a single location with a label without having to index a document into Elasticsearch can be useful to add some metadata in a pinch.
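
A minimal sketch of that alternative could look like the following; the index name santa-locations and its field names are illustrative assumptions, not something the post's dataset provides:

# hypothetical index, just to illustrate the FROM alternative
PUT santa-locations
{
  "mappings": {
    "properties": {
      "point": { "type": "geo_point" },
      "label": { "type": "keyword" }
    }
  }
}

PUT santa-locations/_doc/1?refresh=true
{
  "point": "POINT (0 84.8)",
  "label": "Santa's workshop"
}

The ES|QL layer query then simply reads from the index:

FROM santa-locations
| KEEP point, label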

And perhaps in the future the ROW command will be extended to allow specifying multiple rows. As a workaround, in the current state we can simply add a layer per entity and use a 'Layer group' layer to group and hide these layers, as sketched below.
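
For instance, a second layer for another, entirely made-up, location could use its own ROW query, and both layers could then be collected into one layer group:

ROW point = TO_GEOPOINT("POINT (5 84.6)"), label = "Reindeer stables"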

Note: we converted the geo point from a string. This means that with ES|QL we can work with data that wasn't indexed as a geo_point or geo_shape, which can be useful if we don't have control over the indices and ingest. There is also a drawback: we can't use the field efficiently to filter out objects that are off-screen when moving around on the map and refreshing the layer.
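
As a hedged sketch of what that can look like - the index checkins and the field location_wkt are hypothetical stand-ins for any keyword field holding WKT text:

// 'checkins' and 'location_wkt' are made-up names; the field holds strings such as "POINT (4.89 52.37)"
FROM checkins
| EVAL location = TO_GEOPOINT(location_wkt)
| KEEP location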

"More letters to Santa" (ES|QL)

We can recreate the "Letters to Santa" map using an ES|QL layer as well. We do, however, need to download the Canadian provinces dataset ourselves and upload it.

Step 1 - Download Elastic Maps Service boundaries:

  1. Navigate to the Elastic Maps Service landing page.
  2. In the left sidebar, select an administrative boundary. (In our case: 'Canadian Provinces')
  3. Click Download GeoJSON button.

Step 2 - Index Elastic Maps Service boundaries (see 'Let it snow, Let it snow, Let it snow')

  1. Open Maps.
  2. Click Add layer, then select Upload GeoJSON.
  3. Upload the GeoJSON file and click Import file.

We also need an enrich policy to look up the geometry based on the province ISO code. Go to Dev Tools, then enter and execute the following:

PUT /_enrich/policy/ca_province_lookup_geometry
{
  "match": {
    "indices": "canada_provinces_v1",
    "match_field": "iso_3166_2",
    "enrich_fields": ["iso_3166_2", "label_en", "label_fr", "geometry"]
  }
}

We expect the following response:

{
  "acknowledged": true
}

Then execute the policy:

POST /_enrich/policy/ca_province_lookup_geometry/_execute

We expect the following response:

{
  "status": {
    "phase": "COMPLETE"
  }
}
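
Optionally, you can fetch the policy back later to double-check its definition:

GET /_enrich/policy/ca_province_lookup_geometry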

As in "Where's Santa's workshop", let us create a new Map and add an 'ES|QL' layer. As a query will enter the following:

FROM letters-to-santa
| STATS count(*) BY Address.ProvinceCode
| ENRICH ca_province_lookup_geometry ON Address.ProvinceCode

Run the query, then click Add and continue.

Like with "Letters to santa" configure Fill color: 'By value' and 'count(*)'. We now have the same result:

But we did gain something: we now have more control over how the match is made. Say one index uses uppercase codes and the lookup index uses lowercase; we could use the string function TO_LOWER to still match, as in the sketch below. We could also use another 'mapping' index with an enrich policy for more flexibility.
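
A purely illustrative sketch of that normalization follows; with the EMS download the codes already match, so you wouldn't actually need the TO_LOWER here:

FROM letters-to-santa
| EVAL province_code = TO_LOWER(Address.ProvinceCode)
| STATS count(*) BY province_code
| ENRICH ca_province_lookup_geometry ON province_code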

But what if we don't have an ISO code for Canadian provinces - we usually don't - or even province names? ES|QL actually enables an option we didn't have before: aggregating points by polygon.

To demonstrate, we must create another enrich policy; this time we'll use geo_match:

PUT /_enrich/policy/ca_province_lookup
{
  "geo_match": {
    "indices": "canada_provinces_v1",
    "match_field": "geometry",
    "enrich_fields": ["iso_3166_2", "label_en", "label_fr", "geometry"]
  }
}

And execute the policy:

POST /_enrich/policy/ca_province_lookup/_execute

If we now create a map with an ES|QL layer again, we can input the following query:

FROM letters-to-santa
| ENRICH ca_province_lookup ON Location
| STATS count(*) BY iso_3166_2, geometry

We get the same result without relying on the address:

Note: this requires a lookup for each letter before we can aggregate, which means this type of query takes longer the more documents we need to enrich. Accuracy should also be considered: locations close to a border might not be matched correctly. Specifically, the EMS datasets we are using are optimized for display purposes. You can zoom in on a coastline to get an idea.
