Kibana Maps features not displaying (failed net::ERR_CONTENT_DECODING_FAILED)

Hi,

All layers on my Kibana maps are reporting "No result found." with the minusInCircle icon from EUI (Elastic UI).

When I use the term "features", it is in reference to the Elastic Guide on Kibana > Maps > Troubleshoot Maps, in the section "Features are not displayed", as it best describes my issue. (Features = points on a map; correct me if I'm wrong.)

Some information is redacted, as the domains and data involved (including index names) might be sensitive.

Here's what I've already double-checked:

  • [Mapping] The mapping for the field containing the location is geo_point. I use dynamic mapping to map fields whose name is location to the type geo_point. In the index's mapping, the location field is "type": "geo_point". The field consists of a JSON object containing two values, lat and lon, and it's shown as POINT (X.XXXXXXX X.XXXXXX) in Kibana Discover (X's are placeholders).
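
    For reference, the dynamic mapping rule is roughly equivalent to this dynamic template (the index name is a placeholder and the template name is arbitrary):

    PUT [MY_INDEX]
    {
        "mappings": {
            "dynamic_templates": [
                {
                    "locations_as_geo_point": {
                        "match": "location",
                        "mapping": {
                            "type": "geo_point"
                        }
                    }
                }
            ]
        }
    }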

  • [Filters] No filter is applied in the global search bar and the Data View (formerly called Index Pattern) is not using the time filter (there's no Timestamp field). Just in case, I've extended the time filter to huge values, with no more success than before.

  • [Inspector] The Inspect button right next to the Map settings button shows no requests. The following message is displayed: "No requests logged", followed by "The element hasn't logged any requests (yet). This usually means that there was no need to fetch any data or that the element has not yet started fetching data."

  • [Inverted lat/lon] I've looked through "the entire world" for misplaced points (inverted lat/lon coordinates); as you can imagine, there were none.

  • [Data exists?] I've checked whether I have data with geo points, and yes. In Kibana Discover, with my Data View in Field statistics (Beta) mode, I can see that 96.22% of my documents have a location field. I can even see a little map on the right side with all my data correctly placed. But when I click the Visualize button on my location field, I'm redirected to a Kibana Map with the same errors described in the second paragraph of this post.

  • [Simple sample data] I've created simple sample data by hand and (unfortunately) successfully reproduced the behavior. This is how I created a sample geo_point :

    PUT kibana-map-test
    {
        "mappings": {
            "properties": {
                "location": {
                    "type": "geo_point"
                }
            }
        }
    }
    
    PUT kibana-map-test/_doc/1
    {
        "location": {
            "lat": 48.858370,
            "lon": 2.294481
        }
    }
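
    And to confirm the sample document is indexed and searchable:

    GET kibana-map-test/_search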
    
  • [Kibana sample data] I've added the Kibana sample web logs data and the [Logs] Total Requests and Bytes map works. However, in my browser's DevTools I don't see the same requests to the same URL endpoint for the sample data as when the data is sourced from an index. I've tried adding a layer with data from my own index onto that working sample map, but it behaves the same as before.

  • [Kibana Logs] There are no particular logs emitted by Kibana.

Here's what I can tell you:

  • [Environment] My entire Elastic Stack (which here consists of Elasticsearch, Kibana, and Logstash) is on the latest version as I'm writing this (8.1.2) and runs on-premises on a fully up-to-date Debian 11.3 machine.

  • [Certs] Kibana and Elasticsearch both have x509 https certs issued by my organization. My organization's CA cert and both the Kibana and Elasticsearch certs are in a Java truststore, which is provided in the configuration file of both. (I would like to emphasize that besides Kibana Maps, everything is working as expected, with no strange behavior to notice.)

  • [Reverse proxy] Nginx is placed in front of Kibana as a reverse proxy, mainly (and solely) for TCP 443 to TCP 5601 forwarding. Here's the part of my nginx configuration related to Kibana:

    location / {
        proxy_pass https://kibana.[REDACTED]:5601;
        proxy_set_header Host $host;
        proxy_buffering on;
        proxy_buffers 8 256k;
        proxy_buffer_size 256k;
    }
    

    ([REDACTED] is a placeholder and isn't part of the original configuration file)

    The rest of the configuration file is standard nginx configuration (ssl, server_name, etc.).

  • [Kibana compression] Kibana compression (the server.compression.* settings in kibana.yml) is not configured, so the default values apply. I tried to disable data compression by setting server.compression.enabled to false: it didn't help.
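
    In kibana.yml, that test was simply:

    server:
        compression:
            enabled: false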

  • [Browser compatibility] I used all kinds of browsers to test the described behavior (outdated ones, the most up-to-date ones, Firefox, Edge Chromium, Chromium) with no variation.

  • [Observed network activity] In the DevTools, I can see requests to Kibana occurring when adding my Data View to a new layer in a map. Those requests' URLs are "https://kibana.[REDACTED]/s/[MY_SPACE_NAME]/api/maps/mvt/getTile/[x]/[x]/[x].pbf?[SOME_QUERY_PARAMETERS]". The HTTP status is "200" but the browser displays them as (failed) net::ERR_CONTENT_DECODING_FAILED.

    • Those requests have a Content-Encoding response header set to gzip. The Accept-Encoding request header should be gzip, deflate, br like for all the other requests, but I can't confirm it, as only "Provisional headers are shown".

    • Content-Type in response headers is set to application/x-protobuf.
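
    To double-check what's actually on the wire outside the browser, something along these lines should work (just a sketch; the credentials and the exact tile URL, copied from DevTools, have to be filled in):

    # fetch one tile, keeping the body exactly as the server sends it
    curl -sk -u [USER]:[PASSWORD] -H "Accept-Encoding: gzip" -D - -o tile.pbf.gz "https://kibana.[REDACTED]/s/[MY_SPACE_NAME]/api/maps/mvt/getTile/[x]/[x]/[x].pbf?[SOME_QUERY_PARAMETERS]"
    # if the body really is gzip (as Content-Encoding claims), this succeeds; otherwise it fails just like the browser does
    gunzip -t tile.pbf.gz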

  • [Fit to data bounds] The Fit to data bounds button correctly centers the map view on what it should be if the data points were visible.

Configuration file

Here's my kibana.yml config file ([SOMETHING] values are placeholders):

server:
    publicBaseUrl: "https://kibana.[REDACTED]"
    name: "kibana.[REDACTED]"
    host: kibana.[REDACTED]
    port: 5601
    ssl:
        enabled: true
        certificate: /etc/kibana/ssl/kibana.[REDACTED].cert
        key: /etc/kibana/ssl/kibana.[REDACTED].key

elasticsearch:
    hosts:
        - https://elasticsearch.[REDACTED]:9200
    username: kibana_system
    password: [KIBANA_SYSTEM_PASSWORD]
    ssl:
        truststore:
            path: [TRUSTSTORE_PATH]
            password: "[TRUSTSTORE_PASSWORD]"
        verificationMode: full

xpack:
    encryptedSavedObjects:
        encryptionKey: "[MY_ENCRYPTION_KEY]"

    fleet:
        agents:
            enabled: true
        registryProxyUrl: "[MY_ORGANIZATION_PROXY]"


    reporting:
        roles:
            enabled: false

    security:
        loginAssistanceMessage: "[SOME_HELP_MESSAGE]"

        authc.providers:
            basic.basic1:
                order: 0
                icon: "logoElasticsearch"
                description: "Se connecter avec des identifiants Elasticsearch"
                hint: "Accès via la base d'utilisateurs interne à Elasticsearch"
            anonymous.anonymous1:
                order: 1
                description: "Se connecter anonymement"
                hint: "[SOME_HELP_MESSAGE]"
                icon: "logoElasticStack"
                credentials: "elasticsearch_anonymous_user"

logging:
    appenders:
        file:
            type: file
            fileName: /var/log/kibana/kibana.log
            layout:
                type: pattern
    root:
        appenders: [default, file]

I've tried a lot of things, server-side and client-side, at different levels of the OSI model, with no results.
It gets even stranger: I've made no configuration change whatsoever between when Kibana Maps worked and now.
I completely purged and reinstalled Kibana on my server (keeping only kibana.yml and the ssl certs/keys, and deleting /var/lib/kibana) with no success.

I'm available to answer any questions or provide any additional information you might need to solve this.

Hi @quentin.renoux Welcome to the community. Thanks for the detailed post.

I just tried this and initially got the same result, and fixed it by changing the layer's Scaling setting.
It was initially set to vector tiles... and did not show up, not sure why.

@quentin.renoux Hmmm I can not seem to get the vector tiles to work at all (I tried a shape too... no luck)... I will ask some internal questions.

Thanks @stephenb,

I'm a bit flabbergasted but this simple solution (set the Scaling setting to anything other than Use vector tiles) fixed my problem.

However, I'm clueless about why it worked.

I never had to use this parameter in the past, and my dataset contains fewer than 10,000 documents.

I'm really interested in why and how that works.

Thanks a lot.

@quentin.renoux It's looking like it is a bug on self-managed... we reproduced it internally... stay tuned... it's not you :wink:

@quentin.renoux

I'm a bit flabbergasted but this simple solution (set the Scaling setting to anything other than Use vector tiles) fixed my problem.

8.1 switched the default scaling from 'limit results' to 'vector tiles'. There is a large implementation difference between the two. 'limit results' uses the Elasticsearch _search API and returns results as a JSON-formatted string. 'vector tiles' uses the Elasticsearch _mvt API and returns results as a binary Protocol Buffer containing vector tiles.
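
For example, against the kibana-map-test index from earlier in this thread, the two modes boil down to something like this (Kibana adds its own query parameters and request body on top):

# 'limit results': standard search API, JSON response
GET kibana-map-test/_search

# 'vector tiles': vector tile search API, binary .pbf response (zoom/x/y are tile coordinates)
GET kibana-map-test/_mvt/location/0/0/0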

@quentin.renoux

We think we have a workaround / temporary fix.

Can you try setting this in elasticsearch.yml

http.compression: true

NOTE: There could be some security concerns with that setting.

Follow here: [Maps] Vector tiles from Elasticsearch are not loading in on-prem instances · Issue #130291 · elastic/kibana · GitHub

Yes, setting http.compression to true in elasticsearch.yml makes the vector tiles work again.

