Different responses between Kibana and REST

(Heikki Laaksonen) #1

I am trying to use the query defined below to filter out data I do not need and select only documents that contain specific fields. The problem is that when I run the example below from Kibana Dev Tools / Console, I get a different number of hits than when I run the same query with Python elasticsearch==5.5.1 or directly from a web browser. The search is run against only one index, and this should not be related to the time when the queries are made.

The response from Kibana Dev Tools is correct (from my point of view). But the Python elasticsearch library and the Chrome web browser seem to behave as if they do not obey the exists query, or interpret it as OR, and return hits that do not contain the required fields.

This is with ELK stack 5.6.5 in a single-node configuration. There was a comment that this could be related to a default null value, but I believe this is not the case (I am not sure). The index template is at the end of the post.

The different behaviors do not make sense, so I must be missing something, but what?

# Kibana Dev Tools / Console
GET _search
{
  "query": {
    "bool": {
      "must": [
        { "exists": { "field": "field1" } },
        { "exists": { "field": "field2" } },
        { "term": { "fields.env.keyword": "env" } },
        { "wildcard": { "fields.version.keyword": "version" } }
      ]
    }
  }
}

# Generates a response with two hits, which is what I would expect
{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 14,
    "successful": 14,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 3.0246916,
    "hits": [ ... ]
  }
}

But running the same query with Python or plain HTTP, the response contains hits that do not have field1 and field2, like below:

{
  'took': 1,
  'timed_out': False,
  '_shards': {
    'total': 1,
    'successful': 1,
    'skipped': 0,
    'failed': 0
  },
  'hits': {
    'total': 60,
    'max_score': 1.0,
    'hits': [
      {
        '_index': 'index',
        '_type': 'type',
        '_id': 'AWBJgWhVlxkJMdeISVic',
        '_score': 1.0,
        '_source': {
          'fields.version': 'version',
          'fields.env': 'env',
          'fields.module': 'module',
          'fields.time': '2017-11-21T14:00:30',
          'fields': {
            'class': 'class',
            'codec': 'json',
            'index': 'index'
          },
          'fields.type': 'type'
        }
      },
      ...
    ]
  }
}
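For reference, the Python side would presumably have looked roughly like the sketch below (the host and the commented-out search call are assumptions; only the query body is taken from the Kibana example above). Building the body as a plain dict makes it easy to check the structure before sending it:

```python
# A sketch of how the same query might be issued with elasticsearch==5.5.1.
# The query body is the one from the Kibana example; the host is hypothetical.
query_body = {
    "query": {
        "bool": {
            "must": [
                {"exists": {"field": "field1"}},
                {"exists": {"field": "field2"}},
                {"term": {"fields.env.keyword": "env"}},
                {"wildcard": {"fields.version.keyword": "version"}},
            ]
        }
    }
}

# Against a live cluster this would be something like:
# from elasticsearch import Elasticsearch
# es = Elasticsearch(['localhost:9200'])
# response = es.search(body=query_body)
# print(response['hits']['total'])

# All four clauses sit inside bool/must, so every hit has to satisfy
# each of them (AND semantics) - no clause is optional.
assert len(query_body["query"]["bool"]["must"]) == 4
```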

The template is like below

"template": {
    "aliases": {},
    "mappings": {
        "_default_": {
            "numeric_detection": true
        }
    },
    "order": 0,
    "settings": {
        "index": {
            "mapping": {
                "total_fields": {
                    "limit": "10000"
                }
            },
            "number_of_replicas": "0",
            "number_of_shards": "1",
            "refresh_interval": "2s"
        }
    }
}

(Heikki Laaksonen) #2

This was solved. To the best of my current understanding, the problem was incorrect usage of the Python Elasticsearch library combined with an incorrectly configured nginx reverse proxy. This caused the Elasticsearch REST requests from Python to be routed to Kibana. Kibana actually responds, but it does not process the query body (or the body was not received correctly through nginx).

For me, an nginx location block like the one below worked for Elasticsearch; the '$is_args$args' part was needed to pass query parameters such as the scroll_id.

location ~ /elastic(/|$)(.*) {
    set $elasticsearch_servers elasticsearch;
    proxy_pass http://$elasticsearch_servers:9200/$2$is_args$args;
}
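To illustrate what that location block does, here is a small sketch that emulates the rewrite with Python's re module (illustration only; the paths are made up). nginx matches the regex against the path without the query string, the second capture group drops the /elastic prefix, and $is_args$args re-appends the query string:

```python
import re

# Emulates the rewrite performed by the nginx location above
# (illustration only, not how nginx is implemented internally).
pattern = re.compile(r'/elastic(/|$)(.*)')

def rewrite(path, query_string=''):
    m = pattern.match(path)
    if not m:
        return None  # would fall through to another location block
    is_args = '?' if query_string else ''  # nginx's $is_args
    return 'http://elasticsearch:9200/' + m.group(2) + is_args + query_string

# The /elastic prefix is stripped and the query parameters survive:
assert rewrite('/elastic/_search', 'scroll=1m') == \
    'http://elasticsearch:9200/_search?scroll=1m'
assert rewrite('/elastic/index/type/_search') == \
    'http://elasticsearch:9200/index/type/_search'
```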

The second issue was incorrect usage of the Elasticsearch Python library, where the url_prefix parameter was not used correctly. I had tried to put the prefix into the host setting, like 'Elasticsearch([''])', which led to the default location in nginx, which was Kibana.

>>> from elasticsearch import Elasticsearch
>>> elastic = Elasticsearch([''], url_prefix='elastic')
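The effect of url_prefix is simply that the client prepends it to every request path, so the calls land in the /elastic location instead of nginx's default one. A rough illustration (the proxy host name here is hypothetical):

```python
# Rough illustration of what url_prefix changes (hypothetical host name).
host = 'http://proxy.example.com'
url_prefix = 'elastic'

# Without url_prefix the client would request e.g. /_search, which hits
# nginx's default location (Kibana in this setup). With url_prefix the
# same call becomes /elastic/_search and matches the Elasticsearch block.
path = '/%s/_search' % url_prefix
assert host + path == 'http://proxy.example.com/elastic/_search'
```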

(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.