How to query Kibana for HTTP endpoint metrics

I am stuck on a hard problem. Logstash+Kibana are being used for logs, but these logs are also rich with metrics.

In particular, NGINX HTTP endpoint metrics.

I am having a difficult time crafting the right Python queries to extract the metrics I need.

For example, in Kibana I have the string
GET "/user/685934468/api/v1/tables/funds_raised_map_by_region_1

What I need is

  1. String must contain /user/
  2. String must contain /api/v1/tables/
  3. String must_not contain any known datasets in known_array = []

I need to first pull the right logs, but I am missing lots of valid entries. Here is my query string in the Kibana search box:

/user/ AND /api/v1/tables/ AND NOT (funds_raised_map_by_region total_solar_eclipse_2017_local_time OR shared_empty_dataset OR factories)

This returns the right data (near as I can tell) in the Kibana UI.

However, when I look at the underlying HTTP call Kibana sends and copy/paste that query into Python, this is what I send:

import elasticsearch

# Connect to the same Elasticsearch instance Kibana talks to.
es = elasticsearch.Elasticsearch('http://' + self.host_name + ':' + str(self.port_number))

# sQuery is the same Lucene query string entered in the Kibana search box.
results = es.search(
    index=elastic_index,
    body={
        'size': 100000,
        'query': {
            'filtered': {
                'query': {
                    'query_string': {
                        'default_field': 'message',
                        'query': sQuery,
                        'analyze_wildcard': 'true'
                    }
                }
            }
        }
    }
)

Here is the Elasticsearch request object that Kibana itself sends.

{
    "index": ["development-2018.02.13"],
    "ignore_unavailable": true
}{
    "size": 500,
    "sort": [{
            "@timestamp": {
                "order": "desc",
                "unmapped_type": "boolean"
            }
        }
    ],
    "query": {
        "filtered": {
            "query": {
                "query_string": {
                    "query": "/user/ AND /api/v1/tables/ AND NOT (funds_raised_map_by_region total_solar_eclipse_2017_local_time OR shared_empty_dataset OR factories)",
                    "analyze_wildcard": true
                }
            },
            "filter": {
                "bool": {
                    "must": [{
                            "range": {
                                "@timestamp": {
                                    "gte": 1518498000000,
                                    "lte": 1518584399999,
                                    "format": "epoch_millis"
                                }
                            }
                        }
                    ],
                    "must_not": []
                }
            }
        }
    },
    "highlight": {
        "pre_tags": ["@kibana-highlighted-field@"],
        "post_tags": ["@/kibana-highlighted-field@"],
        "fields": {
            "*": {}
        },
        "require_field_match": false,
        "fragment_size": 2147483647
    },
    "aggs": {
        "2": {
            "date_histogram": {
                "field": "@timestamp",
                "interval": "30m",
                "time_zone": "America/New_York",
                "min_doc_count": 0,
                "extended_bounds": {
                    "min": 1518498000000,
                    "max": 1518584399999
                }
            }
        }
    },
    "fields": ["*", "_source"],
    "script_fields": {},
    "fielddata_fields": ["@timestamp"]
}

When you're searching URLs it is important to know how they are analyzed by Elasticsearch. If you're not sure what analysis means in this context, I suggest you check out this general explainer and the docs for the specific analyzer you're using. Make sure that you're indexing the URLs in a way that lets you properly search on them. I think you're probably getting a bit lucky right now.
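
A quick way to check is to pull the mapping for your index and see how the message field is set up (whether it's analyzed, and whether it has a not_analyzed or keyword sub-field you could use for exact matching). Here is a minimal sketch with the same elasticsearch-py client you're already using; the host and index name are just placeholders taken from your example:

import json

import elasticsearch

# Placeholder connection details; substitute your own host and index.
es = elasticsearch.Elasticsearch('http://localhost:9200')

# Dump the mapping so you can see how the 'message' field is analyzed
# and whether it has a not_analyzed / keyword sub-field.
mapping = es.indices.get_mapping(index='development-2018.02.13')
print(json.dumps(mapping, indent=2))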


The query that Kibana sends to Elasticsearch uses the query_string type because Kibana doesn't understand the specifics of what you want to search, and just exposes the raw Lucene query string syntax to users for ease of use. When you're crafting a query for Elasticsearch it is generally better to use the JSON Query DSL, especially when you're building the query from a script. This is how you could express your exact query with the JSON DSL:

{
  "query": {
    "bool": {
      "must": [
        { "match": { "FIELD": "/user/ " } },
        { "match": { "FIELD": "/api/v1/tables/" } }
      ],
      "must_not": [
        { "match": { "FIELD": "funds_raised_map_by_region" } },
        { "match": { "FIELD": "total_solar_eclipse_2017_local_time" } },
        { "match": { "FIELD": "shared_empty_dataset" } },
        { "match": { "FIELD": "factories" } },
      ]
    }
  }
}

Using the match query type analyzes the search term using the analyzer of the target field, or the default search analyzer if the field doesn't have an explicit analyzer set.
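
Since you're already driving this from Python, you can pass that bool query straight to es.search as the request body instead of building a Lucene query string. A rough sketch, assuming the message field (your default_field) and the index from the request above; adjust both to your setup:

import elasticsearch

# Same client setup as in your script; host and index are placeholders here.
es = elasticsearch.Elasticsearch('http://localhost:9200')

known_datasets = [
    'funds_raised_map_by_region',
    'total_solar_eclipse_2017_local_time',
    'shared_empty_dataset',
    'factories',
]

query_body = {
    'size': 10000,
    'query': {
        'bool': {
            'must': [
                {'match': {'message': '/user/'}},
                {'match': {'message': '/api/v1/tables/'}},
            ],
            # One must_not clause per known dataset to exclude.
            'must_not': [{'match': {'message': d}} for d in known_datasets],
        }
    }
}

results = es.search(index='development-2018.02.13', body=query_body)
for hit in results['hits']['hits']:
    print(hit['_source']['message'])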


All that said, the only way I know to properly "comprehend" a URL is to parse it, and to do that you will probably need to use regular expressions.

If you update your Logstash config to perform the regular expressions at index time, you can add helpful fields like is_tables_api_request and tables_api_requested_dataset to the relevant log events, which would then be trivial to search on.
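
To be concrete about what that index-time extraction needs to do, here is the same logic sketched in Python rather than as an actual Logstash filter; the field names match the ones suggested above, and the regex is just an illustration you would adapt to your NGINX log format:

import re

# Sketch of the extraction a Logstash filter would perform at index time.
# The regex and field names are illustrative; adapt them to your log format.
TABLES_API_RE = re.compile(r'GET /user/([0-9]+)/api/v1/tables/([0-9A-Za-z_-]+)')

def enrich(event):
    match = TABLES_API_RE.search(event.get('message', ''))
    if match:
        event['is_tables_api_request'] = True
        event['tables_api_requested_dataset'] = match.group(2)
    else:
        event['is_tables_api_request'] = False
    return event

# Example with the request line from this thread:
sample = {'message': 'GET /user/111286253/api/v1/tables/major_flow_points_1 HTTP/1.1'}
print(enrich(sample))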

Using terms+bool queries in JSON DSL might look something like this:

{
  "query": {
    "bool": {
      "must": [
        { "term": { "is_tables_api_request": true } },
      ],
      "must_not": [
        {
          "terms": {
            "tables_api_requested_dataset": [
              "funds_raised_map_by_region",
              "total_solar_eclipse_2017_local_time",
              "shared_empty_dataset",
              "factories"
            ]
          }
        }
      ]
    }
  }
}

If you would rather not store these parsed values, or want to perform the regex at query time, you can perform a regular expression query on the URL if the URL or message is stored as a single term (meaning it's not analyzed, or of type "keyword"). You will pay a performance penalty at search time as each term must be evaluated against the regular expression, but if you're not looking to make long-term changes to your data this might be a better approach.

Using a regular expression at query time might look something like this:

{
  "query": {
    "regexp": {
      "FIELD.keyword": ".*GET /user/[0-9]+/api/v1/tables/(?!funds_raised_map_by_region|total_solar_eclipse_2017_local_time|shared_empty_dataset|factories)[a-z0-9_-]+.*"
    }
  }
}

A bit more about analysis:

Assuming you're not specifying a particular analyzer for your log message, it is probably using the standard analyzer, which converts GET "/user/685934468/api/v1/tables/funds_raised_map_by_region_1 to the terms get, user, 685934468, api, v1, tables, and funds_raised_map_by_region_1.
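
You can verify this yourself with the _analyze API. A minimal sketch with the Python client (the exact request format differs a little between Elasticsearch versions, so treat this as an illustration):

import elasticsearch

# Placeholder host; point this at your cluster.
es = elasticsearch.Elasticsearch('http://localhost:9200')

# Ask Elasticsearch how the standard analyzer tokenizes the URL string.
response = es.indices.analyze(body={
    'analyzer': 'standard',
    'text': 'GET "/user/685934468/api/v1/tables/funds_raised_map_by_region_1',
})
print([token['token'] for token in response['tokens']])
# Roughly: ['get', 'user', '685934468', 'api', 'v1', 'tables',
#           'funds_raised_map_by_region_1']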

The query you're entering into Kibana includes / characters, but it is also being analyzed, so it will match the terms user, api, v1, and tables regardless of where they appear in the URL or what order they fall in.

This is what I mean by "you're probably getting a bit lucky right now".

I am using Python to query Kibana to get the "logs of interest", and then I run each message through a regex to extract the username and dataset.

import re

# Extract the user id and dataset name from the matched log message.
self.regex_string = r'.*GET.*/user/([0-9]+)/api/v1/tables/([0-9a-zA-Z_\-]+).*'
p = re.compile(self.regex_string)
m = p.search(my_message)
if m:
    # Good regex result. Do real work.

If the regex doesn't match, that result is skipped.

I got some of this to work, but not on the NGINX log. I have a Node.js endpoint that NGINX forwards to, and running the Elasticsearch query against that service's logs gets me the stats I need, but not against the NGINX log. The problem is that NGINX is the one source of truth, whereas the Node.js service is, well, dozens of them.

This is the NGINX log line I send to the GET _analyze endpoint:
123.23.128.202 blah-blah.prod.bloomberg.com - [21/Feb/2018:14:52:27 +0000] "GET /user/111286253/api/v1/tables/major_flow_points_1 HTTP/1.1" 200 1301 0.331 "http://blah-blah.prod.bloomberg.com/user/111286253/viz/7bd8dcf0-0c2e-11e8-a9ce-0242a9fe0402/map" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.109 Safari/537.36" "-" "19.0.99.255" "-" "-" "upstream: 10.246.189.11:8080" "0.331"

My must_not clause is in the 200s, and Elasticsearch just doesn't return the results properly against all the NGINX logs.

Do you have more information on configuring Logstash+regex to create the is_tables_api_request and tables_api_requested_dataset fields you mentioned? Creating fields at index time and searching those seems like the only way to scale this solution.

If I can reproduce the regex matching above and create those fields when the regex matches, then I am golden.
