Vertical Bar Unique Count: all counts = 1

I'm trying to visualize DNS log data in a vertical bar chart. The X-axis should be remote_host and the Y-axis should be the count of remote_host.

I do this by setting Data -> Metrics -> Y-axis -> Aggregation to Unique Count and Field to remote_host. Then, I set Buckets -> X-axis -> Aggregation to Terms and Field to remote_host.
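For reference, that Kibana configuration corresponds roughly to the following query DSL (a sketch; the index name is taken from later in the thread, and the aggregation names are invented):

```json
POST mylogs-*/_search
{
  "size": 0,
  "aggs": {
    "x_axis": {
      "terms": { "field": "remote_host" },
      "aggs": {
        "y_axis": { "cardinality": { "field": "remote_host" } }
      }
    }
  }
}
```

Note that a cardinality (Unique Count) sub-aggregation on the same field as its parent terms bucket can only ever return 1, since each terms bucket holds exactly one distinct value of that field; the plain Count metric is what yields per-host document counts.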

It produces a histogram, but all the counts are always equal to 1. I've confirmed with this query that the counts should not all equal 1:

{
  "_source": ["@timestamp", "client_ip", "remote_host"],
  "size": 0,
  "sort": [
    { "@timestamp" : {"order" : "asc"}}
  ],
  "aggs": {
    "remote_hosts": {
      "terms": {
        "field": "remote_host",
        "size": 10000
      }
    }
  },
  "query": {
    "bool": {
      "must": [
        {"wildcard" : { "client_ip" : "#{@client_ip}" }},
        {
          "range": {
            "@timestamp": {
              "gte": "#{@start_date}",
              "lte": "#{@end_date}"
            }
          }
        }
      ]
    }
  }
}

@start_date, @end_date, etc. are Ruby variables that are replaced with values before the query is sent to Elasticsearch.
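The interpolation step might look something like this sketch (the method name and values are hypothetical, not from the post):

```ruby
require 'json'

# Hypothetical sketch of how the Ruby side might build the query:
# the placeholders (#{@client_ip}, #{@start_date}, #{@end_date}) are
# filled in via ordinary string interpolation before the request is sent.
def build_query(client_ip, start_date, end_date)
  <<~QUERY
    {
      "size": 0,
      "aggs": {
        "remote_hosts": {
          "terms": { "field": "remote_host", "size": 10000 }
        }
      },
      "query": {
        "bool": {
          "must": [
            { "wildcard": { "client_ip": "#{client_ip}" } },
            { "range": { "@timestamp": { "gte": "#{start_date}", "lte": "#{end_date}" } } }
          ]
        }
      }
    }
  QUERY
end

query = build_query("10.0.0.*", "2021-03-10T00:00:00Z", "2021-03-10T23:59:59Z")
# Round-tripping through JSON.parse confirms the interpolation produced valid JSON.
JSON.parse(query)
```

If the interpolated values could ever contain quotes or backslashes, building the query as a Ruby hash and calling `.to_json` is safer than string interpolation.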

I'm not familiar with the Vertical bar visualization, but I'll note that the Lens visualization might be able to provide this - and let you easily explore other visualizations of that data - see Dashboard | Kibana Guide [7.11] | Elastic

Also, I'm a little curious about the Ruby variables. Are you scripting the building of the visualizations? I'm wondering if your process's interpolation of the Ruby variables might somehow be messing something up.

@Patrick_Mueller I think the issue is with my indexed data. The Ruby query above works fine and returns counts of remote_host, but when I try to use the index to create a Lens visualization in Kibana, I get this message: "No fields exist in this index pattern."

Interesting. Could you show us the mappings for this index? And I guess what the index pattern looks like in the Kibana UI? I'm guessing the existing mappings are for fields that Lens can't deal with, so I'm curious what those might be.

Also, what version of the Elastic Stack are you using?

@Patrick_Mueller here are the mappings:

{
  "mylogs-2021-03-10" : {
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date",
          "format" : "date_time"
        },
        "@version" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "client_ip" : {
          "type" : "text"
        },
        "facility" : {
          "type" : "long"
        },
        "facility_label" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "host" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "message" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "priority" : {
          "type" : "long"
        },
        "remote_host" : {
          "type" : "text"
        },
        "severity" : {
          "type" : "long"
        },
        "severity_label" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "syslog_event_id" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "syslog_timestamp" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "tags" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "type" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        }
      }
    }
  }
}

Index pattern:

mylogs-*
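One thing worth checking from the Dev Tools console (a hypothetical check, not from the thread) is whether those fields are actually aggregatable, since the terms aggregation and Lens breakdowns rely on aggregatable fields. The `_field_caps` API reports this per field:

```json
GET mylogs-*/_field_caps?fields=remote_host,client_ip
```

For a plain text field with no keyword subfield, the response should show "aggregatable": false.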

Also, when I do a basic search (GET mylogs-2021-03-10/_search), the data has this format (IPs redacted):

{
  "took" : 872,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 10000,
      "relation" : "gte"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "mylogs-2021-03-10",
        "_type" : "_doc",
        "_id" : "KlJhHHgBE0Wkqp67vQIq",
        "_score" : 1.0,
        "_source" : {
          "remote_host" : "<redacted_fqdn>",
          "client_ip" : "<redacted_client_ip>",
          "type" : "syslog",
          "priority" : 0,
          "facility" : 0,
          "facility_label" : "kernel",
          "host" : "<redacted_server_ip>",
          "message" : "<30>Mar 10 12:51:07 unbound: [58351:0] info: <redacted_client_ip> <redacted_fqdn> A IN",
          "@version" : "1",
          "severity_label" : "Emergency",
          "severity" : 0,
          "@timestamp" : "2021-03-10T13:44:30.280Z",
          "syslog_timestamp" : "Mar 10 12:51:07",
          "syslog_event_id" : "30"
        }
      },
...
}

Additional info:

Here is how I created the index template:

PUT _index_template/mylogs-template 
{
  "index_patterns": ["mylogs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date",
          "format": "date_time"
        },
        "remote_host": {
          "type": "text"
        },
        "client_ip": {
          "type": "text"
        }
      }
    }
  },
  "priority": 200
}
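Worth noting: in that template, remote_host and client_ip are mapped as plain text with no keyword subfield, and text fields are not aggregatable by default. If the goal is terms aggregations and Lens breakdowns on those fields, a variant like this sketch (the same template with keyword subfields added; it only affects indices created after the change) is the usual pattern:

```json
PUT _index_template/mylogs-template
{
  "index_patterns": ["mylogs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date", "format": "date_time" },
        "remote_host": {
          "type": "text",
          "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
        },
        "client_ip": {
          "type": "text",
          "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
        }
      }
    }
  },
  "priority": 200
}
```

With that mapping, aggregations would target remote_host.keyword rather than remote_host.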

and here is my Logstash conf (pipelines.yml):

input {
  syslog {
    port => <redacted>
    host => "<redacted>"
    type => "syslog"
    grok_pattern => "<%{NUMBER:syslog_event_id}>?%{SYSLOGTIMESTAMP:syslog_timestamp}%{DATA}%{IP:client_ip} %{HOSTNAME:remote_host} %{GREEDYDATA}"
  }
}

filter {
  if [loglevel] == "debug" {
    drop { }
  }
}

output {
  if "_grokparsefailure" not in [tags] {
      elasticsearch {
        hosts => [ "<redacted>" ]
        user => "<redacted>"
        password => "<redacted>"
        index => "mylogs-%{+YYYY-MM-dd}"
      }
  }
}
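As a rough illustration of what that grok_pattern extracts, here is a hypothetical Ruby approximation run against a message shaped like the sample above (the IP and hostname are invented, using documentation/example values):

```ruby
# Hypothetical Ruby approximation of the grok pattern in the syslog input:
# <NUMBER:syslog_event_id>SYSLOGTIMESTAMP DATA IP:client_ip HOSTNAME:remote_host GREEDYDATA
GROK_LIKE = /\A<(?<syslog_event_id>\d+)>(?<syslog_timestamp>[A-Z][a-z]{2}\s+\d+ \d{2}:\d{2}:\d{2})(?:.*?)(?<client_ip>\d{1,3}(?:\.\d{1,3}){3}) (?<remote_host>[A-Za-z0-9._-]+) .+/m

msg = "<30>Mar 10 12:51:07 unbound: [58351:0] info: 192.0.2.10 www.example.com A IN"
m = GROK_LIKE.match(msg)
# m["syslog_event_id"] => "30", m["client_ip"] => "192.0.2.10",
# m["remote_host"]     => "www.example.com"
```

Messages that don't match the pattern are what pick up the _grokparsefailure tag filtered out in the output block.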

All of that looks fine to me at first glance. To use that index with Lens, you'll need to have an index pattern created for it. Create an index pattern | Kibana Guide [7.11] | Elastic

What fields does the index pattern show?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.