Shards failed warning on Network dashboard in SIEM app

Hi there,

I'm getting a shards failed warning in the Network dashboard of the SIEM app (v7.6). I can't diagnose it because nothing happens when I click the "Show details" button. Using the Chrome dev tools, I see an error logged each time I click it ("Uncaught error: Overlays was not set").

Any ideas how I can get past this and see what the error is? I suspected a field name conflict in my Kibana index pattern, but refreshing it in the settings didn't report any conflicts. And again, I can't see which two shards are failing because that button does nothing when I click it. Thanks for any help!

If you have access to the Elasticsearch logs, it's probably easiest to find the error by looking there.

If not, maybe try the "Index Management" and/or "Stack Management" pages under the Kibana settings to see whether any of your indices report errors.
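If you can reach the Dev Tools console, a couple of requests along these lines might also surface unhealthy indices or shard-level failures (just a sketch; adjust to your setup):

GET _cluster/health?level=indices

GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason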

We'll be looking into why that "Show details" button doesn't work. I know it's annoying, sorry about that.

It's a hosted cluster (via the GCP marketplace), so it seems I only have access to a subset of the Elasticsearch logs, and no Kibana logs. I would need an SRE to install Filebeat on the nodes, as I don't have direct access to them myself. I do have monitoring set up to another cluster, but again, a push-button way to enable logging to the monitoring cluster would be nice.

I didn't see anything in the Index Management pages under Kibana. I refreshed the index pattern hoping to see a conflict, but there was none. I suspect it may have something to do with source.geo.country_iso_code, as I'm not seeing anything in that table despite having logs where that field is populated. Again, if one of my indices were storing it as something other than a keyword, I would have expected to see a conflict when refreshing the index pattern, right?
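For reference, this is the kind of console request that can be used to compare how that field is mapped across the indices behind an index pattern, plus a quick aggregation to confirm the field actually has values (my-logs-* is just a placeholder for my real pattern):

GET my-logs-*/_mapping/field/source.geo.country_iso_code

GET my-logs-*/_search
{
  "size": 0,
  "aggs": {
    "countries": {
      "terms": {
        "field": "source.geo.country_iso_code",
        "size": 5
      }
    }
  }
}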

I found an issue that may be related to this. About a week ago in response to this issue, I changed my template mapping for timestamps to use epoch_second. Logs were previously stored like this:

"@timestamp": "2020-02-03 23:59:00"

And now they're stored like this:

"@timestamp": 1582070340

I just realized that since making that change, none of those logs are returned by the queries the visualizations make. The reason is that they filter on timestamps using epoch_millis, like this:

{
  "aggregations": {
    "top_countries_count": {
      "cardinality": {
        "field": "source.geo.country_iso_code"
      }
    },
    "source": {
      "terms": {
        "field": "source.geo.country_iso_code",
        "size": 10,
        "order": {
          "bytes_out": "desc"
        }
      },
      "aggs": {
        "bytes_in": {
          "sum": {
            "field": "destination.bytes"
          }
        },
        "bytes_out": {
          "sum": {
            "field": "source.bytes"
          }
        },
        "flows": {
          "cardinality": {
            "field": "network.community_id"
          }
        },
        "source_ips": {
          "cardinality": {
            "field": "source.ip"
          }
        },
        "destination_ips": {
          "cardinality": {
            "field": "destination.ip"
          }
        }
      }
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "bool": {
            "must": [],
            "filter": [],
            "should": [],
            "must_not": []
          }
        },
        {
          "range": {
            "@timestamp": {
              "gte": 1582070330,
              "lte": 11582070350
            }
          }
        }
      ]
    }
  }
}

Again, notice this part:

          "range": {
            "@timestamp": {
              "gte": 1582070330,
              "lte": 11582070350
            }
         }

That returns nothing. When I switch the range to use epoch_second instead, it returns my logs.
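To be concrete, by "switching to epoch_second" I mean adding an explicit format to the range clause, roughly like this (same illustrative values as above):

"range": {
  "@timestamp": {
    "gte": 1582070330,
    "lte": 1582070350,
    "format": "epoch_second"
  }
}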

I'm still not sure if that's related to this issue. Do you think I should open another ticket for it?

Can you post your mappings for the @timestamp field, please? And maybe a sample document as returned by the Elasticsearch API (or copied from Discover)? I would expect that as long as the field has a date type, the query should still work.

I had another thought about the Network page: can you click the "Inspect" buttons at the top of the widgets? If that works, it might let us identify which of the queries is failing.

Mappings for @timestamp are as follows. I recently switched to using epoch_second:

"@timestamp": {
        "format": "yyyy-MM-dd HH:mm:ss||epoch_second",
        "index": true,
        "ignore_malformed": false,
        "store": false,
        "type": "date",
        "doc_values": true
      },

Here's an example document:

{
        "_index" : "some-index-2020.02.16-000014",
        "_type" : "_doc",
        "_id" : "AR7AVnABzoWS80HG3IVd",
        "_score" : 0.0,
        "_source" : {
          "destination" : {
            "geo" : {
              "continent_name" : "North America",
              "region_iso_code" : "US-WA",
              "city_name" : "Seattle",
              "country_iso_code" : "US",
              "country_name" : "United States",
              "region_name" : "Washington",
              "location" : {
                "lon" : -102.2432,
                "lat" : 27.24
              }
            },
            "as" : {
              "number" : 12345,
              "organization" : {
                "name" : "Google, Inc."
              }
            },
            "port" : 443,
            "bytes" : 1234,
            "ip" : "1.2.3.4"
          },
          "source" : {
            "bytes" : 1234,
            "ip" : "192.168.1.1"
          },
          "firewall" : {
            "logs" : {
              "rule" : "interwebz",
              "url" : {
                "type" : "troubleshooting"
              }
            }
          },
          "frequency" : 4,
          "tags" : [
            "help",
            "please"
          ],
          "network" : {
            "application" : "ssl",
            "bytes" : 1234,
            "transport" : "tcp"
          },
          "observer" : {
            "hostname" : "something.com"
          },
          "@timestamp" : 1581983940,
          "event.module" : "firewall",
          "related" : {
            "ip" : [
              "1.2.3.4",
              "192.168.1.1"
            ]
          },
          "event" : {
            "dataset" : "firewall.logs",
            "outcome" : "permit"
          }
        }
      }

Again, since I started storing the logs in epoch_second format, none of the visualizations in the SIEM app show my logs. I do see these logs in Discover, Dashboards, and Visualizations. When I hit Inspect and run the query in the console, it returns nothing (even though I definitely have logs for that time period). I can get it to return my logs when I include "format": "epoch_millis" in the queries:

[screenshot]
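In other words, the working query adds the format to the range clause, roughly like this (the millisecond values here are just illustrative; the visualizations send epoch_millis numbers):

"range": {
  "@timestamp": {
    "gte": 1582070330000,
    "lte": 1582070350000,
    "format": "epoch_millis"
  }
}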

Note that the Inspect tool on all the visualizations does not include this format field:

[screenshot]

And for comparison, the Discover page Inspect includes the format with the range (albeit a different format):

[screenshot]

Not sure if this is related to my issue or a separate issue altogether.

Pinging this thread, thanks! Please let me know if you have other ideas for troubleshooting.

The format of a date field determines which formats are understood during indexing, and also which date formats can be used for querying.

If the field's format is set to epoch_second, only the numerical form is allowed. The default is to allow ISO8601-formatted strings as well as epoch_millis. Kibana assumes one of these and fails with epoch_second.

If the time format absolutely has to be epoch_second, I'd recommend setting the format string to something like "strict_date_optional_time||epoch_second". Alternatively (and preferred IMO), the values could be adjusted (seconds multiplied by 1000) to be in epoch_millis. Mixing epoch_second with epoch_millis is not possible.
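For example, the field mapping could look something like this (sketch only):

"@timestamp": {
  "type": "date",
  "format": "strict_date_optional_time||epoch_second"
}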

I would recommend for now, wconnell, that you switch over to using:

strict_date_optional_time||epoch_millis

and re-mapping and re-indexing. This is the default for Elasticsearch when it does its dynamic date mappings:
https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html#date-detection
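Very roughly, the re-mapping and re-indexing could look something like this in the console. The destination index name is made up, only the timestamp field is shown in the mapping (carry over the rest of your template as well), and the script assumes every document currently stores @timestamp as an epoch-seconds number; if you still have the older string-formatted documents, the script would need to handle those too:

PUT some-index-fixed-000001
{
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      }
    }
  }
}

POST _reindex
{
  "source": {
    "index": "some-index-2020.02.16-000014"
  },
  "dest": {
    "index": "some-index-fixed-000001"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source['@timestamp'] = ((long) ctx._source['@timestamp']) * 1000"
  }
}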

When you insert data, you can insert it as epoch_millis or as an ISO8601-formatted string, which would look something like:

2020-03-02T23:55:17.303Z

I would suggest inserting your data into your index as ISO8601 with Zulu (UTC) time, as that is the path we test the most and it is the default Elasticsearch choice for date times.

However, inserting your data as epoch_second is going to cause problems for the SIEM application, as it assumes that any numeric timestamp it sees is epoch milliseconds, not seconds. Magnus_Kessler is correct: we expect milliseconds, not seconds, when it comes to epoch timestamps.

Magnus_Kessler, you are also correct that there is a difference between how the SIEM application processes date times and how Discover processes them, with regard to both input and output. We are tracking both of these in two tickets at the moment and have added notes to both of them:


We are hoping to make things more mapping-agnostic and friendlier with date timestamps in the future, but for now I would re-map and re-index using:

strict_date_optional_time||epoch_millis
