I have a visualization table that filters Windows events from Winlogbeat to show only events with code 4625, "Failed account log on". The dashboard has a default time filter showing only the last 4h of data. Checking today, I have 2 events logged this morning at 9:49 am and 8:28 am, but if I change the time filter from 4h to 24h, the table does not show the two events from this morning and only shows events from yesterday.
I would expect the 24h time filter to show the same events as the 4h time filter, plus many other events logged yesterday.
I have checked the time filter to make sure it runs from yesterday 1:30 pm until today at the same time, which should include this morning's events.
I also tried rounding the date in the time filter. Same results.
If you go to Discover and use a KQL filter, with both the 4 hour and the 24 hour time filter, do you see the events that you expect to see?
Which visualization are you using? Can you provide a screenshot of the 4 hour and the 24 hour views?
Also keep in mind that data in Elasticsearch is stored in UTC and displayed in your local timezone, assuming you have not changed any defaults.
Also, just to be extra clear: Last 24 hours means the previous 24 hours, but if you use Today, that is from 12:00 AM to 11:59:59 PM of the current day, which can sometimes trip people up.
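For example, if your browser is in UTC-5 (just an assumption for illustration), an absolute range of yesterday 1:30 PM to today 1:30 PM becomes a UTC range query against @timestamp, roughly this sketch (winlogbeat-* index pattern assumed):

GET winlogbeat-*/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2021-05-06T13:30:00",
        "lte": "2021-05-07T13:30:00",
        "time_zone": "-05:00"
      }
    }
  }
}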
Thank you for attending to my request. I tested a KQL filter in Discover for the last 8h:
event.code : 4625
The above filter shows me the desired events from this morning. (I was using the 4h time filter before; now it is 8h.)
After changing the time filter to 24h, I can still see the same events from this morning plus many from yesterday. We're still in the Discover app, so the time filter seems to be working fine here.
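For reference, my understanding is that Discover combines the KQL bar and the time picker into one bool query, roughly like this sketch (the winlogbeat-* pattern and the exact clauses are my assumption, not something I captured from the actual request):

GET winlogbeat-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "match": { "event.code": "4625" } },
        { "range": { "@timestamp": { "gte": "now-8h", "lte": "now" } } }
      ]
    }
  }
}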
I went to the visualization (table). Please see the screenshots below, showing both time filters. Note how the events from today, May 07, disappear when changing the time filter to 24h; it then shows only events from yesterday, May 06.
As for the time frames, I know they can be tricky sometimes, but this is how I think they work. Please correct me if I'm wrong:
Today: 12:00 am of the current day to the current date and time when running the query (3:40 pm in this example).
Last 24h: (yesterday) May 06 3:40 pm to (today) May 07 3:40 pm. This time frame should keep moving depending on the time when the query is triggered.
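In Elasticsearch date math, my understanding of those two ranges would be roughly the following (now/d rounds down to midnight); this is just how I picture it, not something I captured from Kibana:

Today:    "gte": "now/d",   "lte": "now"
Last 24h: "gte": "now-24h", "lte": "now"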
Can you show the date histogram and the filter options ...
Also, I am a bit confused: you say you are using only one filter, but I see several across the top, plus a filter in the table definition itself (disabled in one picture, enabled in the other). I am not sure if all the pictures are valid.
I guess I would just start with a simple count with a date histogram, put the filter in the KQL line at the top, and see if the numbers make sense.
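Something like this minimal sketch is what I mean (the winlogbeat-* pattern and the 30 minute interval are just assumptions):

GET winlogbeat-*/_search
{
  "size": 0,
  "query": { "match": { "event.code": "4625" } },
  "aggs": {
    "failed_logins_over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "30m" }
    }
  }
}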
Also, something is strange about the timestamps in your rows; they should be equal buckets, and I am not sure why you are seeing individual timestamps... For 8 hours it should be 30 minute buckets if you left the interval as auto.
The filters you can see at the top of the visualization remove some system accounts from the results (around 5 logins) that do not provide any useful data for us. This should not cause or affect the problem I'm experiencing right now: the accounts shown for the 8h time filter are not covered by that filter, so they appear when filtering by the last 8h and should appear as well when changing the time filter to the last 24h.
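For the record, those filters are equivalent to a KQL exclusion along these lines (the account names here are placeholders, not our real ones):

not user.name : ("SYSTEM" or "LOCAL SERVICE" or "NETWORK SERVICE")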
I don't have access to the machine where I was running the tests last Friday, so I am running the same tests on a different one. Both nodes use the same Kibana objects (visualizations, dashboards, etc.).
Notes: Running ELK 7.12.1 + Winlogbeat 7.12.1
The same time filter issue is occurring here.
As a side note, I was not using a Date Histogram aggregation, just a Terms aggregation on the @timestamp field, as you can see below.
Now, just for troubleshooting, I can add a Date Histogram aggregation to check the data. Please do not hesitate to tell me what you need me to try or test, so you can have enough data for your investigation.
Generally it's not a good idea to do a terms aggregation on a timestamp (which is what you are doing); that's really not what it's intended for, and it is potentially why you're getting strange answers. Terms aggregations should typically be used against keyword types, not dates... That's not to say you couldn't in some rare cases, but it doesn't seem like what you're trying to accomplish, or perhaps I'm wrong.
Perhaps you should do a date histogram and then filter on the terms, like the event you're looking for. That is what I showed above.
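To make the difference concrete, here are the two shapes side by side; a terms aggregation only returns the top "size" buckets, which is my best guess as to why rows drop out as the time range grows. A sketch, with the size and interval values assumed:

What the table does today (terms on a date field):

"aggs": {
  "by_timestamp": {
    "terms": { "field": "@timestamp", "size": 5 }
  }
}

What I am suggesting (one bucket per interval, nothing dropped):

"aggs": {
  "over_time": {
    "date_histogram": { "field": "@timestamp", "fixed_interval": "30m" }
  }
}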
Data tables are based on aggregations and aren't really geared towards individual results.
Imagine when you have thousands of logins: you will see that you have 250 matching events in this 15 minute bucket and 375 matching events in the next 15 minute bucket, etc.
It's quite possible I have completely misunderstood what you're trying to accomplish.
Hi @stephenb, and thank you for your advice. Initially I did not use a Date Histogram for this table because I was expecting to receive events at random time frames. Let's say that maybe I have 3 logins in 1 minute and then 3 logins in 3h. That's why I thought the timestamp aggregation would work better.
That being said, I'll try it with a Date Histogram to see how it goes and will share feedback soon.
I have tried the Date Histogram and I don't think it will fit my needs (unless I'm missing something). I set a Date Histogram, set the "Minimum interval" to Auto, then disabled the Timestamp aggregation. Below you can see the result: the events are grouped in 12h intervals, but the exact time of each event is missing. For security purposes, I need to show the exact time of each event in the table, instead of how many events per time unit.
On the other hand, the Timestamp aggregation does show me the data I need, as you can see below, but it misses events. With the Date Histogram we have the dates Apr 30, May 03, and May 11, but with the Timestamp aggregation we only have Apr 30 and May 03.
Perhaps we should start at the beginning; we jumped right into the middle.
I'm still not exactly clear what you're trying to do ....
From the above statement it appears you're trying to show individual events with individual timestamps. If so, a data table is most likely not the correct visualization; the data table is meant for aggregations, not individual events...
Suppose you have 3,000 events over the course of 24 hours: are you going to show 3,000 lines?
Or is it more like you're going to have tens of events spread out over 30 days?
But long story short, if you want to show individual events, it's probably not going to be a data table (not to say you might not find some way to hack it and bend it to your will, but that's not its purpose).
If you want to show individual events, that's really just a basic saved search with the columns that you want to show.
To do that, go to Discover, apply your filters, pick and arrange the columns, then save that search.
You can put that saved search in a dashboard too, and export the data to CSV if you want.
The other, completely different thought: have you actually looked at the Security app within Kibana? If you're using Winlogbeat, it should be automatically populated with your logins, users, hosts, and IPs, and you can still use filters and the KQL bar, etc.
I appreciate the time you are dedicating to understanding my issue and trying to find the best way to help me.
What I have is a dashboard that monitors many System and Security events from Windows machines. I also use the Endpoint Agent, so I am aware that the Security app has something similar, which looks great and promising, but my custom dashboard collects many other Windows events that we need, organized in a different way that is very useful for us.
The particular visualization we are discussing in this topic is a table intended to show only the failed login attempts (event ID 4625). All of them, yes. The default time filter for the entire dashboard where the visualization is included is 4h. Of course, if needed, we can expand the time frame to include even more events, like the last 24h or so, although this is only necessary for an investigation or if we notice some abnormal behavior.
The table is set to show only 10 rows per page (Max rows per page = 10), although it can have many pages, which is fine; we don't need to see all 3000 events at once, only small groups of 10 or maybe 20, sorted by date, user, machine, etc. Then we can work using those portions of data.
I'll try using the Saved Search and will share feedback soon.
All events are shown (8) for the time frame set (30 days)
If I reduce the time frame, no new events that could have been missing appear; it just hides those outside the time range
The names of the columns are not user friendly. Is there a way to customize these fields?
I can't adjust the width of the columns like I usually do in tables.
By using a Saved Search I fix the missing events issue, but other issues are introduced instead.
So I continued researching, and I think I have found a way to fix the table by adding a new bucket of type Date Histogram. Instead of replacing the Timestamp aggregation with a Date Histogram, I have combined both in the same table. It looks like the Date Histogram forces all the events to be included when placed first in the buckets. Below you will find the screenshot and comments, followed by a sketch of the aggregation as I understand it:
All events are shown (8) for the time frame set (30 days)
If I reduce the time frame, no new events that could have been missing appear; it just hides those outside the time range
The names of the columns are user friendly because they are fully customizable (just labels).
I can adjust the width of the columns to avoid trimming the data that I need to show and also prioritize the width for those columns containing more relevant info.
I have an extra column with duplicated data, "Date" (the Date Histogram), as the Timestamp column already includes the date
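If it helps, the aggregation tree behind the fixed table should look roughly like this (the interval and size values are my assumption of what Kibana generates, not something I verified):

"aggs": {
  "date": {
    "date_histogram": { "field": "@timestamp", "fixed_interval": "12h" },
    "aggs": {
      "exact_time": {
        "terms": { "field": "@timestamp", "size": 10 }
      }
    }
  }
}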
Please share your thoughts about my findings. I am open to advice and to trying new combinations.
If you got the Data Table to work I would go with that.
That is a good solution, good job. I learned something too... I did not think of using the histogram and then breaking it down by timestamp.
You will have that extra timestamp first column; not sure what to do about that... you could shrink it to the left as far as possible.
With respect to the Saved Search:
The names of the columns are not user friendly. Is there a way to customize these fields?
Yes, you can do that in the Index Pattern: edit those fields and add a Custom Label. But that friendly name will then show in a number of places: Discover, Maps, etc.