Data Analysis with Kibana - Lab 6.3

Hi
I am trying to understand Lab 6.3 from the Data Analysis with Kibana course. The lab is about anomalies and how to discover what caused them. I am doing the self-paced course, and the solution video does go over some steps, but there is no voiceover to explain them nor any written description of what's going on. It's not clear what the anomaly is all about.
Any help appreciated.
thanks!

Anomaly hunting isn't a pure science. There's an art to it as well. Here's a way you could go hunting.

First, make sure you have completed the dashboard exercises in Labs 6.1 and 6.2. Go to that dashboard and set the time filter to "Last 30 days" (or the time range that contains all the Apache logs data).

Anomaly 1
The first anomaly can be seen most prominently in visualization_2, where, about 2 weeks ago, there is a sudden increase in both the sum and the average of the bytes field.

Take a closer look at the data in the time range before the anomaly. On the date histogram called visualization_2, click and drag to select a time range before the increase in activity, maybe around a week's worth of data, approximately 3 weeks before today. Then scroll down to visualization_5 to see the names of the most commonly seen user agents in that time range. Most of them start with Mozilla; one may be called UniversalFeedParser.
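By the way, if you'd like to double-check what the dashboard shows, you can run the equivalent aggregation in Dev Tools. This is just a sketch: I'm assuming an index named apache_logs and the usual Apache logs fields (@timestamp, and agent with a keyword sub-field), so adjust the names to match your own data.

```
# Top user agents in a window before the anomaly
# (index and field names are assumptions; adjust to your mapping)
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "gte": "now-22d/d", "lte": "now-15d/d" }
    }
  },
  "aggs": {
    "top_user_agents": {
      "terms": { "field": "agent.keyword", "size": 5 }
    }
  }
}
```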

Set your time filter back to "Last 30 days". Now go back to visualization_2 and select a time range after the increase in activity, maybe around a week's worth of data, approximately 1 week before today. Then scroll down to visualization_5 again to see the new list of most commonly seen user agents.

This time, you'll see that there is a new user agent in the list called Chef Client. No matter which time range you choose in the last two weeks, you'll find Chef Client in the top 3.

Select Chef Client in visualization_5. This will create a filter at the top of the dashboard. Switch the filter to "exclude" so Chef Client is removed from your data set, then set your time filter to "Last 30 days" again.

You should see that the increased activity has disappeared! It seems the activity from the user or users running Chef Client is at least worth some further investigation!
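Under the hood, that exclude filter is essentially a bool query with a must_not clause. If you're curious, here is a Dev Tools sketch (same assumptions about index and field names as above) that recomputes the daily sum of bytes without Chef Client traffic; if that agent really drives the anomaly, the recent buckets should flatten out:

```
# Daily sum of bytes over the last 30 days, excluding Chef Client
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-30d" } } }
      ],
      "must_not": [
        { "match_phrase": { "agent": "Chef Client" } }
      ]
    }
  },
  "aggs": {
    "bytes_over_time": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "1d" },
      "aggs": {
        "total_bytes": { "sum": { "field": "bytes" } }
      }
    }
  }
}
```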

You can leave this filter on your dashboard and disable it for the remainder of the exercise. To make it easier to identify later, you can rename the filter to "Anomaly 1": go to Edit filter, then Create custom label.

Save the dashboard to save this filter.

Anomaly 2
The second anomaly is the large spike seen in visualization_1. Zoom into the time range that has the spike either by clicking and dragging a small time range around the spike, or by simply clicking on the bar with the spike.

Scroll down to the map in visualization_3 to find that Brazil is much darker than any other country. Notice that visualization_3 and visualization_1 both use the count of logs as their metric. So we can conclude that in that time range, we recorded an unusually high number of logs originating from Brazil.
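The map is essentially doing a terms aggregation on the source country. You can reproduce it in Dev Tools with the same pattern as the user-agent query, swapping in geo.src (again, the index name, the field name, and the exact time window are assumptions; plug in the window you zoomed to):

```
# Log count per source country inside the spike's window
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-10d/d", "lte": "now-9d/d" } }
  },
  "aggs": {
    "by_country": {
      "terms": { "field": "geo.src", "size": 10 }
    }
  }
}
```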

Scrolling down further to visualization_6, we find that one particular IP address seems to have visited our web page significantly more often than the others. Click on that IP address to create a filter for it. Notice that the IP address is in Brazil. If you play the same game as before with Chef Client and switch the filter to exclude, you'll notice the spike goes away.
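To confirm both observations in one request, you can rank the client IPs within the spike's window and attach each one's country as a sub-aggregation. Same caveats as before on the names and the time window:

```
# Top client IPs in the spike's window, each with its source country
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-10d/d", "lte": "now-9d/d" } }
  },
  "aggs": {
    "top_ips": {
      "terms": { "field": "clientip", "size": 5 },
      "aggs": {
        "country": { "terms": { "field": "geo.src", "size": 1 } }
      }
    }
  }
}
```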

Disable this filter and rename it as "Anomaly 2". Save the dashboard to save this filter.

Anomaly 3 & 4
The third and fourth anomalies are the two tall bars in the sum of bytes in visualization_2. Click and drag in the time range near the bars to zoom in.

Scrolling down, you find there are several items we may want to investigate. The map in visualization_3 seems to indicate a lot of visitors from Indonesia. The user agent graph in visualization_5 implies that the user agent "-" is possibly suspect. Then visualization_6 gives us two possible IP addresses to investigate. You can create a filter for each of these items and look at the data again.

Let's look at the most frequent IP address first. If you create that filter and zoom back out on the time range ("Last 30 days"), it looks like a very likely culprit for the anomaly, especially since this IP address appears only in exactly the short time range in question. Yet if you exclude its data, the offending bars still persist in visualization_2.

Setting that filter aside for now, create a filter for the second most frequent IP address and do the same thing. We find that this IP address also appears only in the time range in question. But excluding it removes the two offending bars in visualization_2!
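A tidy way to see why the most frequent IP doesn't move the sum-of-bytes bars is to rank IPs by total bytes instead of by document count: an IP can dominate the request count while contributing almost no bytes. A sketch, with the same assumed names and a placeholder time window:

```
# Rank client IPs by total bytes inside the two tall bars' window
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-8d/d", "lte": "now-7d/d" } }
  },
  "aggs": {
    "heavy_ips": {
      "terms": {
        "field": "clientip",
        "size": 5,
        "order": { "total_bytes": "desc" }
      },
      "aggs": {
        "total_bytes": { "sum": { "field": "bytes" } }
      }
    }
  }
}
```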

Let's investigate this further. First save the dashboard. Now "Pin" the filter with the second IP address. Then go to Discover.

In Discover, you should see that the filter with the second IP address is already enabled. Select some fields from the left panel to view the values in the Document Table. Let's take a look at bytes and request.

Notice that every visit with a large number of bytes attempted to access the file /files/logstash/logstash-1.1.9-monolithic.jar. Create a filter for that request, pin it, and return to the dashboard to learn that the monolithic file was not only accessed by this one IP, but that this one IP tried to access it many times.
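You can ask the same question directly in Dev Tools: who requested the jar, and how many times each? (If request is mapped as a keyword in your index, swap the match_phrase for a term query.)

```
# Who requested the monolithic jar, and how often?
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "match_phrase": {
      "request": "/files/logstash/logstash-1.1.9-monolithic.jar"
    }
  },
  "aggs": {
    "requesters": {
      "terms": { "field": "clientip", "size": 10 }
    }
  }
}
```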

A similar exploration into the first IP address will also reveal interesting behavior. While it clearly was not the cause of the two bars in sum of bytes, every request it made follows the pattern "*google.com/humans.txt". Given the high frequency of events (2599 logs in about 4 hours, which is roughly 650 requests per hour, or about one every 5 to 6 seconds), this was possibly a bot.

Anomaly 5
The fifth anomaly is the first noticeable spike in the avg and sum of bytes in visualization_2. Click on the bar to zoom into the time range and you'll find one IP address sticks out. Create a filter for that IP address and zoom back out to "Last 30 days". Now toggle between include/exclude for this filter to find that this IP seems to be the cause of this anomaly.
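The include/exclude toggle can also be expressed as a single filters aggregation that computes the sum of bytes for the suspect IP and for everyone else side by side. REPLACE_WITH_SUSPECT_IP below is a placeholder, paste in the IP you found:

```
# Sum of bytes for the suspect IP vs. everyone else, in one request
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-30d" } }
  },
  "aggs": {
    "split": {
      "filters": {
        "filters": {
          "suspect_only": {
            "term": { "clientip": "REPLACE_WITH_SUSPECT_IP" }
          },
          "everyone_else": {
            "bool": { "must_not": { "term": { "clientip": "REPLACE_WITH_SUSPECT_IP" } } }
          }
        }
      },
      "aggs": {
        "total_bytes": { "sum": { "field": "bytes" } }
      }
    }
  }
}
```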

Further investigation into this IP's activities reveals that this was yet another IP that attempted to access the monolithic.jar file we encountered earlier.

Anomaly 6
The final anomaly is the first bump found in sum of bytes after anomalies 3 and 4 in visualization_2. Click on the bar to zoom in on the time range. Two candidates quickly emerge: the most frequently seen user agent in visualization_5 (Wotbox) and the most frequently seen IP address in visualization_6. Some clicking around reveals that the user agent belongs to that same IP address. Further poking around in Discover will show that this user spent approximately 5 hours clicking around various pages. It is quite possible this was a very bored, click-happy human being.
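If you want to put a number on that five hours, a min/max aggregation on the timestamp for that IP gives you the visit's span, and a cardinality aggregation on the request field (assuming it has a keyword sub-field) gives a rough count of distinct pages. As before, the placeholder stands in for the IP you found:

```
# How long was this visitor active, and how many distinct pages?
GET apache_logs/_search
{
  "size": 0,
  "query": {
    "term": { "clientip": "REPLACE_WITH_SUSPECT_IP" }
  },
  "aggs": {
    "first_seen": { "min": { "field": "@timestamp" } },
    "last_seen": { "max": { "field": "@timestamp" } },
    "distinct_pages": { "cardinality": { "field": "request.keyword" } }
  }
}
```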

Hope that helped. Feel free to reach out again if you have any further questions 🙂


Thanks for the detailed response, Mimi! It helped me understand the lab.
