Elasticsearch Kibana ML job issue

I have created a Kibana ML job to alert on a low event rate. I ingested data continuously using Auditbeat and then stopped the ingestion, but the alert is not getting triggered. How can I solve this issue?

Most likely, you didn't allow the "normal" behavior to be learned long enough before triggering your test.

More info on what you tried would be helpful here.

Basically, I have created an anomaly detection job for a low event rate. My aim is to trigger an alert when a low event rate occurs. I used a Single Metric job and chose Low count (Event rate) as the metric. I created it as a real-time running job, plus a watch for warning-level anomalies (score 0 and above) that runs every 15 minutes. I indexed data continuously and the data flow is visible in the graph, but no anomaly is identified and no alert is fired. Can you help me solve the issue?
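For clarity, this is roughly what that setup corresponds to as an anomaly detection job (just a sketch, not my exact config; the job name is a placeholder, the bucket span is only an example, and I'm assuming the default @timestamp time field):

```
# sketch of a single-metric low_count job (placeholder name and example bucket span)
PUT _ml/anomaly_detectors/low-event-rate-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "low_count", "detector_description": "Low event rate" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```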

What you describe in words is the correct approach, but something must be wrong with your setup here - meaning, whatever index you pointed ML at, or how you're querying it, is not correct. As seen in the screenshot above, the majority of the observations for the count are zero - it is zero almost the entire time (except for the two bumps in the graph). I assume the index that you have ML looking at actually does have data in it?

Can you provide the details of your datafeed configuration on the ML job and also perhaps a screenshot of the index that you're monitoring with ML from the Discover tab in Kibana?

I'd like to see the datafeed configuration of the job, and the Discover view of the index it reads from.

Redact any sensitive information as necessary.
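If screenshots are awkward, the raw JSON from Dev Tools works just as well. Replace <job_id> with your job's ID; the wizard normally names the datafeed datafeed-<job_id>, and GET _ml/datafeeds on its own lists all datafeeds if you're unsure:

```
# the job configuration
GET _ml/anomaly_detectors/<job_id>

# the datafeed configuration
GET _ml/datafeeds/datafeed-<job_id>
```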

Please have a look; I've added screenshots of the Discover tab and the ML datafeed.

I'm a little confused here. Previous screenshots were for a job named ml-latest-job, but this is for a job named ml-job-auditbeat.

Can you please provide consistent information? It would make things easier to help you.

Assuming these jobs are similar, however: if you really are looking for low_count of events in the auditbeat-* index, and if that index really does have data flowing into it in a timely manner (without excessive ingest delays), then you should be just fine.
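As a quick sanity check on ingest delay, a request like the following shows the newest document in the index, which you can compare against the current time (this assumes the standard @timestamp field used by Auditbeat):

```
# fetch the most recent document's timestamp from the index
GET auditbeat-*/_search
{
  "size": 1,
  "sort": [ { "@timestamp": "desc" } ],
  "_source": [ "@timestamp" ]
}
```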

However, clearly something in your setup is incorrect, so we need to find it.

First, to rule out a real-time ingest delay issue, let's walk through the config of a new job and see what you get along the way:

Step 1) Create a new ML job, pick auditbeat-* as the index, and choose Single Metric as the job type.

Step 2) In the following screen, click the "Use full auditbeat-* data" button. It should look something like this:

If it DOES NOT look something like the above (i.e. has a set of bars representing index volume over time similar to what you'd see in Kibana Discover), then stop here and let us know.

Step 3) If it does look similar to the above, then feel free to continue the configuration (by pressing the Next button) and picking Low count (Event rate) from the dropdown.

Step 4) Click Next, and name the job:

Step 5) Click the Next button twice and then the "Create Job" button. You should see ML run and point out potential anomalies in the data:

(you can see in the example, that the period of lower volume was indeed flagged as anomalous)

Step 6) Optionally, click "Start job running in real-time"

Let me know which of the above steps does not work for you; for reference, the same job is also sketched below as API calls.
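The wizard is roughly equivalent to the following API calls, in case you'd rather compare against those (a sketch only; the job and datafeed IDs here are placeholders, and the bucket span is whatever you pick in the wizard):

```
# 1) create the anomaly detection job (same shape as the low_count config sketched earlier)
PUT _ml/anomaly_detectors/auditbeat-low-count
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [ { "function": "low_count" } ]
  },
  "data_description": { "time_field": "@timestamp" }
}

# 2) create its datafeed against auditbeat-*
PUT _ml/datafeeds/datafeed-auditbeat-low-count
{
  "job_id": "auditbeat-low-count",
  "indices": [ "auditbeat-*" ],
  "query": { "match_all": {} }
}

# 3) open the job and start the datafeed
#    (with no end time, the datafeed keeps running in real time)
POST _ml/anomaly_detectors/auditbeat-low-count/_open
POST _ml/datafeeds/datafeed-auditbeat-low-count/_start
```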

Hi, I have created a new ML job (auditbeat-test-ml) for low count, which is pointing to auditbeat-*.

step 1 (screenshot)

step 2 (screenshot)

step 3 (screenshot)

step 4 (screenshot)

Finally, I created a watcher on the anomaly score for minor severity (25 and greater).
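It is roughly along these lines (a simplified sketch rather than my exact config; the watch ID, interval, and logging action are placeholders), querying the ML results index for bucket results of this job with a score of 25 or more:

```
# sketch of a watch on the ML results index (.ml-anomalies-*)
PUT _watcher/watch/auditbeat-low-count-minor
{
  "trigger": { "schedule": { "interval": "15m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ ".ml-anomalies-*" ],
        "body": {
          "query": {
            "bool": {
              "filter": [
                { "term": { "job_id": "auditbeat-test-ml" } },
                { "term": { "result_type": "bucket" } },
                { "range": { "anomaly_score": { "gte": 25 } } },
                { "range": { "timestamp": { "gte": "now-30m" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "log_anomaly": {
      "logging": { "text": "Low event rate anomaly (score >= 25) for job auditbeat-test-ml" }
    }
  }
}
```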

Datafeed of the newly created job (screenshot)

Great! Looks like that worked. Now, keep in mind that you only have less than 24 hours' worth of data in that auditbeat index (the earliest data seems to be today, at 17:00 hours in your time zone).

Now - let this job run for a few days.
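Once it has been running for a while, you can also check what the job has seen and scored via the API (a sketch; adjust the job name if yours differs):

```
# how much data the job has processed so far
GET _ml/anomaly_detectors/auditbeat-test-ml/_stats

# bucket results, highest anomaly scores first
GET _ml/anomaly_detectors/auditbeat-test-ml/results/buckets
{
  "anomaly_score": 1,
  "sort": "anomaly_score",
  "desc": true
}
```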

Hi,
Do we have any proper documentation to add this ML job in SIEM detections?
Can you give some instructions to add an ML job to SIEM detections? My aim is to trigger an alert from a SIEM detection rule when a low event rate anomaly occurs.
