Hi
In Detections > Rules I created a custom rule for my own index. A simple match query is not working, but the same rule is working with an aggregate query.
The rule with the custom query externalId: "4625" is not working, but the same rule as a threshold query is. With externalId: "4625" and a threshold of destinationAddress >= 1 it works.
Can anyone help, please.
There's an in-depth explanation of parts of the threshold rules with regards to fields that are aggregatable vs non-aggregatable here that should help you out for troubleshooting issues:
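If it helps while troubleshooting, a quick way to check whether a given field is actually aggregatable in your source index is the field capabilities API. The index pattern and field names below are only placeholders, swap in whatever your custom index uses:
GET my-custom-index-*/_field_caps?fields=event.code,destination.ip
The response reports "searchable" and "aggregatable" per field; a threshold rule needs the field it aggregates on to show "aggregatable": true.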
Hi,
I'm experiencing the same problem as @Anirudhan.
I've created a custom (very simple) rule on a custom index containing Fortinet logs and it is not generating signals (although many matching events exist). (See below)
As far as I understand this is a different situation than in Threshold rules not triggering on selfmade index; in fact, threshold rules are working but custom queries are not.
Non-working Detection rule
When I create a threshold rule with the same query, signals are generated
Working Threshold rule
Thank you
Regards
Anna
I think I might have misread the original post as having an issue with thresholds but not regular query @Anirudhan and @Anabella_Cristaldi. That's my bad there.
If you have non-ECS-based source indexes then that would contribute to these signal issues where one type of rule might work but another will not. You will want to audit your custom source index mappings to see if they're ECS compliant and ensure they do not have mapping conflicts.
This sounds very probable in your current situation. We have noticed this recently on several different systems from users.
I can help you do an audit here on the forum. Not a biggie. If you're OK with it, you can post the mappings of forti-traffic or other problematic source indexes with a query in Dev Tools like so:
GET forti-traffic*/_mapping
And then a sample test record (not real data) from your source index (please don't post any production or live data and do not violate any rules/laws by posting a data sample).
Using your test data cluster only, you would give me the first record or two from Dev Tools like so:
GET forti-traffic*/_search
Longer term, for the upcoming release we have improved bubbling up error conditions to the front end in several areas, which should make finding and fixing these problems faster, as well as letting users eliminate several possibilities very quickly:
PR fixes that will show up in an upcoming release:
Hi @Frank_Hassanabad
In my case my Fortinet docs are normalized into ECS (I needed to correlate with some other logs like Windows and NetApp logs).
I modified the custom rule to use the ECS field (event.code instead of logid) and it still does not work. The threshold rule still works.
As you can see in my mapping there are several ECS fields (source.*, destination.*,event.*,network.*, ecs.*, related.*, etc) FortiTraffic Mapping
I do not see why it is working for the threshold rule but not for the custom query (do they have a different mechanism for querying)?
Below is an obfuscated sample of the data
Thank you!
Regards
{
"_index" : "forti-traffic-2020.10.20-000002",
"_type" : "_doc",
"_id" : "ck2OTHUB380qW0uEKd9F",
"_score" : 1.0,
"_source" : {
"subtype" : "forward",
"destination" : {
"bytes" : "5567",
"ip" : "18.132.239.61",
"port" : "443",
"packets" : "14",
"geo" : {
"longitude" : -97.822,
"ip" : "18.132.239.61",
"latitude" : 37.751,
"timezone" : "America/Chicago",
"country_iso_code" : "US",
"continent_code" : "NA",
"country_name" : "United States",
"country_code3" : "US",
"country_code2" : "US",
"location" : {
"lat" : 37.751,
"lon" : -97.822
}
}
},
"related" : {
"ip" : [
"18.132.239.61",
"192.168.11.11"
],
"user" : [
"fakeuser",
"fakeuser"
]
},
"sentdelta" : "1824",
"action" : "close",
"dstip" : "18.132.239.61",
"source" : {
"ip" : "192.168.11.11",
"port" : "61021",
"address" : "fakeuser",
"packets" : "14",
"geo" : {
"location" : {
"lat" : "2.5",
"lon" : "1.4"
},
"country_iso_code" : "XYZ"
},
"user" : {
"name" : "fakeuser"
},
"bytes" : "1824",
"mac" : "aa:aa:aa:aa:aa:aa"
},
"event" : {
"type" : "access",
"dataset" : "traffic",
"action" : "close",
"category" : "network",
"code" : "0000000013",
"duration" : "61",
"module" : "DC",
"outcome" : "TBD"
},
"devid" : "FG6Hxxxxxxxxx",
"dstintfdesc" : "int1",
"path" : "/some_path",
"type" : "traffic",
"session_status" : "ended",
"srcintfdesc" : "int2",
"observer" : {
"serial_number" : "FG6Hxxxxxxxxx",
"ip" : "192.168.22.22",
"name" : "FWDummy"
},
"duration" : "61",
"dstintfrole" : "undefined",
"rcvdpkt" : "14",
"unauthuser" : "fakeuser",
"direction" : "internal",
"tz" : "+0200",
"fortios" : {
"service" : "HTTPS"
},
"vd" : "DC",
"dstcountry" : "United Kingdom",
"eventtime" : "1603307185652500897",
"sentpkt" : "14",
"srcport" : "61021",
"proto" : "6",
"network" : {
"session_id" : "14664",
"iana_number" : "6",
"packets" : 28,
"bytes" : 7391,
"protocol" : "HTTPS",
"transport" : "TCP"
},
"level" : "notice",
"dstport" : "443",
"unauthusersource" : "kerberos",
"@timestamp" : "2020-10-21T19:06:25.000Z",
"rule" : {
"id" : "222",
"type" : "policy"
},
"rcvdbyte" : "5567",
"appcat" : "unscanned",
"user" : {
"name" : "fakeuser"
},
"srcname" : "PC.domain.fake",
"host" : "srv",
"sessionid" : "14664",
"tags" : [
"notags"
],
"devname" : "FWDummy",
"srcintfrole" : "lan",
"sentbyte" : "1824",
"osname" : "Windows",
"mastersrcmac" : "aa:aa:aa:aa:aa:aa",
"srccountry" : "Reserved",
"srcintf" : "int2",
"srcmac" : "aa:aa:aa:aa:aa:aa",
"service" : "HTTPS",
"srcswversion" : "10",
"poluuid" : "id",
"@version" : "1",
"logid" : "0000000013",
"srcip" : "192.168.11.11",
"srcserver" : "0",
"dstintf" : "int1",
"policytype" : "policy",
"trXYZisp" : "noop",
"policyid" : "222"
}
}
Appreciate the sample thank you!
I do not see why it is working for the threshold rule but not for the custom query (do they have a different mechanism for querying)?
Yes, they do have different mechanisms. The threshold one is an aggregation and it does not fill in all the values when it creates a signal.
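Just to illustrate the difference (this is only a rough sketch, not the exact query the detection engine builds), a query rule is essentially a plain filtered search, while a threshold rule runs something closer to a terms aggregation with a minimum document count on an aggregatable field:
GET forti-traffic*/_search
{
  "size": 0,
  "query": {
    "bool": { "filter": [ { "term": { "event.code": "0000000013" } } ] }
  },
  "aggs": {
    "by_destination_ip": {
      "terms": {
        "field": "destination.ip",
        "min_doc_count": 1
      }
    }
  }
}
That is also why the aggregatable vs non-aggregatable distinction matters for threshold rules but not for plain query rules.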
So, some good news is that in the soon to be released 7.10.0, where we improved error handling, you will begin to see errors on that rule where before you did not. I just test ran that sample document off of Kibana master and here is the error:
It's pointing to your data set at host:
"host" : "srv",
which has a conflict with the signal mapping. host has to be an object with inner objects/attributes as outlined here:
Once you fix that and re-index your data it should work. If it doesn't we can look at your mapping and data again. When the soon to be released 7.10.0 ships you will be able to see these error messages so getting these problems fixed sooner will be easier.
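If it's useful, here's a rough sketch of the kind of re-index that can fix it, assuming you first create a new index (the destination name below is made up) whose mapping defines host as an object (for example with a host.name keyword) rather than a plain keyword:
POST _reindex
{
  "source": { "index": "forti-traffic-2020.10.20-000002" },
  "dest":   { "index": "forti-traffic-fixed" },
  "script": {
    "lang": "painless",
    "source": """
      // if the old document stored host as a plain string, move it under host.name
      if (ctx._source.host instanceof String) {
        String hostname = ctx._source.host;
        ctx._source.host = ['name': hostname];
      }
    """
  }
}
The same pattern works for any other keyword-vs-object conflict you find.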
You can see the signals mapping here if it helps to find conflicts:
We mostly use ECS tooling fwiw:
and/or look at their generated outputs:
https://github.com/elastic/ecs/blob/master/generated/elasticsearch/7/template.json#L1071
to try and stay compliant and update along the way.
Hi @Frank_Hassanabad thank you for your reply.
Yesterday I realized the conflict. I found another one: the service field, which was a keyword and should be an object.
I fixed the mapping for these two fields, re-indexed the data, and I think I no longer have any conflicts with ECS. I'll take a look at the tool, I didn't know about it. Thank you!
Normally I'm very aware of ECS; I've been working on several PRs for new fields and discussions of how to model some situations.
The host field is automatically created by Logstash and I did not realize it was a keyword, not an object. The service field was my mistake.
I understand now the difference between the two kinds of rules. Thanks!
I still have some problems with my data. The custom query rule is failing with this error, a type error:
{"type":"log","@timestamp":"2020-10-27T09:47:24Z","tags":["error","plugins","securitySolution","plugins","securitySolution"],"pid":6587,"message":"Bulk Indexing of signals failed. Check logs for further details. name: \"Rogue AP Detection\" id: \"bf09eb47-dc8b-4951-8781-9db2719db1c2\" rule id: \"2b451c6f-54b7-489b-a3c4-27861a8ed653\" signals index: \".siem-signals-igor\""}
I'll continue investigating and I'll let you know if I'm able to find the error
If I can be of any help testing or something, please let me know
Thank you!
Regards
Anna
I did some review/tests and here are the results:
- I used the ECS tooling to generate the appropriate mappings for my Fortinet fields.
- I detected several type mismatches for group, agent, url, and interface: they were defined as keyword and according to ECS they must be objects. I fixed the index template and converted those fields into objects at ingestion time (a sketch of what such a mapping fix can look like is after this list).
- After fixing the mapping I defined these two rules over the same index. They are basically the same rule, but querying for a different value. One is failing, the other one is not.
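For reference, an abbreviated sketch of the kind of mapping change I mean (the template name is just an example, only a few of the affected fields and sub-fields are shown, and the full definitions come from the ECS generated template):
PUT _template/forti-logs
{
  "index_patterns": ["forti-logs*"],
  "mappings": {
    "properties": {
      "agent": {
        "properties": {
          "name": { "type": "keyword" },
          "type": { "type": "keyword" }
        }
      },
      "group": {
        "properties": {
          "id":   { "type": "keyword" },
          "name": { "type": "keyword" }
        }
      },
      "url": {
        "properties": {
          "domain":   { "type": "keyword" },
          "original": { "type": "keyword" }
        }
      }
    }
  }
}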
I really do not understand why this situation is happening, since both are querying the same index:
- both rules use the same index
- both rules search for a specific event: event.code: yyyyy
- one rule succeeds, the other one fails
NOT OK Rule
Kibana Log
{"type":"log","@timestamp":"2020-10-28T17:32:26Z","tags":["error","plugins","securitySolution","plugins","securitySolution"],"pid":17968,"message":"[-] search_after and bulk threw an error TypeError: Cannot read property 'some' of undefined name: \"Forti-0104043571\" id: \"61ad08aa-9e18-49d7-8191-6d7a0eb9d875\" rule id: \"7deca60b-c811-4cfa-b25e-a56df5a1a2f8\" signals index: \".siem-signals-igor\""}
{"type":"log","@timestamp":"2020-10-28T17:32:26Z","tags":["error","plugins","securitySolution","plugins","securitySolution"],"pid":17968,"message":"Bulk Indexing of signals failed. Check logs for further details. name: \"Forti-0104043571\" id: \"61ad08aa-9e18-49d7-8191-6d7a0eb9d875\" rule id: \"7deca60b-c811-4cfa-b25e-a56df5a1a2f8\" signals index: \".siem-signals-igor\""}
OK Rule
Here are the rule exports and examples of each type of event
{"author":[],"actions":[],"created_at":"2020-10-28T12:02:59.347Z","updated_at":"2020-10-28T12:03:00.120Z","created_by":"acristal","description":"Detects 0104043530 logid","enabled":true,"false_positives":[],"filters":[],"from":"now-72120s","id":"e99d902c-a266-47f9-89cf-acc949317201","immutable":false,"index":["forti-logs*"],"interval":"2m","rule_id":"21d6a191-17cb-4877-83ca-d2ab3329a035","language":"kuery","license":"","output_index":".siem-signals-igor","max_signals":100,"risk_score":76,"risk_score_mapping":[],"name":"Forti_0104043530","query":"event.code: \"0104043530\" ","references":[],"meta":{"from":"20h","kibana_siem_app_url":"https://192.168.1.93:5601/s/igor/app/security"},"severity":"high","severity_mapping":[],"updated_by":"acristal","tags":[],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":1,"exceptions_list":[]}
{"exported_count":1,"missing_rules":[],"missing_rules_count":0}
{"author":[],"actions":[],"created_at":"2020-10-28T12:17:54.748Z","updated_at":"2020-10-28T12:17:55.540Z","created_by":"acristal","description":"Rule for forti logid 0104043571","enabled":true,"false_positives":[],"filters":[],"from":"now-72120s","id":"61ad08aa-9e18-49d7-8191-6d7a0eb9d875","immutable":false,"index":["forti-logs*"],"interval":"2m","rule_id":"7deca60b-c811-4cfa-b25e-a56df5a1a2f8","language":"kuery","license":"","output_index":".siem-signals-igor","max_signals":100,"risk_score":76,"risk_score_mapping":[],"name":"Forti-0104043571","query":"event.code : \"0104043571\" ","references":[],"meta":{"from":"20h","kibana_siem_app_url":"https://192.168.1.93:5601/s/igor/app/security"},"severity":"high","severity_mapping":[],"updated_by":"acristal","tags":[],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":1,"exceptions_list":[]}
{"exported_count":1,"missing_rules":[],"missing_rules_count":0}
Thank You!
Regards
Anna
Both of those rules are looking back 20 hours and running every 2 minutes? If you decreased that amount of time for each rule do you still see issues? Normally we don't set the look-back time to that amount as it's resource intensive and not really needed.
The additional look-back time is mostly used if people have some type of clock drift or their events, which already have @timestamps, are arriving a bit late.
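For reference, the "from" value in the exported rule JSON is just the interval plus that additional look-back, so the exports above decode as:
from = now - (interval + additional look-back)
now-72120s = 2 minute interval + 20 hour look-back (72,120 s = 120 s + 72,000 s)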
Hi @Frank_Hassanabad,
The look-back time of 20 hours was because I was ingesting data with a 20-hour delay (from "now").
Now I'm ingesting in real time and I've changed it to the defaults: every 5 minutes with a 1-minute look-back, and the result is the same.
Below is the evidence
Thank you!
Regards
Anna
{"author":[],"actions":[],"created_at":"2020-10-28T12:17:54.748Z","updated_at":"2020-10-29T17:46:49.770Z","created_by":"acristal","description":"Rule for forti logid 0104043571","enabled":true,"false_positives":[],"filters":[],"from":"now-360s","id":"61ad08aa-9e18-49d7-8191-6d7a0eb9d875","immutable":false,"index":["forti-logs*"],"interval":"5m","rule_id":"7deca60b-c811-4cfa-b25e-a56df5a1a2f8","language":"kuery","license":"","output_index":".siem-signals-igor","max_signals":100,"risk_score":76,"risk_score_mapping":[],"name":"Forti-0104043571","query":"event.code : \"0104043571\" ","references":[],"meta":{"from":"1m","kibana_siem_app_url":"https://192.168.1.93:5601/s/igor/app/security"},"severity":"high","severity_mapping":[],"updated_by":"acristal","tags":[],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":2,"exceptions_list":[]}
{"exported_count":1,"missing_rules":[],"missing_rules_count":0}
{"author":[],"actions":[],"created_at":"2020-10-28T12:02:59.347Z","updated_at":"2020-10-29T17:48:11.942Z","created_by":"acristal","description":"Detects 0104043530 logid","enabled":true,"false_positives":[],"filters":[],"from":"now-360s","id":"e99d902c-a266-47f9-89cf-acc949317201","immutable":false,"index":["forti-logs*"],"interval":"5m","rule_id":"21d6a191-17cb-4877-83ca-d2ab3329a035","language":"kuery","license":"","output_index":".siem-signals-igor","max_signals":100,"risk_score":76,"risk_score_mapping":[],"name":"Forti_0104043530","query":"event.code: \"0104043530\" ","references":[],"meta":{"from":"1m","kibana_siem_app_url":"https://192.168.1.93:5601/s/igor/app/security"},"severity":"high","severity_mapping":[],"updated_by":"acristal","tags":[],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":2,"exceptions_list":[]}
{"exported_count":1,"missing_rules":[],"missing_rules_count":0}
Hi @Frank_Hassanabad,
Here are the event samples and the mapping
Please let me know if I can help with something
Regards
Anna
Query
GET forti-logs-2020.10.28-000001/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "event.code": "0104043571" } }
      ]
    }
  }
}
Thank you!
I can reproduce this bug now. It looks like your data set sometimes has the field "signal" in it, which is causing the conflict when writing out our signal.
I have created a ticket for this you can watch:
I think I might be able to fix it by doing a "move" of the data from "signal" -> "original_signal" to avoid the clash on write-out, but I don't know how hard/easy that will be. In the meantime, if you want this to work on 7.9.2, you could change that field name and do a reindex of your data.
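If you go the rename route, a minimal sketch of one way to do it is an ingest pipeline with a rename processor applied during the re-index. The pipeline name, target field name, and destination index below are only examples, and the destination index needs a mapping that accepts the renamed field:
PUT _ingest/pipeline/rename-signal-field
{
  "description": "Move a conflicting top-level 'signal' field out of the way",
  "processors": [
    {
      "rename": {
        "field": "signal",
        "target_field": "fortinet_signal",
        "ignore_missing": true
      }
    }
  ]
}

POST _reindex
{
  "source": { "index": "forti-logs-2020.10.28-000001" },
  "dest": {
    "index": "forti-logs-renamed",
    "pipeline": "rename-signal-field"
  }
}
The same pipeline can also be set as the index's default pipeline so new documents get the rename going forward.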
Hi Frank,
That is probably the root cause. When I compared the events (to see which fields were different) I didn't notice the "signal" name among my fields.
In my particular case of Fortinet's wireless controllers I detect 7 events containing that field (it could be very common when ingesting logs from wireless controllers or access points).
There are two events that are particularly important from the security point of view, as they detect the presence of rogue access points in a wireless infrastructure.
In the meantime I'll rename that field and I'll let you know the results
Thank you very much!
Regards
Anna
Hi @Frank_Hassanabad,
I temporarily removed the signal field and now it is working
As soon as the bug is solved I'll stop removing the signal field and I'll test it again.
Thank you very much for your support and your patience
Regards
Anna
Ahhh, good news. Thanks for letting me know how often that field is going to show up in data sets; this is very important for us to get a fix for if it's going to be more common in customer data sets than previously thought.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.