All Rules are showing Failed

Changing the index patterns in the rule to a wide "logs-*" gives me this error:

Could it be the reason my endpoint is not working?
I don't get any malware found notifications and/or rule hits.

Hosts are online in the administration tab and the config is set to prevent:

This happens only on 7.11.1. On 7.10.2 everything was working out of the box.

Hey Frank, is there any documentation on how to set up the indexes for the rules? I still haven't been able to properly test the Elastic Agent with the rules because of this. Apologies, I'm still quite new to the ELK stack and Elastic products, but I can't seem to find any solution online on how to create, edit, or update the indices for the Security rules.

I have the same issue in elastic cloud.

Is the problem here not that the definition is configured to point at endgame*,

but in my 7.11 install, it seems endpoint logs are put under

This appears to be the index

So the question I have is what changes are required to get the endpoint detection rules pointing to the correct place? I can't seem to edit a rule definition, the tabs are greyed out.

Hi @emmett.carey !

If you are talking about the pre-packaged detection rules we ship with the security solution then you are correct, you cannot modify those. However, if you duplicate those prebuilt rules you can edit them however you please!

From another thread:

GET logs-endpoint.alerts-*/_mapping
GET endgame-*/_mapping

On 7.11.1 you should see the issue: the responses = { }

It appears all the mappings are missing after upgrading from 7.10 or 7.10.2 to 7.11.1. At least two dev clusters of mine have been this way: one dummy cluster with a single device talking to it, another with ~100 devices talking to it.

Either the SIEM rules need to be updated upstream to point at the new location, or every user currently on 7.11.1 will have to manually modify all Endpoint rules. Or the index needs to be recreated, which, to be honest, isn't something some of us newer folks want to do...

Hi Devin,

Thanks, but will this allow me to create the indices for logs? All I am trying to do is get the out-of-the-box SIEM solution working, which at the moment seems to require creating numerous indices, per the failure towards the top of the post and Frank's comments.

Judging from @PublicName's comment below, this worked fine and as expected on 7.10 and 7.10.2, and failed when upgrading to 7.11, which I am currently on.

A solution I am looking for at the moment is

A) an easy way to recreate the indexes and some documentation on how to do so, so that the SIEM rules work out of the box

B) an update (or even roll back to 7.10.2) where the indexes are created when starting up the SIEM tool and configured with Fleet and Agent

I am happy to either rollback or upgrade (if an upgrade is coming in the very near future) if it's the most convenient.
However, I would prefer to keep using the latest version if possible. I am concerned that if I add the indexes manually, it might break when a newer version of the Elastic Stack is rolled out, and it would be wasted effort on my part if these issues between 7.10.2 and 7.11 were fixed in the latest update.

The behaviour of the detection rules on Elastic Stack version 7.10.2 seems to work as expected, whereby when rules are added they are 'succeeding', as described by @PublicName.

My question now is: does that index error also exist in this version of the Elastic Stack (7.10.2), just without notifying the user? That is, will these rules always 'succeed' even though the index is never created, so logs will never be captured and the rules will never be triggered?

So it seems that in 7.11 the indices are broken and the rules point to something that doesn't exist. As per the comment from @emmett.carey, what's the best way to resolve it? Surely it can't be duplicating all the rules and re-pointing them to the correct index?

@PublicName had an issue with an Elastic Fleet upgrade, which is not necessarily related to the apparent failures reported in this thread. I'm also not sure the index patterns they provided are accurate. The Fleet thread they reference will figure that out; it is a separate issue that is not related to the prepackaged rules.

I just want to set some baseline information for people who may come by this thread. The pre-packaged rules do not set up the indices in Elasticsearch. Our prepackaged rules are defined to search index patterns that are set up by either Beats or the Elastic Agent. Sometimes customers do not have the Beats or the correct integrations with Elastic Agent set up and as a result, some of the index patterns that our prepackaged rules search against do not exist.

Prior to 7.11, if a prepackaged rule was given an index pattern that did not exist on a customer's cluster (because the customer did not have the corresponding Elastic Agent integration set up or was not running Beats), the rule would run "successfully" despite the index pattern not existing on the system, because it was never set up.

For instance if I run the following in dev tools

GET thisdoesnotexist*/_search
{
  "query": {
    "match_all": {}
  }
}

I get the following response

{
  "took" : 0,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
With 7.11 we introduced new error messaging for the rules that checks whether these index patterns have concrete backing indices behind them. We also added a check to ensure that the user who defined the rules has read privileges on the provided index patterns. The failure you are seeing was, assuming no issues with Fleet deleting your indices, probably happening prior to 7.11 as well, but it was not shown as an error because there technically was no error in the response. When you query an index pattern that does not exist in Elasticsearch, the result is an empty hit set, not an error (as in the example above), so we were never going to report an error in that case.

With these checks we are now providing error messaging to help guide customers to make sure the backing requirements for the rules to truly be successful are set up properly.
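The new check can be sketched in a few lines; this is a simplified illustration of the idea, not Kibana's actual implementation (which also verifies read privileges), and the index names are hypothetical:

```python
from fnmatch import fnmatch

def unmatched_patterns(patterns, concrete_indices):
    """Return the index patterns that match no concrete index.

    A simplified stand-in for the 7.11 rule pre-flight check:
    any pattern returned here would trigger the new failure message.
    """
    return [p for p in patterns
            if not any(fnmatch(index, p) for index in concrete_indices)]

# Hypothetical cluster contents: auditbeat data only.
indices = ["auditbeat-7.11.1-2021.03.01-000001"]

print(unmatched_patterns(["thisdoesnotexist*"], indices))
# -> ['thisdoesnotexist*']  (the search itself still "succeeds" with
#    zero hits, which is why pre-7.11 rules reported no error)
print(unmatched_patterns(["auditbeat-*"], indices))
# -> []  (a backing index exists, so no failure is reported)
```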

Hope this summary helps!

I believe that user is having issues with an upgrade from fleet which is not related to the rules. Please see my response here: All Rules are showing Failed - #19 by Devin_Hurley

As of last night this also happened on a fresh install of 7.11.1 on a new install of CentOS 8. All packages pulled from the repo.

Hi Devin,

Thanks for clearing the air a bit on the issue. Regarding the Elastic Agent integration: when I had 7.11 set up with Elastic Agent through Fleet, I was getting errors across the board with the out-of-the-box rules. In this case, is the recommended path to add integrations to the Fleet policy to satisfy these checks?

For example, for the following error:

The following index patterns did not match any indices: ["auditbeat-*","logs-endpoint.alerts-*"]

I would need to add integrations to the policy that have indexes for auditbeat-* and logs-endpoint.alerts-* to ensure that logs for that rule would be captured, correct?


Essentially yes, but there is a condition specific to the logs-endpoint.alerts-* index pattern: that index is not set up until the endpoint integration ships data (specifically, until the endpoint detects something). As long as auditbeat-* matches some index in Elasticsearch, though, you won't see this specific message for that rule.

For those that have the same issue on existing clusters with existing integrations and have gone over this long thread, here's a method to force success, per @Devin_Hurley's recommendations. I'm not sure how to trigger the endgame-* index to be created without getting into a higher-risk malware situation.

  1. In Dev Tools in Kibana, run GET logs-endpoint.alerts-*/_mapping. If the response is empty, try the steps below.
  2. Create a sandbox VM (Windows 10; no need to patch).
  3. Load a new policy with the Endpoint/System/Windows integrations. In the System integration, disable the system load metric ("not supported on Windows"). Set the Endpoint to detect only! Make sure to register Endpoint as the AV for Windows or Defender will stop you.
  4. Load Endpoint on the device.
  5. Download Mimikatz from GitHub.
  6. Execute Mimikatz. You do not need to do anything beyond running it. This will force the index to be created and at least stop Elastic Security from showing the rules as failed.

One thing to note: this is not a fix, and another thing is broken. Analyze event no longer works ("Error loading data"), and any actions you had, like email or webhook, no longer function.

Either recreate the index (it gets deleted on upgrades) or wait for malware; someone will run it sooner or later.
Avoid moving to 7.11 or 7.11.1 until it's fixed upstream if you use Endpoint. The devs are awesome; it will get fixed soon.


I set it up as you said, with the Endpoint, System, and Windows integrations, set only to detect.
Unfortunately Elastic Security in version 7.11.1 is somehow not working anymore (it had been detecting malware on the host and not letting EICAR execute).

Here's a recorded gif. I can execute Mimikatz and EICAR and nothing's happening :/


The "Analyze event no longer works" issue is known, and there is a fix coming incredibly soon for it:

Workaround for it in the meantime is:

  1. In the Detections page, find the alert of interest, click on "Investigate in Timeline".
  2. Inside of Timeline, find the alert event and click on "Analyze Event".
  3. You should be able to investigate from there.

Sometimes the alerts can take a while before they show up from the endpoints and we are working to reduce the time on that but you should see the alerts from the endpoints.


@Frank_Hassanabad Glad to hear it! I kind of like that analyze feature; while it's not the quickest way to see what the file name was, the chain-of-events view is wonderful.

@Edvin22 Disable SmartScreen and the App & browser control settings. You may have to do it by GPO on that build. That's Windows Defender kicking in and preventing Endpoint from ever seeing the event on the device, which would prevent Elastic/Kibana from ever seeing the event.

I'm using Win 10 Enterprise LTSC builds and don't have any of the standard Pro builds running in my environment. It's actually best to have Defender running alongside Endpoint, IMHO, simply because Defender is a file-level scanning tool whereas Endpoint is process-based. Just make sure to set up the limits for Defender and you get the best of both worlds while testing like this.


Thank you for your reply; I will keep in mind the distinction between file-level and process-level scanning.
Although I tried disabling Defender and SmartScreen, there is still no reaction from Elastic EDR to the EICAR file. I'm following the same steps I did in the previous 7.10 release, but in 7.11 it's just not working anymore. I'm using default settings.

Just from what I've seen, EICAR does not get detected by Endpoint. It's a dummy "file" with no payload, so it's not surprising to have it skipped. Try Mimikatz and see if it at least triggers the Endpoint Security event. It will not trigger anything to recreate the endgame-* index unless you actually run it. Mimikatz will not cause harm just by being run directly, at least in its current version; it's a tool, and the payload is only used after you have selected a target.

For anyone else that runs into this: here's an example that shows the index is still present days after the event has been cleared, which is the expected result. The rules should not drop to Failed when nothing is triggered.

Endpoint Security I triggered with Mimikatz, and now that the index has been recreated the monitor shows it as good. It does NOT mean that an event was triggered in the last few minutes.
Malware - Prevented - Endpoint Security uses the endgame-* index, which is not created until something would actually trigger it. I for one will live with Failed until we can figure out how to create the index again without actually running malware in my network. So far I haven't figured out how to get the endpoint agent to push the index, and that's mostly from lack of trying, honestly.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.