Feedback regarding Synthetics Tests

Dear all,

We are currently evaluating Elastic Synthetic Monitoring with two applications (one Java with JSP, one Angular), and although we have not created large test suites yet, I would like to share our opinion on the current status:
Overall, I already like the experience of running and monitoring the tests, even though the recorder is still at an early stage and misses a few features that would make it really useful. The two things I like most are the project monitors, which support version control, and the detailed overview of what is happening in each step (such as the screenshots and the detailed network requests).

With this being said, here is a list of features that would greatly improve the experience in my opinion:

Prio 1
My biggest problem is the set of privileges currently required for creating Synthetic Monitoring tests on a private location: I need to grant all privileges on Fleet and Integrations across all spaces even when a user should only be able to push tests for a SINGLE space on a SINGLE private location. With these privileges, the user could modify private locations, edit integrations other than Synthetic Monitoring, and edit all other Fleet agents, and they would also see all spaces instead of just the ONE space they have access to. So basically, I would either have to grant users full access to Fleet or manage all of their tests for them.

While the recorder is a practical tool for getting started with synthetics tests, it is pretty much useless after the initial creation. Because we cannot import existing scripts, we cannot use the tool to create/update/delete actions or steps, and we cannot use its test feature to check whether an updated script works correctly. Currently, the only way is to change the files manually (which some testers may not be able to do due to missing JavaScript knowledge) and to test the changed script directly in Elastic (which may cause alerts for failed steps). Therefore, the ability to import an existing script file would be an important step forward.

Prio 2
As I am already using Elastic APM, I configured the APM service name for my tests, but I was unable to find a link within the synthetic test details to the corresponding APM traces. It would be great to have a direct link to APM (and of course to other features like logs and metrics).

Synthetic tests could also use a bit of improvement when it comes to custom certificates issued by our own root CA. You can either disable certificate validation (which nobody should ever do), or you can create a custom image based on the official Elastic Agent image with your own certificates included. We currently use option 2, but it would be nice not to have to maintain our own version of the image.

Prio 3
Another improvement would be to validate the certificates of the servers visited by synthetic tests in the same way as is already done for single-page HTTP tests. Of course, this can currently be worked around with a dummy HTTP test against the server, but native support would still be great.

I don't know if this is possible, but I would like to be able to run the synthetic tests in a CI pipeline: when using project monitors, the tester or developer could push test changes to Git, and the CI would run the tests and fail if any of them fail. This way, only correct tests would be merged into the main branch. Maybe this could be made possible with a special command option for the Elastic Agent image?

When running a test with the Synthetics Recorder, the browser window is closed immediately after completion. It would be great if the browser window stayed open after completion: for failed tests, we could check what happened and fix the test before restarting; for successful tests, we could then add additional actions/steps.

It would be cool if the recorder had the option to reorder actions within a step.

Currently, the selector is already shown when hovering over an element in the browser. When adding an assertion, we need to manually type this selector into the recorder. It would be great if we could either copy the selector for the current element to paste it into the recorder or if we could select an element from within the recorder (like the element selector feature in the browser DevTools).

It would be a good addition to expose settings like timeouts and so on in the recorder (settings which can currently be configured via the Playwright options in Kibana).

It would be great to have parameter support in the recorder. This way, we could already create the parameters there and prevent hardcoded entries from being pushed to Elastic and Git. This is especially useful for not leaking passwords.
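
To illustrate what I mean, this is roughly how parameters already work in hand-written journeys (a minimal sketch; the parameter names are made up, and the values would come from synthetics.config.ts or the --params CLI flag):

import { journey, step } from '@elastic/synthetics';

journey('Login with parameters', async ({ page, params }) => {
  step('Log in without hardcoded credentials', async () => {
    await page.goto(params.appUrl);                          // e.g. set per environment
    await page.locator('#inputUname').fill(params.username);
    await page.locator('#inputPwd').fill(params.password);   // never hardcoded in Git
    await page.locator('#inputPwd').press('Enter');
  });
});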

When an application uses OIDC (e.g. with Keycloak), the Synthetics Recorder detects the redirect as a navigation and adds it to the test. Although it is easy to delete the step, maybe it would be possible to detect such redirects and ignore them? Maybe it even makes sense to ignore all redirects, as they are sent by the server and we expect the server to send them on every run?

Best regards
Wolfram


Hi Wolfram,

Thank you so much for going to this effort to collate and feed these back to us - it really is so useful and appreciated by the team.

To respond to a few specifics (we will follow up shortly on more of your points):

You mention that when making changes to a script, they cannot be tested in the Recorder, and users instead have to risk running them in Elastic, which could cause an alert if they fail.

It is possible to run the monitors outside of Elastic Observability, such as locally (as you’re testing changes), or via CI (which you also request).

This is done by using npx @elastic/synthetics (as documented here). We also have demos of how to automate these, for example with GitHub Actions, with some examples here.

If running locally, you can also run with the browser shown (non-headless) and keep the browser open when there are errors, for example with:

npx @elastic/synthetics example.journey.ts --pause-on-error --playwright-options '{"headless": false}'

We will be in touch about your other points.

Thanks again,
Paul.


Hello Paul,

Thank you so much for your response.

I completely missed that point, sorry :frowning: Maybe you could discuss whether it is worth making this more visible? I really did check the docs and did not find it:

  • I checked the menu, but as I already knew how to write the tests, I didn't look into Write a synthetics test for information on how to run them. Maybe you could move that to a separate menu link?
  • I checked the Synthetics CLI docs here, but I only found the commands init, push, and locations, and frankly, I missed the fact that running the tests is not a separate command. I expected a command like run here...

Best regards
Wolfram

Hi Wolfram. Thank you for this incredibly detailed feedback. I genuinely appreciate the time you took to collate it and write it all out.

Regarding your two top priorities, we're looking at how to simplify the way the permission model works. As you alluded to, there's a set of conflicting permissions. We have some ideas on how to make it work similarly to what you've described, and I'll let you know when we've done a bit more there.

In terms of the recorder, we have been tracking a number of enhancements that align with your points. Here's a link to a number of tracking issues where you can follow the progress:

Regarding your secondary priorities, we have a significant chunk of work prioritised to seamlessly link APM and Synthetics in a really great way. I'll be able to share more on that in the coming releases after 8.9. When it comes to custom certs, we have a tracking issue for this here: Support custom cert store · Issue #170 · elastic/synthetics · GitHub

Finally, about the certificate testing: this is something we wish we could have included in our 1.0 release, but it didn't make it. Phase one is a shift of the current ping-based capabilities into the new Synthetics UI. After that, our goal is to enable cert checking for browser-based monitors. It's a slightly bigger UX challenge (are we testing the root document, other network requests, third parties?), so I'll loop you in with our UX researcher as we progress with that. You can follow this work on these two tracking issues: https://github.com/elastic/beats/issues/22326 and [Heartbeat] Check certificates for Synthetic monitors (all hosts) · Issue #22327 · elastic/beats · GitHub

Again, I really want to thank you for this feedback. This isn't just lip service. We really do want to make the product even better and this type of stuff is gold dust to the team. Please don't hesitate to keep it coming.


Hello @Wolfram_Haussig, we have merged a PR simplifying the permissions related to private location monitors in the Synthetics app. Basically, you will only need Fleet permissions when creating new agent policies. Once that is done, everything else, like creating locations against policies or managing monitors in private locations, can be done with the Synthetics app permissions.

This will go into the next release, which is 8.9.


Dear team,

Thank you so much for the links - I have already subscribed to the GitHub issues to be informed about any news, and I am happy that our main problem regarding privileges might be solved in 8.9 :+1:

We are making progress with our tests and found a few other (but only minor) issues. Although some of them may be related to Playwright rather than the Elastic Synthetics package itself, I will list them here for reference:

  1. Different behaviour between headless and interactive mode
    I am a German user, and when running the Synthetics Recorder, the application under test is shown in German. This is also true when running synthetics with --playwright-options '{"headless": false}', but when running the tests in headless mode, the monitor suddenly fails. After digging around, I found that in headless mode the application is suddenly shown in English. I solved this by setting the locale in the playwrightOptions (see the sketch below), but I think it would make sense to have the same behaviour in all environments.
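
For reference, a minimal sketch of pinning the locale at the project level (the de-DE value is just an example; playwrightOptions accepts the usual Playwright browser/context options):

// synthetics.config.ts
import type { SyntheticsConfig } from '@elastic/synthetics';

const config: SyntheticsConfig = {
  playwrightOptions: {
    // Pin the browser locale so headless and headful runs behave the same
    locale: 'de-DE',
  },
};

export default config;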

  2. Minor code generation issue for downloads
    When I use the recorder and click on a download, it generates code like this:

const downloadPromise = page.waitForEvent('download');
await page.getByRole('button', { name: 'Excel Export' }).click();
download = await downloadPromise;

The error at runtime is:

 stack: |-
   ReferenceError: download is not defined
       at Step.callback (D:\Dev\WORKSPACE_NEU\tcmore-test\journeys\example.journey.ts:16:5)
       at Runner.runStep (D:\Dev\WORKSPACE_NEU\tcmore-test\node_modules\@elastic\synthetics\src\core\runner.ts:212:7)
       at Runner.runSteps (D:\Dev\WORKSPACE_NEU\tcmore-test\node_modules\@elastic\synthetics\src\core\runner.ts:262:16)
       at Runner.runJourney (D:\Dev\WORKSPACE_NEU\tcmore-test\node_modules\@elastic\synthetics\src\core\runner.ts:352:27)
       at Runner.run (D:\Dev\WORKSPACE_NEU\tcmore-test\node_modules\@elastic\synthetics\src\core\runner.ts:445:11)
       at Command.<anonymous> (D:\Dev\WORKSPACE_NEU\tcmore-test\node_modules\@elastic\synthetics\src\cli.ts:137:23)

To fix this, I had to add the var keyword:

const downloadPromise = page.waitForEvent('download');
await page.getByRole('button', { name: 'Excel Export' }).click();
var download = await downloadPromise;
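
Alternatively, declaring the variable with const works just as well and keeps it block-scoped:

const downloadPromise = page.waitForEvent('download');
await page.getByRole('button', { name: 'Excel Export' }).click();
const download = await downloadPromise;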
  3. Use import instead of require for project monitors
    Currently, the Recorder generates the following statement for project monitors:
const { journey, step, expect } = require('@elastic/synthetics');

This works fine, and as long as you have only one journey file, it is also not a problem in other tools. But with multiple journey files and tools like VS Code, you get the following error:

Cannot redeclare block-scoped variable 'journey'. ts(2451)

This can be fixed by rewriting it to:

import { journey, step } from "@elastic/synthetics";

Would it be possible for the recorder to generate an import statement instead of require?

  4. Wrong code generation on navigation
    A few times - but not always - the Recorder generated wrong code when submitting a form. I was not able to pin down in which situations this happens.
    Instead of:
journey('Recorded journey', async ({ page, context }) => {
  step('Go to application', async () => {
    await page.goto('https://my-host/prod/my-app/login.html');
    await page.locator('#inputUname').click();
    await page.locator('#inputUname').fill('user');
    await page.locator('#inputUname').press('Tab');
    await page.locator('#inputPwd').fill('S3cr3t!');
    await page.locator('#inputPwd').press('Enter');
  });
});

The Recorder sometimes generates:

journey('Recorded journey', async ({ page, context }) => {
  step('Go to application', async () => {
    await page.goto('https://my-host/prod/my-app/login.html');
    await page.locator('#inputUname').click();
    await page.locator('#inputUname').fill('user');
    await page.locator('#inputUname').press('Tab');
    await page.locator('#inputPwd').fill('S3cr3t!');
    await page.goto('https://my-host/prod/my-app/index.html');
  });
});

So instead of submitting the login form, the generated code only fills the form and then tries to visit the application homepage, which requires authentication.

Best regards
Wolfram

PS - cleaning up objects after a test

The following is neither a bug nor a feature request, but a bit of info for others (or maybe someone has a better idea?):
Not all applications are reporting tools where the monitor only performs read-only operations; there are also applications where we may want to monitor the process of creating and/or modifying objects. In this use case, the synthetic monitors may fail when the previous run did not clean up correctly. To make the cleanup more resilient, I moved the cleanup actions out of the normal journey steps and run them after the journey has completed. This is my little cleanup.helpers.ts:

import { after } from "@elastic/synthetics";

/**
 * Cleanup Entry
 */
class CleanupEntry {
  /**
   * constructor
   * @param name the name of the object to cleanup
   * @param callback the function containing the cleanup logic
   */
  constructor(public name: string, public callback: () => Promise<void>) {}
}

/**
 * Contains all cleanup actions to process
 */
const toClean: CleanupEntry[] = [];

/**
 * add a cleanup action to run after the journey is complete
 * @param name the name of the object to cleanup
 * @param asyncCleanupFunction the function containing the cleanup logic
 */
export function doCleanup(
  name: string,
  asyncCleanupFunction: () => Promise<void>
): void {
  toClean.unshift(new CleanupEntry(name, asyncCleanupFunction));
}

/**
 * register the cleanup process to run after the journey is complete
 * @param page the page object from the journey
 */
export function registerCleanup(page): void {
  after(async ({ params }) => {
    for (const entry of toClean) {
      try {
        console.log("Cleaning: " + entry.name);
        await entry.callback();
      } catch (e) {
        console.log(e);
      }
    }
  });
}

This can now be used like:

journey("Recorded journey", async ({ page, context }) => {
  // Only relevant for the push command to create
  // monitors in Kibana
  monitor.use({
    id: "example-monitor",
    schedule: 10,
  });
  step("Login", async () => {
    await page.goto(
      "https://myhost/myapp/login.html"
    );
    await page.locator("#inputUname").click();
    await page.locator("#inputUname").fill("user");
    await page.locator("#inputUname").press("Tab");
    await page.locator("#inputPwd").fill("s3cr3t!");
    await page.locator("#inputPwd").press("Enter");
    doCleanup("Logout", async () => {
      await page
        .getByRole("link", { name: "Logout" })
        .click();
    });
  });
  step("Add User", async () => {
    //... create user code
    doCleanup("Delete User", async () => {
		//... delete user code
    });
  });
  step("Add Group", async () => {
    //... create group code
    doCleanup("Delete Group", async () => {
		//... delete group code
    });
  });
  registerCleanup(page);
});

When a step executes successfully, doCleanup registers the cleanup code to be run at the end of the journey in reverse order. This allows objects created later to be deleted before the objects they depend on.
When a single cleanup callback fails, the remaining ones are still attempted so that as much as possible is cleaned up, and the error is logged to the console.
When a step in the journey fails, the corresponding cleanup job is never registered and therefore will not run.

Unfortunately, throwing an exception in the after hook does not fail the run because the journey steps themselves were successful. This means that, currently, a failed cleanup will only become visible on the next run, when the creation of the object fails.

Thanks for all the feedback. As a partial response, I've opened Prompt for locale on synthetics init · Issue #786 · elastic/synthetics · GitHub. The way headful vs. headless handles the locale is somewhat unreliable: it might work locally on your laptop while developing and testing, but a production machine may have a different locale. What do you think of prompting users to set the locale explicitly at the project level?


I think this is the best solution. I have already commented on the issue as well.

I've created Recorder enhancements · Issue #432 · elastic/synthetics-recorder · GitHub to track some of the recorder enhancements referenced above.

Hi all,

I hope you are not yet annoyed by me posting my findings here:

  1. Push does not correctly detect all changes
    I have a project where I split the logic of the tests over multiple files. This looks something like this:
    application.journey.ts:
journey("TC&More", async ({ page, params }) => {
   testFunction1(page);
   ...
});

function1.part.ts:

export const testFunction1 = (page: Page) => {
  step("Test Function1", async () => {
    ...
  });
  ...
};

This works fine. However, a change to function1.part.ts is not detected when running npx @elastic/synthetics push. I need to touch the journey file itself for the change to be detected and the update to be pushed.
I would expect something along the lines of (a rough sketch of this idea follows the list):

  • package the journey with all dependencies
  • create a checksum over the package
  • if the checksum changed, there was a modification and the journey needs to be pushed
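
Purely to illustrate the idea (this is not how push works today, and the helper below is made up for this post):

import { createHash } from "crypto";
import { readFileSync } from "fs";

// Hash the journey file together with every file it imports; if the resulting
// checksum differs from the one recorded at the last push, the journey changed
// and needs to be pushed again.
export function journeyChecksum(files: string[]): string {
  const hash = createHash("sha256");
  for (const file of [...files].sort()) {
    hash.update(readFileSync(file));
  }
  return hash.digest("hex");
}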
  2. Screenshots in test mode
    Since we know there are subtle differences between headless and non-headless mode, I used --screenshots on to enable screenshots. Unfortunately, they are stored as JSON and not as PNG or similar. Will there be an option to convert them to a format that is easier to view? Getting the base64 value, decoding it, and viewing it manually is currently not practical.
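
In the meantime, a small script along these lines could decode them (a rough sketch; the blob field name is only a guess and would need to be checked against the actual JSON structure):

// decode-screenshot.ts
import { readFileSync, writeFileSync } from "fs";

const [, , inputFile, outputFile = "screenshot.png"] = process.argv;
const doc = JSON.parse(readFileSync(inputFile, "utf8"));
// "blob" is a hypothetical field name for the base64 image data - adjust to the real file
writeFileSync(outputFile, Buffer.from(doc.blob, "base64"));
console.log("Wrote " + outputFile);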

  3. Popup handling
    I have a page with a popup which I open using code similar to this:

const popupPromise = page.waitForEvent("popup");
await page.getByText("Link").click();
const identCheckPage = await popupPromise;

This works fine, but it seems to confuse the screenshots shown in Kibana:
When I use await identCheckPage.close(); as the last command within the test step, the final screenshot does (obviously) not show the popup because it is already closed.
When I use after(async ({ params }) => { await identCheckPage.close(); });, the final screenshot shows the popup as expected, but the screenshots of all following steps do too. Any idea how to solve that?

Best regards
Wolfram

Hi @Wolfram_Haussig thank you for the feedback, we really appreciate it. Any kind of feedback or suggestions are always welcome.

  1. I was able to reproduce it and created an issue covering it.
    Synthetics `push` command doesn't detect changes in imported files · Issue #802 · elastic/synthetics · GitHub
  2. The team had a short discussion about it and will likely discuss it further and come up with some facility in the Synthetics CLI to retrieve PNG images. If we manage to triage it soon and create an issue, we'll link it here.
  3. You can close the popup in a subsequent step. For example, if Step 01 is where you want to capture the popup's screenshot, you can let Step 01 end with the popup open, and in Step 02 close the popup as the first action. You can use journey-wide variables to achieve that (this also works with inline journeys defined in the UI).
    let identCheckPage;
    step('Deal with popup', async () => {
        await page.getByText("Link").click();
        identCheckPage = await page.waitForEvent("popup");
        // Do something with identCheckPage but do not close it;
    })
    
    step('After popup', async () => {
        // Close popup as it's not needed here
        await identCheckPage.close();
        
        // Actions related to current step
    })
    
    Or here is a different example that may also help:
    // The first step opens a new tab whose screenshot we want to capture in the first step
    step('Home -> Meta Pay', async () => {
        await page.goto('https://facebook.com'); // main page
    
        // Accept cookie banner if shown
        try {
            await page.click('[data-testid=cookie-policy-manage-dialog-accept-button]', { timeout: 2 * 1000 });
        } catch (e) { /*Empty*/ }
    
        // This will open a new tab and screenshot will include the new tab 
        await page.click('[href="https://pay.facebook.com/"]');
    });
    
    // The second step operates on the opened new tab and opens yet another page (3rd page). In this step, we want to capture the screenshot of the 3rd page.
    step('Meta Pay -> Help', async () => {
        // Get a reference to new tab (will only work if the previous step has opened a new page)
        const newTabPage = await page.context().waitForEvent('page');
    
        await newTabPage.click('text=Help');
        await newTabPage.waitForSelector('[href="/support/"]')
    });
    
    If it still doesn't solve your problem, please share a bit of script and we will try to help.

Hope this helps, and feel free to let us know if we can help further.

Regards


Hello @abdulz,

Thank you so much for your response; this solved my problem! I somehow didn't think of closing it in a separate step.

Best regards
Wolfram
