Cross-browser Testing Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/tag/cross-browser-testing/

Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies
https://app14743.cloudwayssites.com/blog/playwright-visual-testing-strategy/ | Thu, 19 Mar 2026

Visual testing in Playwright often forces teams to choose between strict failures, snapshot maintenance, and CI pipeline complexity. This article explores how a single configuration flag introduces three different strategies for handling visual differences and improving the Playwright developer experience.


Hello everyone! I’m Noam, an SDK developer on the Applitools JS-SDKs team. While my day-to-day focus is on core engineering, I work closely with our field teams and occasionally join technical deep-dive sessions with customers.

In these conversations, we frequently encounter questions about performance and the engineering philosophy behind our integration. Specifically, there is often curiosity about how to make visual testing feel more “Playwright-native” and natural to developers.

In this post, I’ll share the design logic behind these architectural choices so you can apply these patterns in your own CI pipelines in a way that fits your organization’s needs.

Adding an unresolved state to Playwright

Integrating visual regression testing into Playwright requires combining two different status models: Playwright’s binary Pass/Fail and the visual testing concept of unresolved.

In visual testing, instead of just two states (passed and failed), there's a third state: unresolved. This state indicates that a difference was detected, but a human decision is required to determine whether it is a bug or a valid change that should be approved as the new baseline.

Playwright doesn't support this third state out of the box. Visual test maintenance using Playwright's native toHaveScreenshot API forces the developer into a cumbersome cycle requiring three separate test executions:

  1. First, the developer runs the tests to see the failure.
  2. Then, they run again with the --update-snapshots flag to create new baseline images.
  3. Finally, most developers run a third time to validate that everything works as expected with the updated baseline, which isn't always the case: Playwright's native comparison method (pixelmatch) tends to be very flaky, unlike Visual AI.

After this local cycle, the developer must commit the new baseline images to the repository (bloating the git history) and wait for a new CI execution to provide final feedback. For dev-centered organizations that focus on feedback loop velocity, this workflow is… suboptimal. Personally, I believe that's one of the reasons visual testing isn't as popular as it should be among Playwright users.
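
For reference, here is a minimal sketch of the native cycle described above (real Playwright APIs; the page URL and snapshot name are made up):

// Playwright's built-in visual assertion: pixel-based, baselines stored in the repo
import { test, expect } from '@playwright/test';

test('homepage visuals', async ({ page }) => {
  await page.goto('https://example.com');
  await expect(page).toHaveScreenshot('homepage.png'); // compared with pixelmatch
});

// The three-run cycle from the CLI:
//   npx playwright test                      -> run 1: see the failure
//   npx playwright test --update-snapshots   -> run 2: write new baselines
//   npx playwright test                      -> run 3: confirm the suite is green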

When we engineered the Applitools fixture, one of our goals was to support this unresolved state natively, without disrupting Playwright's core lifecycle—specifically its Worker Processes and Retry mechanisms.

The solution rests on two key engineering decisions: moving rendering to the background (async architecture) and giving developers control over the exit signal and performance tradeoffs (failTestsOnDiff).

We don’t block test execution when Applitools is rendering

The core value of visual testing lies in AI-based comparison, which eliminates false positives, and in multi-platform rendering.

Architecturally, these processes are cloud-native services.

  • AI-as-a-Service: Just like massive LLMs or other generative models, the Visual AI engine runs on specialized cloud infrastructure optimized for heavy inference. It cannot simply be “installed” on a lightweight CI agent.
  • Platform Constraints: Authentic cross-platform rendering (e.g., iOS Safari on a Linux CI agent) is physically impossible on a single local machine.

Since these operations inherently occur remotely, performing them synchronously would force the local test runner to idle while waiting for network round-trips and cloud processing.

To solve this, we designed the fixture around an asynchronous architecture:

  • Instant Capture: When eyes.check() is called, we synchronously capture the DOM and CSS resources (instead of a rasterized image). This operation is extremely fast.
  • Immediate Release: We purposefully use soft assertions by design. We release the Playwright test thread immediately so the functional logic can proceed to the next step or test case without blocking.
  • Background Heavy Lifting: The heavy work—uploading assets, rendering across different browsers and operating systems, and performing the AI comparison in the Applitools cloud—starts immediately in the background, managed by the Worker process.
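
Put together, the flow looks roughly like this. A sketch only: the fixture import path and the eyes fixture name are assumptions based on this article, not a confirmed API.

// Non-blocking flow with the Applitools fixture (illustrative names)
import { test } from '@applitools/eyes-playwright/fixture'; // hypothetical path

test('checkout flow', async ({ page, eyes }) => {
  await page.goto('/checkout');
  await eyes.check({ name: 'Checkout page' }); // DOM + CSS captured synchronously; returns immediately
  await page.getByRole('button', { name: 'Place order' }).click(); // functional logic continues
  await eyes.check({ name: 'Order confirmation' }); // rendering + AI comparison run in the background
});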

The “Draining Queue” Effect

This architecture explains why the Playwright Worker sometimes remains active after the final test completes.

The background tasks are limited only by your account's concurrency settings and the screenshot size. For example, when rendering a 10,000 px page on a small mobile device, the rendering infrastructure might need time for scrolling and stitching. If your functional tests execute faster than the background workers can process the queue (rendering & comparing), the Worker process stays alive at the end solely to "drain the queue" and ensure data integrity.

While this ensures your test logic runs at maximum speed by offloading the processing cost to the background, it can cause friction and frustration when developers see workers "hanging" after tests complete. When facing such issues, our support team is here to advise and assist with various solutions—we can investigate execution logs and, if needed, even make custom suggestions to tailor Eyes-Playwright to your needs.

Solving the Matrix Problem

Standard Playwright documentation recommends defining multiple projects in playwright.config.ts to cover different browsers (Chromium, Firefox, WebKit) and various viewport sizes.

While this ensures coverage, it introduces a linear performance penalty (O(N)). To test three browsers across two viewports, your CI must execute the functional logic (clicks, waits, navigation) six times. That is 6x more load on the CI machine and the testing environment.

We recommend shifting this workload to the Ultrafast Grid (UFG).

In this mode, you execute the Playwright test once, typically on Chromium. We upload the DOM state, and our cloud infrastructure renders that state across all configured browsers and viewports in parallel.

This transforms an O(N) execution problem into an O(1) execution problem, significantly shortening the feedback loop.
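
Conceptually, the configuration looks like this. A sketch under stated assumptions: treat the exact property names as assumptions and consult your SDK's documentation for the real configuration surface.

// applitools.config.js (sketch) - run once on Chromium locally; the UFG renders
// the captured DOM across the whole matrix in parallel
module.exports = {
  browsersInfo: [
    { name: 'chrome', width: 1280, height: 800 },
    { name: 'firefox', width: 1280, height: 800 },
    { name: 'safari', width: 1024, height: 768 },
    { iosDeviceInfo: { deviceName: 'iPhone 14' } }, // rendered in the cloud, not locally
  ],
};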

The Strategy: failTestsOnDiff

Since the actual comparison happens asynchronously and potentially completes after the test logic finishes, we need a mechanism to map the visual result back to the Playwright status.

This is controlled by the failTestsOnDiff flag. It's not just a boolean; it's a strategic choice for your CI pipeline.

Strategy A: failTestsOnDiff: false

  • The Logic: This is the configuration our own Front-End team uses. We believe that a Visual Change ≠ a Test Failure.
  • Behavior: The Playwright test passes (Green). The unresolved status is reported externally via our SCM integration (GitHub/GitLab).
  • Why: Retrying a visual test is computationally wasteful—the pixels won't change on the second run. By keeping the test "Green," we avoid triggering Playwright's retry mechanism. The decision is moved to the Pull Request, where it belongs.

Read more about SCM integration, or hop directly to our GitHub, Bitbucket, GitLab, or Azure DevOps articles.

Strategy B: failTestsOnDiff: afterAll

  • The Logic: You need a "Red" pipeline to block deployment, but you want to avoid the noise of retries and gain a significant performance improvement.
  • Behavior: Individual tests pass, but the Worker Process exits with a failure code if any diffs were found in the suite.
  • Why: This provides a hard gatekeeper for the build status. It allows the Eyes rendering farms to continue processing visual test results in the background without blocking the execution thread, allowing the worker to move on and handle more tests efficiently.

Strategy C: failTestsOnDiff: afterEach

  • The Logic: Immediate feedback loop.
  • Behavior: Fails the test immediately in the afterEach hook.
  • Why: Best for local development, where you want to see the failure immediately in the console. It is also useful if you use the trace: retainOnFailure setting in Playwright, as it ensures traces are preserved for unresolved visual assertions. Not recommended for CI due to the retry loops described above.
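
In configuration terms, choosing a strategy is a one-line change. A sketch only: the flag name and values mirror this article and the TL;DR table below, while the config file location and value types are assumptions.

// applitools.config.js (sketch)
module.exports = {
  failTestsOnDiff: false,          // Strategy A: stay green, review diffs in the PR
  // failTestsOnDiff: 'afterAll',  // Strategy B: fail the worker process at the end
  // failTestsOnDiff: 'afterEach', // Strategy C: fail immediately (best for local runs)
};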

TL;DR – When to use each setting

| Mode | afterEach | afterAll | false |
| --- | --- | --- | --- |
| Performance | Less performant: the Playwright worker waits after each test for all renders to complete and for the Visual AI to compare the results | Best performance: the Playwright workers collect the resources and manage the rendering and Visual AI comparisons in the background | Best performance: similar to afterAll |
| Observability | Best: the Applitools reporter shows all statuses correctly; other reporters consider unresolved tests as failing | Good: the Applitools reporter shows all statuses correctly; other reporters consider unresolved tests as passing. You get a failure of the worker process, but other reporters won't link it to a specific test case | Great in pull requests (if SCM integration is enabled): the Applitools reporter reflects the tests perfectly; other reporters consider unresolved tests as passing |
| Best fit | Local testing | Local testing AND CI environments without SCM integration | CI environments with SCM integration |

Closing the Visibility Gap: The Custom Reporter

If you adopt Strategy A (false) or Strategy B (afterAll), you introduce a secondary challenge: Visibility.

Since Playwright technically marks these tests as Passed to avoid retries, the standard Playwright HTML Report will show them as “Green,” potentially masking unresolved visual differences that require attention.

To bridge this gap without forcing developers to switch context, we developed a Custom Applitools Reporter.

This reporter extends the standard Playwright HTML report. It injects the actual visual status (Passed, Failed, or unresolved) directly into the test results view.

  • True Status: You see which tests have visual diffs, even if the Playwright exit code was successful.
  • Direct Links: It provides a direct link from the test report to the specific batch results in the Applitools Dashboard.
  • Context: It enriches the report with UFG render status and batch information.
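
Wiring it up might look like this in playwright.config.ts. A sketch only: the reporter package path is illustrative, not a confirmed package name.

// playwright.config.ts (sketch)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['html'],
    ['@applitools/eyes-playwright/reporter'], // hypothetical path; check the SDK docs
  ],
});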

This ensures you get the best of both worlds: the optimization of a "Green" CI run (no retries), with the transparency of a report that highlights exactly where manual review is needed.

Summary

The Applitools Playwright fixture is designed to be non-blocking and scalable. By leveraging an asynchronous architecture and the Applitools Ultrafast Grid, we offload the heavy lifting from your CI. By correctly configuring failTestsOnDiff, you ensure that your pipeline reflects your team's engineering culture—whether that's strict gating or modern, PR-based visual review.

Quick Answers

What is visual regression testing in Playwright?

Visual regression testing in Playwright verifies that changes to an application’s UI do not introduce unintended visual differences. Playwright can perform basic visual regression checks using screenshot comparisons like toHaveScreenshot, while dedicated visual testing tools (such as Applitools Eyes) extend this by detecting meaningful UI changes, managing baselines, and enabling review workflows for approving visual updates.

What is the best way to do visual testing in Playwright?

Playwright supports basic visual testing through screenshot comparisons such as toHaveScreenshot, but this approach can become difficult to maintain at scale. Dedicated visual testing tools, like Applitools Eyes, extend Playwright by adding Visual AI comparison, cross-browser rendering, and review workflows that allow teams to detect visual regressions without maintaining large sets of screenshot baselines.

How does Playwright screenshot testing (toHaveScreenshot) compare to visual regression testing tools?

Playwright's toHaveScreenshot performs pixel-by-pixel image comparisons against stored baseline images. While this works for simple cases, it often requires updating and maintaining many snapshots. Visual regression testing tools like Applitools Eyes use Visual AI to detect meaningful UI changes while ignoring insignificant rendering differences, provide review workflows to approve or reject visual changes, and allow custom match levels for different regions of the screen.

Can Playwright run visual tests across multiple browsers and devices?

Yes, but with a limited scope. Natively, Playwright supports three browser engines (Chromium, Firefox, and WebKit), but it does not execute tests across different real operating systems or mobile devices. This lack of OS-level rendering limits coverage and imposes a risk of missing platform-specific visual bugs. For example, see how a frontend team caught a visual bug specific to Mac Retina screens that a standard engine check would miss.

How can you run cross-browser visual tests in Playwright without running tests multiple times?

Normally, cross-browser testing requires executing the same tests separately for each browser configuration. Tools like Applitools Ultrafast Grid allow tests to run once while visual rendering is executed across multiple browsers and viewport combinations in parallel. This removes the need to multiply test execution across the full browser matrix.

Why is cross-browser testing in Playwright so slow?

Natively, cross-browser testing introduces a significant performance penalty. Playwright must execute the entire test logic (clicks, waits, network requests) separately for every browser and viewport configuration. Modern visual testing tools (e.g., Applitools Ultrafast Grid) eliminate this overhead by executing the test logic just once locally, performing the cross-browser rendering and visual comparison in parallel in the cloud.

Accelerate Test Creation and Coverage with Code and No-Code Speed Runs
https://app14743.cloudwayssites.com/blog/accelerate-test-creation-coverage-code-no-code/ | Fri, 26 Sep 2025

Testing moves fast. See how teams use code and no-code speed runs to scale coverage, reduce maintenance, and deliver faster feedback with AI.


When testing needs to keep up with faster releases and growing complexity, the challenge isn’t just what to automate—it’s how fast you can create and validate reliable tests.

Code and no-code testing now work together to accelerate test creation, expand coverage, and deliver faster feedback across browsers and devices. By combining AI-assisted test creation with visual validation, you can go from setup to scale in hours instead of weeks.

A Smarter Way to Split Your Effort

High-performing teams balance two types of coverage:

  • 20% custom flow tests: Focused, AI-assisted checks for your most critical user journeys
  • 80% visual coverage: Full-page validation across browsers and devices with Visual AI

This approach ensures your key flows are verified with precision while everything else is continuously validated for layout, content, and visual consistency.

Full-Site Testing in Minutes

With Autonomous testing, you can point to any URL—or even a subfolder—and let AI do the rest. It crawls your sitemap, creates baselines, and runs cross-browser and cross-device tests automatically.

Setup takes minutes. You can schedule recurring tests daily or weekly, and catch both visual regressions and new pages as they appear.

During one large-scale migration, this approach tested more than 1,500 pages across five browsers and devices. Visual AI caught thousands of small layout changes, grouped them by pattern, and reduced the workload to just 10 unique issues after a single fix acceptance.

Depth Where It Matters

For the 20% that need fine-grained control, AI-assisted test authoring speeds up creation. You can describe each action in plain English—“add item to cart,” “verify success message,” or “fill out this form”—and the system turns those steps into repeatable tests.

AI assists by:

  • Generating realistic test data
  • Creating textual and visual assertions
  • Masking sensitive fields automatically

The result: fast, accurate flows that non-coders and engineers can both maintain.

Reliable Execution, Every Time

Applitools’ deterministic LLM executes steps based on visual descriptions, not fragile locators or XPath. That means if a class name or element ID changes, the test still runs correctly.

It also eliminates token costs and flaky reruns common with external LLM agents, since all logic runs natively inside the platform.

Data Validation Included

End-to-end validation doesn’t stop at the UI. Within the same test, you can call APIs, capture responses, and assert that backend data matches what appears on screen.

Visual results, API responses, and data integrity checks all happen within a single low-code environment.

Reuse More, Maintain Less

Reusable test flows—like login, cleanup, or environment switching—save time and cut duplication. You can parameterize roles or URLs, then reuse those flows across staging, integration, and production.

That modular structure lets QA, developers, and product teams collaborate without reinventing the same tests for each environment.

The Fast Track to Full Coverage

By combining AI-assisted test creation with Visual AI validation, teams achieve:

  • Broader coverage with less maintenance
  • Faster release confidence
  • Consistent, human-readable results

Whether you write code daily or prefer a visual test builder, this blended approach keeps quality high and bottlenecks low.

Try It Yourself

See how AI-assisted testing speeds up coverage for your own apps with Applitools Autonomous, or explore how Visual AI helps teams validate every page and device in minutes.

Handling Animations and Loading Artifacts in Visual Testing
https://app14743.cloudwayssites.com/blog/handling-animations-and-loading-artifacts-in-visual-testing/ | Mon, 21 Jul 2025

Master dynamic content visual testing with our hands-on tutorial. Learn to capture rich UI experiences effectively.


Have you ever encountered a situation where you try to take a screenshot, but instead of the beautifully crafted UI, all you've got is an image of a spinner/skeleton/loading screen? Handling animations and loading artifacts in visual testing can be daunting.

Don’t worry – it can happen to anyone, and we’re not here to judge you 😉

One of the SDK engineers here at Applitools, Noam, breaks this down into a hands-on tutorial, hoping it will help you get a better understanding of the industry’s best practices around visual testing of rich and dynamic UI experiences.

Let’s dive right in!

Framework Native Solutions

Most frameworks already have different mechanisms to handle animations and loading artifacts. Keeping things simple is often the best way to achieve code stability and maintainability. Using your framework’s built-in tools would most often be the best approach.

For example:

Playwright JS

// Playwright: wait for spinner to be removed
await page.waitForSelector('.spinner', { state: 'detached' });
await eyes.check()

Cypress

// Cypress: wait for spinner to not exist
cy.get('.spinner').should('not.exist'); 
cy.eyesCheck()

Selenium JS

// Selenium: wait for spinner to be invisible
const spinnerElements = await driver.findElements(By.css('.spinner'));
if (spinnerElements.length) {
  await driver.wait(until.elementIsNotVisible(spinnerElements[0]), 5000);
}
await eyes.check()

A Common Pitfall

Even if the UI appears visually unchanged, frontend frameworks like React, Vue, and Angular may re-render elements under the hood. This can lead to stale element references, especially when capturing regions right after a DOM change.

Consider the following example:

cy.get('.main').then($el => {
  cy.get('.spinner').should('not.exist'); // spinner disappears after main was located

  cy.eyesCheckWindow({
    tag: 'main',
    target: 'region',
    element: $el, // stale reference if .main was replaced
  });
});
  1. First, Cypress locates .main
  2. Then, Cypress waits for the spinner to disappear
  3. This example would fail (even if the new element has the exact same properties) if the main element is replaced by another element

How to avoid that?

When possible (e.g., in Playwright), prefer locators over selectors. If you can't, it's better to use selectors instead of DOM references (e.g., element: '.main').
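
A minimal Playwright sketch of the safer pattern (locators are re-resolved at use time; the region setting follows this article's eyes.check examples, so treat the exact key as an assumption for your SDK version):

// Wait by locator, then capture by locator - no saved element handles
await page.locator('.spinner').waitFor({ state: 'hidden' });
await eyes.check({
  name: 'main',
  region: page.locator('.main'), // re-resolved on use, even if the node was replaced
});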

Videos, CSS Animations, GIFs

There are many techniques to eliminate other types of dynamic behaviour in web pages. Playwright, for example, provides a Clock API that allows pausing JavaScript time-related events (including JS-driven animations). It's also possible to inject custom CSS snippets to pause and reset CSS-related animations. Other specialized JS snippets would be required for resetting GIFs, videos, and so on – you get the idea.
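
For example, a Playwright sketch combining the Clock API with a CSS override (the Clock calls are real Playwright APIs; the CSS snippet is an illustrative assumption, not an official recipe):

// Take control of JS timers, then advance them without waiting in real time
await page.clock.install();
await page.clock.fastForward(1000);

// Pause CSS-driven animations and transitions before capturing
await page.addStyleTag({
  content: '*, *::before, *::after { animation-play-state: paused !important; transition: none !important; }',
});
await eyes.check();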

This never-ending cat-and-mouse game can be avoided by using the Applitools Ultrafast Grid (UFG). Instead of rendering web pages on locally executed browsers, the UFG team maintains specialized logic and fine-tuned commands that ensure a stable and consistent rendering experience. While UFG offers more than just rendering stability, it's worth noting that classic screenshots can still achieve stable results. UFG just makes it easier!

Algo-Based Solutions

If you intentionally want to capture dynamic content (e.g., animations, changes), a smarter strategy is to embrace that variability and use smart matching algorithms to compare just what you need, like those found in Applitools Eyes.

Any match level can be used for the entire screenshot or specific regions of the screen. Read more in the Match Level Best Practices tutorial. For example, algorithms like the Layout match level can drastically improve your experience with localization testing.

The waitBeforeCapture Setting

Performing wait operations can become more complicated when:

  1. Testing with no-code visual testing SDKs (e.g., eyes-storybook)
  2. Testing with advanced Eyes features like lazyLoading and layoutBreakpoints

The waitBeforeCapture setting was invented for these types of use cases (and a few others).

This setting can receive three types of arguments:

  1. Milliseconds – the simplest approach. While it's not always the most innovative or sophisticated pattern, in many cases it "does the trick." In general, waiting for explicit timeouts during tests is not recommended. However, compared to clock manipulations and code injections, sometimes the simplicity and stability are worth the longer run-time.
  2. Selector – when we’re waiting for something to appear, most SDKs support passing a selector, and Applitools Eyes will automatically wait for an element that matches this selector to appear in the web page.
  3. Custom function – see the code examples below

// eyes-storybook (applitools.config.js)
module.exports = {
  waitBeforeCapture: async () => {
    // Poll until the spinner is gone, then allow the capture
    while (document.querySelector('.spinner')) {
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
    return true;
  },
};

// eyes-playwright
await eyes.check({
  name: 'my-step',
  async waitBeforeCapture() {
    await page.locator('.spinner').waitFor({ state: 'hidden' });
  },
});

The waitBeforeCapture setting can be defined in your applitools.config file, in your eyes.check settings, using the Target settings builder, and in other similar places. Please refer to the documentation of the specific SDK you’re using for concrete examples.

Storybook Play Functions

A nice eyes-storybook-specific trick to achieve a desired rendering state would be a Storybook Play Function.

Applitools Eyes will run your play functions and wait for them to finish before capturing anything on the screen. Use Play Functions to navigate the story to an interesting state, and wait for the story to be stable inside the play function so Eyes captures the screenshot at the right moment.
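
For example, a CSF sketch using the testing utilities from @storybook/test (the story name, button role, and loading text are made up):

import { expect, userEvent, waitFor, within } from '@storybook/test';

export const LoggedIn = {
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);
    // Drive the story to the state we want to capture
    await userEvent.click(canvas.getByRole('button', { name: /log in/i }));
    // Settle before Eyes captures: wait until the loading indicator is gone
    await waitFor(() => expect(canvas.queryByText('Loading...')).toBeNull());
  },
};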

Applitools is Here to Help

We hope you’ve found this article interesting, and maybe it solved some of the most common visual testing issues you may have encountered. Go ahead and try these examples out for free with Applitools Eyes.

However, if something isn’t clear or if you’d like advice regarding the best way to incorporate visual testing into your organization, please don’t hesitate to reach out to our experts! Testing is our passion, and we’re here to help.

Quick Answers

What are “loading artifacts” in visual testing, and how do I avoid flaky tests?

Loading artifacts are transient UI elements, such as spinners, skeleton cards, and GIFs, that appear while data is fetched. If a screenshot is captured before they disappear, your baseline image won't match future renders, causing false failures (flaky tests).

Why do I get “stale element reference” errors after a React/Vue/Angular re‑render?

Modern frameworks often replace DOM nodes even when the UI looks identical. If you save a DOM reference (e.g., cy.get('.main')) before waiting for the spinner to vanish, that reference may point to a removed element, causing stale errors. Capture by selector or locator, not by saved element handles, to avoid this.

What is the waitBeforeCapture setting in Applitools, and what values can it accept?

waitBeforeCapture delays the screenshot until the page is stable. It accepts:

  • Milliseconds (e.g., 500)
  • A CSS selector to wait for element presence/absence
  • A custom async function for complex logic (e.g., loop until .spinner is hidden)

Can I use Storybook Play Functions to control the render state before visual testing?

Yes. In eyes‑storybook, Applitools runs each story’s Play Function and waits for it to finish—perfect for clicking buttons, filling forms, or pausing animations before the snapshot.


Is it better to fast-forward the JavaScript clock or add explicit waits for CSS animations?

Fast-forwarding the JS clock (e.g., page.clock.fastForward(1000) in Playwright) is usually more reliable and efficient than using hard timeouts. It advances timers without waiting in real time, making tests faster. However, it won’t affect CSS-driven animations since those still require CSS overrides to pause or skip transitions. For full stability, combine clock control with style injections or use Applitools Ultrafast Grid, which auto-handles CSS animations under the hood.

Think You Have Full Test Coverage? Here Are 5 Gaps Most Teams Miss
https://app14743.cloudwayssites.com/blog/expand-test-coverage-beyond-code-coverage/ | Fri, 20 Jun 2025

Even with 100% code coverage, critical bugs still slip through. In this post, we explore five common gaps in software test coverage—from missed visual defects to untested browser variations—and how modern teams are using visual AI and no-code test automation to close them.


You’ve got your unit tests. Your end-to-end flows. Maybe even 100% code coverage. But bugs still slip through.

That’s because full code coverage doesn’t guarantee full test coverage. Visual glitches, browser inconsistencies, and content drift often escape traditional automation — and they’re exactly the kinds of issues your users notice first.

In The Coverage Overlook, the kickoff session of our Testing Your Way: Code & No-Code Journeys webinar series, we explored five critical coverage gaps most teams miss — and how to close them with AI-powered visual testing and no-code tools.

1. Visual and Layout Bugs

Code-based assertions won’t catch when an element shifts, disappears, or overlaps. That’s where Visual AI steps in.

By analyzing the rendered UI — not just the DOM — Visual AI identifies layout issues, missing images, overlapping text, and subtle visual defects with a single line of code (or none at all).

“Visual AI can instantly catch layout shifts, missing elements, and new text that coded assertions would miss — all without the maintenance burden of custom locators.”
– Tim Hinds, Applitools

2. Cross-Browser and Device Inconsistencies

Most test suites default to Chrome. But real users span dozens of devices and browsers.

Visual AI tools like Applitools Eyes can validate your app across multiple browsers and screen sizes in parallel — using a single test run. No custom scripting required.

3. Dynamic Content Variations

Personalized content, A/B tests, and location-based content are tough to verify with scripted tests alone.

Visual AI combined with flexible match algorithms can confirm layout structure while ignoring safe visual differences — helping your team catch what matters, without writing exceptions for every variant.

4. Lower-Priority Flows and Pages

Teams tend to focus their test coverage on critical flows — like checkout or login — and leave lower-traffic pages untested.

No-code tools like Applitools Autonomous make it easy to cover the rest. A built-in crawler can scan your site and establish visual baselines across dozens (or hundreds) of pages — all without writing a single test script.

5. Accessibility Gaps

Code coverage can’t catch color contrast failures, missing labels, or overlapping elements that make your UI inaccessible.

Visual AI can. And with upcoming enforcement of the European Accessibility Act, now is the time to start catching these issues early.

Watch the Full Session On-Demand: Code & No-Code Journeys: The Coverage Overlook

Closing the Gap

Code coverage still has value — but modern teams are shifting toward user-centered test coverage.

As shared in the session, teams like Eversana are combining code-based, no-code, and visual testing strategies to expand coverage, accelerate feedback, and reduce risk. With this blended approach, they’ve achieved:

  • 65% reduction in regression testing time
  • 750+ hours saved per month
  • 90% test stability
  • A unified testing culture across manual testers, developers, and QA

What’s Next in the Series?

The journey continues with The Maintenance Shortcut, where we explore how teams are reducing flaky tests, eliminating brittle locators, and cutting test maintenance with Visual AI and Autonomous.


Quick Answers

Why isn’t 100% code coverage enough?

Code coverage measures lines executed, not what users see—visual defects, layout shifts, and browser differences can slip through.

Which testing gaps are most commonly missed?

Visual regressions, cross-browser/device inconsistencies, dynamic/personalized content, untested journeys, and accessibility issues.

How do modern teams close these gaps?

Use Visual AI (https://app14743.cloudwayssites.com/visual-ai) to validate pixels and Ultrafast Grid (https://app14743.cloudwayssites.com/ultrafast-grid) to scale UI checks across browsers/devices; add no-code flows with Autonomous to broaden coverage (https://app14743.cloudwayssites.com/platform/autonomous/).

What’s a practical first step in expanding test coverage?

Start by visual-validating your highest-traffic pages and critical journeys, then expand to your full cross-browser matrix.

How We Identified and Resolved a Bug Before Release Using Applitools Ultrafast Grid
https://app14743.cloudwayssites.com/blog/cross-browser-testing-at-applitools-using-ultrafast-grid/ | Tue, 07 Feb 2023


This is a story about how standard tests were not able to identify a bug: the CSS and HTML files were valid, but when rendered in Chrome on a Mac, images were not displayed correctly. Applitools Ultrafast Grid (UFG) helped us identify the bug at an early stage in development, before deploying the change. These types of bugs are a regular occurrence in any organization, and without UFG they can easily make it to production and remain there undetected until a customer complains about the problem. Translation support from Michael Sedley.

Front-end development is complicated, and it involves a wide range of knowledge and tools to develop web applications. With so many different systems, browsers, and devices, regression testing can hardly guarantee that an application will display correctly on every system and that a minor code change has introduced no visual regressions.

A real-life example occurred to me during my first week as an Applitools employee, when I fixed a minor bug using my Linux machine. Inadvertently, in the process, I created a more serious visual bug which was only visible on certain devices.

Had UFG not alerted me to the bug, the code would have gone to production, and the result would have affected the usability of our flagship product on a Mac. This would have reflected badly on the professionalism of the company and would have damaged the company's reputation, trust in our product, and sales.

Understanding the problem

In recent months, we improved Applitools Eyes’ ability to perform visual testing on images which are semi-transparent. In the past, Eyes would test an entire screen or defined region, but now using the Storybook SDK, users can automatically test each component separately, without needing to define a test for each component.

For example, when testing a gallery component, Eyes can identify visual bugs and regressions over all screen elements, including the appearance of buttons, controls, fonts, shadows, images, as well as backgrounds that include a transparency gradient.

Figure 1: Transparent Background

After implementing the transparency feature, a visual bug was reported. In a screen capture of a semi-transparent screen region, unexpected grid lines appeared on top of the tested image.

The root cause of these lines wasn't clear, so as a first step, we developed a test plan to reproduce the issue. I created a semi-transparent image, all gray (rgb = 127,127,127), with a constant alpha (transparency) channel (alpha = 0.5). Fortunately, the bug was easily reproduced, and I managed to create easily identifiable grid lines.

As I experimented with different transparency settings, it was clear that the color of the grid lines was the same as the color of the image, and it became stronger as the image transparency was lower.

After further investigation, I discovered that the image viewer component uses tiles to represent large images, and the tiles had a one-pixel overlap. In the past, when all images were RGB images with no transparency, the overlapping pixel was not visible to the human eye. When I added semi-transparency, the adjacent tiles were stacked on top of each other along the overlap, and sampling the resulting color inside the grid line produced a grayscale value of about 160 (versus roughly 192 in the surrounding area), which is the exact outcome of stacking two half-transparent gray layers over a white background:

(white ⋅ 0.5 + gray ⋅ 0.5) ⋅ 0.5 + gray ⋅ 0.5 = 159.75 ≈ 160
given white := 255, gray := 128

Solving the preliminary bug

To resolve this bug, I recalculated the position and scale of each region so that there would not be an overlap and the line between regions would not appear.
For example, if the first tile had width: 480px and left: 0, the next adjacent tile should be positioned using left: 480, so that there is a zero pixel overlap between tiles.
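
In code, the zero-overlap layout amounts to something like this (an illustrative sketch, not the actual viewer code; the .tile selector is hypothetical):

// Lay tiles edge to edge: tile i starts exactly where tile i-1 ends
const tiles = Array.from(document.querySelectorAll('.tile')); // hypothetical selector
const tileWidth = 480;
tiles.forEach((tile, i) => {
  // Tile 0 at left: 0, tile 1 at left: 480, ... - a zero-pixel overlap
  tile.style.left = `${i * tileWidth}px`;
  tile.style.width = `${tileWidth}px`;
});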

I tested the results on my local (Linux) machine and assumed that the issue was resolved.

I didn’t realize that when I fixed this bug, I had also created a new issue which would have been almost impossible to anticipate.

How UFG identified the bug I created before deployment

At Applitools, we understand the importance of quality visual testing across browsers, so before deployment, every code change that impacts the Eyes interface must be tested by Applitools Eyes using UFG.

We are proud to “eat our own dogfood.” We rely on our visual testing tools to make sure that our products are visually perfect before release.

Our integration pipeline is configured to use UFG to test the UI change on multiple devices and screen settings so that we can confirm that the interface is consistent on every browser, operating system, and screen size.

We discovered that fixing the one-pixel overlap created a new bug on certain systems, where a gap was visible between tiles. Frustratingly, this bug was not reproducible on any of the devices used in development and could not have been discovered with conventional manual visual testing.

The bug was only visible on screens with Retina display, which uses HiDPI.

What was interesting about this bug is that it highlighted an inconsistency in the way the same browser (Chrome) displays the same UI on different screen types.

What happened?

The bug and the solution

After some research, it turns out that there is (seemingly) a bug in the way Chrome behaves on Mac computers with Retina display (see 1, 2, 3). It turns out that using percentages or fractions of pixels for positioning and scaling of elements can lead to unexpected results.

So, what is the solution?

The solution itself is very elegant – all we had to do was to round each canvas scaling so that the canvas size would always be an integer:

// Round the scale so that (scale * canvasSize) is always an integer number of pixels
scale = Math.round(scale * canvasSize) / canvasSize;

Thus, if the width of the canvas is 480 and our scale factor is 0.17, the width of our scaled canvas would not be 480 * 0.17 = 81.6, but would be 82 – this way we maintain compatibility with Retina displays and prevent unwanted gaps from being created.

This bug was easy to resolve once we were aware of it, but without UFG we would never have identified it using any of our test computers.

Conclusion

Maintaining a quality front end for all configurations is an ongoing challenge in every company and every organization.

Solving a bug for one audience can create a bigger bug for a wider audience. In this article, we saw a classic example of a malfunction, where the initial solution we implemented only made things worse.

The number of users who use Applitools Eyes to test semi-transparent components is significantly lower than the number of Eyes users who work with Retina displays (most Apple users) – so the initial approach we took to solve the problem could have caused significantly more harm than good. Even worse, we could have caused significant damage to the user experience without knowing about it. No modern organization wants to rely on frustrated customer feedback to discover bugs in their application or websites.

Using UFG reduces the likelihood that errors of this type will pass under the radar and allows developers, product managers, and all stakeholders in the development process to significantly reduce the fear factor in deploying new features. The UFG is insurance against platform-dependent visual bugs and provides the ability to perform true multi-platform coverage.

Don't wait to discover your visual bugs from user reports. We invite you to try UFG – our team of experts is here to help with any questions or problems and to assist you in migrating Applitools Eyes and UFG into your integration pipeline. For more information, see Introduction to the Ultrafast Grid in the Applitools Knowledge Center.

Ultrafast Cross Browser Testing with Selenium Java
https://app14743.cloudwayssites.com/blog/cross-browser-testing-selenium/ | Fri, 09 Sep 2022


Learn why cross-browser testing is so important and an approach you can take to make cross-browser testing with Selenium much faster.

What is Cross Browser Testing?

Cross-browser testing is a form of functional testing in which an application is tested on multiple browsers (Chrome, Firefox, Edge, Safari, IE, etc.) to validate that functionality performs as expected.

In other words, it is designed to answer the question: Does your app work the way it’s supposed to on every browser your customers use?

Why is Cross Browser Testing Important?

While modern browsers generally conform to key web standards today, important problems remain. Differences in interpretations of web standards, varying support for new CSS or other design features, and rendering discrepancies between the different browsers can all yield a user experience that is different from one browser to the next.

A modern application needs to perform as expected across all major browsers. Not only is this a baseline user expectation these days, but it is critical to delivering a positive user experience and a successful app.

At the same time, the number of screen combinations (between screen sizes, devices and versions) is rising quickly. In recent years the number of screens required to test has exploded, rising to an industry average of 81,480 screens and reaching 681,296 for the top 30% of companies.

Ensuring complete coverage of each screen on every browser is a common challenge. Effective and fast cross-browser testing can help alleviate the bottleneck from all these screens that require testing.

Source: 2019 State of Automated Visual Testing

How to Perform Modern Cross Browser Testing in Selenium with Visual Testing

Traditional approaches to cross-browser testing in Selenium have existed for a while, and while they still work, they have not scaled well to handle the challenge of complex modern applications. They can be time-consuming to build, slow to execute and challenging to maintain in the face of apps that change frequently.

Applitools Developer Advocate and Test Automation University Director Andrew Knight (AKA Pandy Knight) recently conducted a hands-on workshop where he explored the history of cross-browser testing, its evolution over time and the pros and cons of different approaches.

Andrew then explores a modern cross-browser testing solution with Selenium and Applitools. He walks you through a live demo (which you can replicate yourself by following his shared Github repo) and explains the benefits and how to get started. He also covers how you can accelerate test automation with integration into CI/CD to achieve Continuous Testing.

Check out the workshop below, and follow along with the Github repo here.

More on Cross Browser Testing in Cypress, Playwright or Storybook

At Applitools we are dedicated to making software testing faster and easier so that testers can be more effective and apps can be visually perfect. That’s why we use our industry-leading Visual AI and built the Applitools Ultrafast Grid, a key component of the Applitools Test Cloud that enables ultrafast cross-browser testing. If you’re looking to do cross-browser testing better but don’t use Selenium, be sure to check out these links too for more info on how we can help:

What is Cross Browser Testing? Examples & Best Practices
https://app14743.cloudwayssites.com/blog/guide-to-cross-browser-testing/ | Thu, 14 Jul 2022


In this guide, learn everything you need to know about cross-browser testing, including examples, a comparison of different implementation options and how you can get started with cross-browser testing today.

What is Cross Browser Testing?

Cross-browser testing is a testing method for validating that the application under test works as expected on different browsers and devices, and at varying viewport sizes. It can be done manually or as part of a test automation strategy. The tooling required for this activity can be built in-house or provided by external vendors.

Why is Cross Browser Testing Important?

When I began in QA I didn’t understand why cross-browser testing was important. But it quickly became clear to me that applications frequently render differently at different viewport sizes and with different browser types. This can be a complex issue to test effectively, as the number of combinations required to achieve full coverage can become very large.

A Cross Browser Testing Example

Here’s an example of what you might look for when performing cross-browser testing. Let’s say we’re working on an insurance application. I, as a user, should be able to view my insurance policy details on the website, using any browser on my laptop or desktop. 

This should be possible while ensuring:

  • The features remain the same
  • The look and feel, UI or cosmetic effects are the same
  • Security standards are maintained

How to Implement Cross Browser Testing 

There are various aspects to consider while implementing your cross-browser testing strategy.

Understand the scope == Data!

"Different devices and browsers: Chrome, Safari, Firefox, Edge"

Thankfully, IE is not on the list anymore (for most)!

You should first figure out the important combinations of devices, browsers, and viewport sizes your userbase is accessing your application from.

PS: Each team member should have access to the product's analytics data to understand usage patterns. This data, which includes OS and browser details (type, version, viewport sizes), is essential for planning and testing proactively, instead of reacting to situations (= defects) later.

This will tell you the different browser types, browser versions, devices, and viewport sizes you need to consider in your testing and test automation strategy.

Cross Browser Testing Techniques

There are various ways you can perform cross-browser testing. Let’s understand them.

Local Setup -> On a Single (Dev / QA Machine)

We usually have multiple browsers on our laptops / desktops. While there are other ways to get started, it is probably simplest to start implementing your cross-browser tests here. You also need a local setup to enable debugging and maintaining / updating the tests.

If mobile web is part of the strategy, then you also need the relevant setup available on local machines to enable that.

Setting up the Infrastructure

While this may seem the easiest, it can get out of control very quickly. 

Examples:

  • You may not be able to install all supported browsers on your computer (ex: Safari is not supported on Windows OS). 
  • Browser vendors keep releasing new versions very frequently. You need to keep your browser drivers in sync with this.
  • Maintaining / using older versions of the browsers may not be very straightforward.
  • If you need to run tests on mobile devices, you may not have access to all the variety of devices. So setting up local emulators may be a way to proceed.

The choices can actually vary based on the requirements of the project and on a case by case basis.

As alternatives, we can either build an in-house testing solution or go for a platform / license / third-party tool to support our device farm needs.

In-House Setup of Central Infrastructure

You can set up a central infrastructure of browsers and emulators or real devices in your organization that can be leveraged by the teams. You will also need some software to manage the usage and allocation of these browsers and devices. 

This infrastructure can potentially be used in the following ways:

  • Triggered from local machine
    Tests can be triggered from any dev / QA machine to run on the central infrastructure.
  • For CI execution
    Tests triggered via Continuous Integration (CI), like Jenkins, CircleCI, Azure DevOps, TeamCity, etc. can be run against browsers / emulators setup on the central infrastructure. 

Cloud Solution    

You can also opt to run the tests against browsers / devices in a cloud-based solution. You can select different device / browser options offered by various providers in the market that give you the wide coverage as per your requirements, without having to build / maintain / manage the same. This can also be used to run tests triggered from local machines, or from CI.

Modern, AI-Based Cross Browser Testing Solution: Applitools Ultrafast Test Cloud 

It is important to understand the evolution of browsers in recent years. 

  • They have started conforming to the W3C standard. 
  • They seem to have started adopting Continuous Delivery – well, at least releasing new versions at a very fast pace, sometimes multiple versions a week.
  • In a major development, many major browsers are adopting and building on the Chromium codebase. This makes these browsers very similar, except for rendering, which is still quite browser-specific.

We need to factor this change into our cross-browser testing strategy.

In addition, AI-based cross-browser testing solutions, which use machine learning to help scale your automation execution and provide deep insights into the results – from a functional, performance, and user-experience perspective – are becoming quite popular.

To get hands-on experience with this, I signed up for a free Applitools account, which uses a powerful Visual AI, and implemented a few tests using this tutorial as a reference.

How Does Applitools Visual AI Work as a Solution for Cross Browser Testing

Integration with Applitools

Integrating Applitools with your functional automation is extremely easy. Simply select the relevant Applitools SDK based on your functional automation tech stack from here, and follow the detailed tutorial to get started.

Now, at any place in your test execution where you need functional or visual validation, add methods like eyes.checkWindow(), and you are set to run your test against any browser or device of your choice.

Reference: https://app14743.cloudwayssites.com/tutorials/overview/how-it-works.html

AI-Based Cross Browser Testing

Now you have your tests ready and running against a specific browser or device, scaling for cross-browser testing is the next step.

What if I told you that, just by adding the different device combinations, you could leverage the same single script to get functional and visual test results across every combination specified, covering the cross-browser testing aspect as well?

Seems too far-fetched?

It isn’t. That is exactly what Applitools Ultrafast Test Cloud does!

Adding the lines of code below will do the magic. You can also change the configuration as per your requirements.

(The example below is from the Selenium-Java SDK. Similar configuration can be supplied for the other SDKs.)

// Add browsers with different viewports
config.addBrowser(800, 600, BrowserType.CHROME);
config.addBrowser(700, 500, BrowserType.FIREFOX);
config.addBrowser(1600, 1200, BrowserType.IE_11);
config.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
config.addBrowser(800, 600, BrowserType.SAFARI);

// Add mobile emulation devices in Portrait mode
config.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
config.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);

// Set the configuration object on eyes
eyes.setConfiguration(config);

Now when you run the test again, say against Chrome browser on your laptop, in the Applitools dashboard, you will see results for all the browser and device combinations provided above.

You may be wondering: the test ran just once, on the Chrome browser. How did the results from all the other browsers and devices come up? And so fast?

This is what Applitools Ultrafast Grid (a part of the Ultrafast Test Cloud) does under the hood:

  • When the test starts, the browser configuration is passed from the test execution to the Ultrafast Grid.
  • For every eyes.checkWindow call, the information captured (DOM, CSS, etc.) is sent to the Ultrafast Grid.
  • The Ultrafast Grid will render the same page / screen on each browser / device provided by the test – (think of this as playing a downloaded video in airplane mode).
  • Once rendered in each browser / device, a visual comparison is done and the results are sent to the Applitools dashboard.

What I like about this AI-based solution is that:

  • I create my automation scripts for different purposes – functional, visual, and cross-browser testing – in one go
  • There is no need to maintain devices
  • There is no need to create different setups for different types of testing
  • The AI algorithms start providing results from the first run – "no training required"
  • I can leverage the solution on any kind of setup
    • i.e., running the scripts through my IDE, terminal, or CI/CD
  • I can leverage the solution for web, mobile web, and native apps
  • I can integrate visual testing results as part of my CI execution
  • Rich information is available in the dashboard, including ease of updating the baselines, doing Root Cause Analysis, reporting defects in Jira or Rally, etc.
  • I can ensure there are no contrast issues (part of accessibility testing) in my execution at scale

Here is the screenshot of the Applitools dashboard after I ran my sample tests:

Cross Browser Testing Tools and Applitools Visual AI

The Ultrafast Grid and Applitools Visual AI can be integrated into many popular free and open-source test automation frameworks to easily supercharge their effectiveness as cross-browser testing tools.

Cross Browser Testing in Selenium

As you saw above in my code sample, Ultrafast Grid is compatible with Selenium. Selenium is the most popular open source test automation framework. It is possible to perform cross browser testing with Selenium out of the box, but Ultrafast Grid offers some significant advantages. Check out this article for a full comparison of using an in-house Selenium Grid vs using Applitools.

Cross Browser Testing in Cypress

Cypress is another very popular open source test automation framework. However, it can only natively run tests against a few browsers at the moment – Chrome, Edge and Firefox. The Applitools Ultrafast Grid allows you to expand this list to include all browsers. See this post on how to perform cross-browser tests with Cypress on all browsers.

Cross Browser Testing in Playwright

Playwright is an open source test automation framework that is newer than both Cypress and Selenium, but it is growing quickly in popularity. Playwright has some limitations on doing cross-browser testing natively, because it tests “browser projects” and not full browsers. The Ultrafast Grid overcomes this limitation. You can read more about how to run cross-browser Playwright tests against any browser.

Pros and Cons of Each Technique (Comparison Table)

Infrastructure
  • Local Setup – Pros: fast feedback on the local machine. Cons: setup must be repeated for each machine where the tests need to execute; not all configurations can be set up locally.
  • In-House Setup – Pros: no inbound / outbound connectivity required. Cons: needs considerable effort to set up, maintain, and update the infrastructure on a continued basis.
  • Cloud Solution – Pros: no effort required to build / maintain / update the infrastructure. Cons: needs inbound and outbound connectivity from the internal network; latency issues may be seen as requests go to cloud-based browsers / devices.
  • AI-Based Solution (Applitools) – Pros: no effort required to set up.

Setup and Maintenance
  • Local Setup – Handled by each team member from time to time, including OS / browser version updates.
  • In-House Setup – Handled by the internal team from time to time, including OS / browser version updates.
  • Cloud Solution – Handled by the service provider.
  • AI-Based Solution (Applitools) – Handled by the service provider.

Speed of Feedback
  • Local Setup – Slowest, as all dependencies must be taken care of and the test needs to be repeated for each browser / device combination.
  • In-House Setup – Depends on concurrent usage due to multiple test runs.
  • Cloud Solution – Depends on network latency; network issues may cause intermittent failures, and feedback depends on the reliability and connectivity of the service provider.
  • AI-Based Solution (Applitools) – Fast, with seamless scaling.

Security
  • Local Setup – Best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • In-House Setup – Best, as everything is in-house, using internal firewalls, VPNs, network, and data storage.
  • Cloud Solution – High risk: needs inbound network access from the service provider to the internal test environments; browsers / devices have access to the data generated by running the tests, so cleanup is essential; no control over who has access to the cloud service provider’s infrastructure, or whether they access your internal resources.
  • AI-Based Solution (Applitools) – Low risk: there is no inbound connection to your internal infrastructure, and tests run on the internal network, so no data sits on Applitools servers other than the screenshots used for comparison with the baseline.

My Learning from this Experience

  • A good cross browser testing strategy allows you to reduce the risk of functionality and visual experience not working as expected on the browsers and devices used by your users. A good strategy will also optimize the testing effort required to do this. To allow this, you need data that provides insights about your users.
  • Having a holistic view of how your team will be leveraging cross browser testing (ex: manual testing, automation, local executions, CI-based execution, etc.) is important to know before you start off with your implementation.
  • Sometimes the easiest way may not be the best – ex: automating against the browsers on your own computer will not scale. At the same time, using technology like the Applitools Ultrafast Test Cloud is very easy – you end up writing less code and get increased functional and visual coverage at scale.
  • You need to think about the ROI of your approach and whether it achieves the objectives of cross browser testing (a rough illustrative calculation follows this list). ROI calculation should include:
    • Effort to implement, maintain, execute and scale the tests
    • Effort to set up and maintain the infrastructure (hardware and software components)
    • Ability to get deterministic & reliable feedback from test execution
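
As a purely illustrative back-of-the-envelope sketch of such a calculation (every number below is an assumption, not a benchmark):

// Rough monthly hours for one suite run across 10 browser/device combos.
const combos = 10;
const runsPerMonth = 20;
const hoursPerRun = 0.5;

const selfHostedGrid = {
  infraMaintenance: 15,                           // grid upkeep, browser updates
  execution: hoursPerRun * combos * runsPerMonth, // every run repeated per combo
  flakeTriage: 10,
};

const renderOnceCloud = {
  infraMaintenance: 0,
  execution: hoursPerRun * runsPerMonth,          // one run, rendered everywhere
  flakeTriage: 2,
};

const total = (costs) => Object.values(costs).reduce((sum, h) => sum + h, 0);
console.log(`${total(selfHostedGrid)} vs ${total(renderOnceCloud)} hours/month`); // 125 vs 12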

Summary

Depending on your project strategy, scope, manual or automation requirements, and of course the hardware or infrastructure combinations, you should make a choice that not only suits the requirements but also gives you the best returns and results.

Based on my past experiences, I am very excited about the Applitools Ultrafast Test Cloud – a unique way to scale test automation seamlessly. In the process, I ended up writing less code and got amazingly high test coverage with very high accuracy. I recommend that everyone try it and experience it for themselves!

Get Started Today

Want to get started with Applitools today? Sign up for a free account and check out our docs to get up and running today, or schedule a demo and we’ll be happy to answer any questions you may have.

Editor’s Note: This post was originally published in January 2022, and has been updated for accuracy and completeness.

The post What is Cross Browser Testing? Examples & Best Practices appeared first on AI-Powered End-to-End Testing | Applitools.

Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid https://app14743.cloudwayssites.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/ Wed, 29 Jun 2022 15:00:00 +0000 https://app14743.cloudwayssites.com/?p=39529 How can you choose the best cross-browser testing tool? We'll review the challenges of cross-browser testing and consider leading solutions.

The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on AI-Powered End-to-End Testing | Applitools.


How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or an app using a single browser or device will lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may lead to production issues and lots of visual bugs.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach into the thousands. Performing manual testing on this vast number of possibilities is no longer feasible, and neither is just running the usual functional testing scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Basically, because rendering differs between browsers and modern web apps have responsive designs. You also have to consider that each web browser handles JavaScript differently and may render things differently depending on the viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run multiple tests in parallel on different browsers, and this is fine from a functional point of view.

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. On the other hand, Visual Testing allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not capture the visual aspect and could lead to a lack of coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may encounter visual bugs that lead to false negatives – meaning your testing was done, your tests passed, and you still did not catch the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities or scenarios, you need a tool or framework that not only runs tests but provides reliable feedback – and not just false positives or tests pending to be approved or rejected. 

When it comes to cross-browser testing, you have several options, same as for visual testing. In this article, we will explore some of the most popular cross-browser testing tools. 

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 

Now, be aware: maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties or issues when running and maintaining hundreds of browsers/nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are really expensive, which means that you will need a dedicated project budget just for running your UI tests on their cloud – not to mention the packages or plans you’ll have to acquire to run a decent number of parallel tests.

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times on each and every browser/device that you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time, and you now need to run each test 10 times on 10 different environments, you are adding much more failure and maintenance overhead – see the worked example after this list).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that is running the test and the browser in the cloud for the entire test execution time. Many grid solutions have reliability issues around this, causing environment/connection failures on some tests; when executing tests at scale, this results in additional failures that the team needs to analyze.
  3. If you try to use a cloud-based Selenium Grid solution to test an internal application, you need to set up a tunnel from the cloud grid to your company’s network, which creates a security risk and adds performance/reliability issues.
  4. Another critical factor for traditional “WebDriver-as-a-Service” platforms is speed. Tests can take 2-4x as much time to complete on those platforms compared to running them on local machines.
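
To put rough numbers on the flakiness point in item 1 (the 5% rate is an assumption for illustration):

// If a single run fails spuriously 5% of the time, repeating the test
// across 10 environments makes some spurious failure far more likely.
const flakeRate = 0.05;
const environments = 10;

const pAllPass = (1 - flakeRate) ** environments; // ≈ 0.60
const pAnyFail = 1 - pAllPass;                    // ≈ 0.40

console.log(`Chance of at least one spurious failure: ${(pAnyFail * 100).toFixed(0)}%`);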

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Test Grid, the benefit multiplies. This improved efficiency guarantees delivery of high-quality apps, on time and without the need for multiple suites or test scripts.

Think about the time it currently takes to complete a full testing cycle on your end using traditional cross-browser testing solutions – installing, writing, running, analyzing, reporting on, and maintaining your tests. Engineers now have the Ultrafast Grid and Visual AI technology, which can be easily set up in your framework and is capable of testing large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or add-on that you have to pay for. What this feature does is basically take screenshots for you to compare with other screenshots previously taken. You can imagine the amount of time it takes to accept or reject all these tests – and most of them will not necessarily bring useful intel, as the website or app may not change from one day to another.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, etc.) and are much smaller than screenshots, so they upload faster.

To learn more about the Ultrafast Grid functionality and configuration, take a look at this article > https://app14743.cloudwayssites.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  2. The Ultrafast Grid does not always upload a snapshot for every page. If a page’s resources didn’t change, the Ultrafast Grid doesn’t upload them again. Since most page resources don’t change from one test run to another, there’s less to transfer, and upload times are measured in milliseconds.
  3. As mentioned above, with the Applitools Ultrafast Grid you only need to run the test once, and you’ll get the results from all browsers/devices. Now that most browsers are W3C compliant, the chances of facing functional differences between browsers (e.g. a button clicks on one browser and doesn’t click on another) are negligible, so it’s sufficient to run the functional tests just once – this will still find the common browser compatibility issues like rendering/visual differences between browsers.
  4. You can use one algorithm on top of the other. Other solutions only offer the possibility of setting a level of comparison based on three modes – Strict, Suggested (Normal), or Relaxed – and this is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? This is possible using the Applitools Region Types feature.
  5. All of the above happens on multiple browser and device combinations at the same time. This is possible using the Ultrafast Grid configuration. For more information, check out this article > https://app14743.cloudwayssites.com/docs/topics/sdk/vg-configuration.html
  6. Applitools offers a free version that allows you to use most of the framework’s features. This is really helpful, as you will be able to explore high-level features like Visual AI, cross-browser testing, and visual testing without having to worry about the minutes left on your free trial, as with other solutions.
  7. One of the unique features of Applitools is the power of the automated maintenance capabilities that prevent the need to approve or reject the same change across different screens/devices. This reduces the overhead involved with managing baselines from different browsers and device configurations.

Final Thoughts

Selenium Grid solutions are everywhere, and the price varies between vendors and features. If you have infinite time, infinite resources, and an infinite budget, it would be ideal to run all the tests on all the browsers and analyze the results on every code change/build. But for a company trying to optimize velocity and run tests on every pull request/build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on AI-Powered End-to-End Testing | Applitools.

Our Best Test Automation Videos of 2022 (So Far) https://app14743.cloudwayssites.com/blog/best-test-automation-videos-2022/ Fri, 20 May 2022 21:07:55 +0000 https://app14743.cloudwayssites.com/?p=38624 Check out some of our most popular events of the year. All feature test automation experts sharing their knowledge and their stories.

The post Our Best Test Automation Videos of 2022 (So Far) appeared first on AI-Powered End-to-End Testing | Applitools.


We’re approaching the end of May, which means we’re just a handful of weeks from the midpoint of 2022 already. If you’re like me, you’re wondering where the year has gone. Maybe it has to do with life in the northeastern US, where I live and where we’ve really just had our first week of warm weather. Didn’t winter just end?

As always, the year is flying by, and it can be hard to keep up with all the great videos or events you might have wanted to watch or attend. To help you out, we’ve rounded up some of our most popular test automation videos of 2022 so far. These are all top-notch workshops or webinars with test automation experts sharing their knowledge and their stories – you’ll definitely want to check them out.

Cross Browser Test Automation

Cross-browser testing is a well-known challenge to test automation practitioners. Luckily, Andy Knight, AKA the Automation Panda, is here to walk you through a modern approach to getting it done. Whether you use Cypress, Playwright, or are testing Storybook components, we have something for you.

Modern Cross Browser Testing with Cypress

For more, see this blog post: How to Run Cross Browser Tests with Cypress on All Browsers (plus bonus post specifically covering the live Q&A from this workshop).

Modern Cross Browser Testing in JavaScript Using Playwright

For more, see this blog post: Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser.

Modern Cross Browser Testing for Storybook Components

For more, see this blog post: Testing Storybook Components in Any Browser – Without Writing Any New Tests!

Test Automation with GitHub or Chrome DevTools

GitHub and Chrome DevTools are both incredibly popular with the developer and testing communities – odds are if you’re reading this you use one or both on a regular basis. We recently spoke with developer advocates Rizel Scarlett of GitHub and Jecelyn Yeen of Google as they explained how you can leverage these extremely popular tools to become a better tester and improve your own testing experience. Click through for more info about each video and get watching.

Make Testing Easy with GitHub

For more, see this blog post: Using GitHub Copilot to Automate Tests.

Automating Tests with Chrome DevTools Recorder

For more, see this blog post: Creating Your First Test With Google Chrome DevTools Recorder.

Test Automation Stories from Our Customers

When it comes to implementing and innovating around test automation, you’re never alone, even though it doesn’t always feel that way. Countless others are struggling with the same challenges that you are and coming up with solutions. Sometimes all it takes is hearing how someone else solved a similar problem to spark an idea or gain a better understanding of how to solve your own.

Accelerating Visual Testing

Nina Westenbrink, Software Engineer at a leading European telecom, talks about how visual testing of the company’s design system was made faster and simpler, offering helpful tips and tricks along the way. Nina also speaks about her career as a woman in testing and how to empower women and overcome biases in software engineering.

Continuously Testing UX for Enterprise Platforms

Govind Ramachandran, Head of Testing and Quality Assurance for Asia Technology Services at Manulife Asia, discusses challenges around UI/UX testing for enterprise-wide digital programs. Check out his blueprint for continuous testing of the customer experience using Figma and Applitools.

This is just a taste of our favorite videos that we’ve shared with the community from 2022. What were yours? You can check out our full video library here, and let us know your own favorites @Applitools.

The post Our Best Test Automation Videos of 2022 (So Far) appeared first on AI-Powered End-to-End Testing | Applitools.

Testing Storybook Components in Any Browser – Without Writing Any New Tests! https://app14743.cloudwayssites.com/blog/storybook-components-cross-browser-testing/ Thu, 31 Mar 2022 20:28:00 +0000 https://app14743.cloudwayssites.com/?p=36060 Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.

The post Testing Storybook Components in Any Browser – Without Writing Any New Tests! appeared first on AI-Powered End-to-End Testing | Applitools.


Learn how to automatically do ultrafast cross-browser testing for Storybook components without needing to write any new test automation code.

Let’s face it: modern web apps are complex. If a team wants to provide a seamless user experience on a deadline, they need to squeeze the most out of the development resources they have. Component libraries help tremendously. Developers can build individual components for small things like buttons and big things like headers to be used anywhere in the frontend with a consistent look and feel.

Storybook is one of the most popular tools for building web components. It works with all the popular frameworks, like React, Angular, and Vue. With Storybook, you can view tweaks to components as you develop their “stories.” It’s awesome! However, manually inspecting components only works small-scale when you, as the developer, are actively working on any given component. How can a team test their Storybook components at scale? And how does that fit into a broader web app testing strategy?

What if I told you that you could automatically do cross-browser testing for Storybook components without needing to define any new tests or write any new automation code? And what if I told you that it could fit seamlessly into your existing development workflow? You can do this with the power of Applitools and your favorite CI tool! Let’s see how.

Adding Visual Component Testing to Your Strategy

Historically, web app testing strategies divide functional testing into three main levels:

  1. Unit testing
  2. Integration testing for APIs
  3. End-to-end testing for UIs and APIs

These three levels make up the classic Testing Pyramid. Each level of testing mitigates a unique type of risk. Unit tests pinpoint problems in code, integration tests catch problems where entities meet, and end-to-end tests exercise behaviors like a user.

The rise of frontend component libraries raises an interesting question: Where do components belong among these levels? Components are essentially units of the UI. In that sense, they should be tested individually as “UI units” to catch problems before they become widespread across multiple app views. One buggy component could unexpectedly break several pages. However, to test them properly, they should be rendered in a browser as if they were “live.” They might even call APIs indirectly. Thus, arguably, component testing should be sandwiched between traditional integration and end-to-end testing.

Web app testing levels, showing where component testing belongs in relation to other levels.

Wait, another level of testing? Nobody has time for that! It’s hard enough to test adequate coverage at the three other levels, let alone automate those tests. Believe me, I understand the frustration. Unfortunately, component libraries bring new risks that ought to be mitigated.

Thankfully, Applitools provides a way to visually test all the components in a Storybook library with the Applitools Eyes SDK for Storybook. All you need to do is install the @applitools/eyes-storybook package into your web app project, configure a few settings, and run a short command to launch the tests. Applitools Eyes will turn each story for each component into a visual test case. On the first run, it will capture a visual snapshot for each story as a “baseline” image. Then, subsequent runs will capture “checkpoint” snapshots and use Visual AI to detect any changes. You don’t need to write any new test code – tests become a side effect of creating new components and stories!
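
For context, a story is just a named export that renders one state of a component. Here’s what one might look like – a hypothetical Button story in Component Story Format; eyes-storybook would turn each named export below (Primary, Secondary) into its own visual test case:

// Button.stories.jsx – the Button component and its args are hypothetical.
import React from 'react';
import { Button } from './Button';

export default {
  title: 'Example/Button',
  component: Button,
};

const Template = (args) => <Button {...args} />;

// Each named story export becomes a visual test case.
export const Primary = Template.bind({});
Primary.args = { primary: true, label: 'Button' };

export const Secondary = Template.bind({});
Secondary.args = { label: 'Button' };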

In this sense, visual component testing with Applitools is like autonomous testing. Test generation and execution is completely automated, and humans review the results. Since testing can be done autonomously, component testing is easy to add to an existing testing strategy. It mitigates lots of risk for low effort. Since it covers components very well, it can also reduce the number of tests at other layers. Remember, the goal of a testing strategy is not to cover all the things but rather to use available resources to mitigate as much risk as possible. Covering a whole component library with an autonomous test run frees up folks to focus on other areas.

Adding Applitools Eyes to Your Web App

Let’s walk through how to set up visual component tests for a Storybook library. You can follow the steps below to add visual component tests to any web app that has a Storybook library. Give it a try on one of your own apps, or follow along with the example React app I use below. You’ll also need Node.js installed as a prerequisite.

To get started, you’ll need an Applitools account to run visual tests. If you don’t already have an Applitools account, you can register for free using your email or GitHub account. That will let you run visual tests with basic features.

Once you get your account, store your API key as an environment variable. On macOS or Linux, use this command:

export APPLITOOLS_API_KEY=<your-api-key>

On Windows:

set APPLITOOLS_API_KEY=<your-api-key>

Next, you need to add the eyes-storybook package to your project. To install this package into a new project, run:

npm install --save-dev @applitools/eyes-storybook

Finally, you’ll need to add a little configuration for the visual tests. Add a file named applitools.config.js to the project’s root directory, and add the following contents:

module.exports = {
    concurrency: 1,
    batchName: "Visually Testing Storybook Components"
}

The concurrency setting defines how many visual snapshot comparisons the Applitools Ultrafast Test Cloud will perform in parallel. (With a free account, you are limited to 1.) The batchName setting defines a name for the batch of tests that will appear in the Applitools dashboard. You can learn about these settings and more under Advanced Configuration in the docs.

That’s it! Now, we’re ready to run some tests. Launch them with this command:

npx eyes-storybook

Note: If your components use static assets like image files, then you will need to append the -s option with the path to the directory for static files. In my example React app, this would be -s public.

The command line will print progress as it tests each story. Once testing is done, you can see all the results in the Applitools dashboard:

Results for visual component tests establishing baseline snapshots.

Run the tests a second time for checkpoint comparisons:

Results for visual component tests comparing checkpoints to baselines.

If you change any of your components, then tests should identify the changes and report them as “Unresolved.” You can then visually compare differences side-by-side in the Applitools dashboard. Applitools Eyes will highlight the differences for you. Below is the result when I changed a button’s color in my React app:

Comparing visual differences between two buttons after changing the color.

You can give the changes a thumbs-up if they are “right” or a thumbs-down if they are due to a regression. Applitools makes it easy to pinpoint changes. It also provides auto-maintenance features to minimize the number of times you need to accept or reject changes.

Adding Cross-Browser Tests for All Components

When Applitools performs visual testing, it captures snapshots from tests running on your local machine, but it does everything else in the Ultrafast Test Cloud. It rerenders those snapshots – which contain everything on the page – against different browser configurations and uses Visual AI to detect any changes relative to baselines.

If no browsers are specified for Storybook components, Applitools will run visual component tests against Google Chrome running on Linux. However, you can explicitly tell Applitools to run your tests against any browser or mobile device.

You might not think you need to do cross-browser testing for components at first. They’re just small “UI units,” right? Well, however big or small, different browsers render components differently. For example, a button may have rectangular edges instead of round ones. Bigger components are more susceptible to cross-browser inconsistencies. Think about a navbar with responsive rendering based on viewport size. Cross-browser testing is just as applicable for components as it is for full pages.

Configuring cross-browser testing for Storybook components is easy. All you need to do is add a list of browser configs to your applitools.config.js file like this:

module.exports = {
  concurrency: 1,
  batchName: "Visually Testing Storybook Components",
  browser: [
    // Desktop
    {width: 800, height: 600, name: 'chrome'},
    {width: 700, height: 500, name: 'firefox'},
    {width: 1600, height: 1200, name: 'ie11'},
    {width: 1024, height: 768, name: 'edgechromium'},
    {width: 800, height: 600, name: 'safari'},
    // Mobile
    {deviceName: 'iPhone X', screenOrientation: 'portrait'},
    {deviceName: 'Pixel 2', screenOrientation: 'portrait'},
    {deviceName: 'Galaxy S5', screenOrientation: 'portrait'},
    {deviceName: 'Nexus 10', screenOrientation: 'portrait'},
    {deviceName: 'iPad Pro', screenOrientation: 'landscape'},
  ]
}

This declaration includes ten unique browser configurations: five desktop browsers with different viewport sizes, and five mobile devices in portrait or landscape orientation. Every story will run against every specified browser. If you run the test suite again, there will be ten times as many results!

Results from running visual component tests across multiple browsers.

As shown above, my batch included 90 unique test instances. Even though that’s a high number of tests, Applitools Ultrafast Test Cloud ran them in only 32 seconds! That really is ultrafast for UI tests.

Running Visual Component Tests Autonomously

Applitools Eyes makes it easy to run visual component tests, but to become truly autonomous, these tests should be triggered automatically as part of regular development workflows. Any time someone makes a change to these components, tests should run, and the team should receive feedback.

We can configure Continuous Integration (CI) tools like Jenkins, CircleCI, and others for this purpose. Personally, I like to use GitHub Actions because they work right within your GitHub repository. Here’s a GitHub Action I created to run visual component tests against my example app every time a change is pushed or a pull request is opened for the main branch:

name: Run Visual Component Tests

on:
  push:
  pull_request:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2
      
      - name: Set up Node.js
        uses: actions/setup-node@v2

      - name: Install dependencies
        run: npm install

      - name: Run visual component tests
        run: npx eyes-storybook -s public
        env:
          APPLITOOLS_API_KEY: ${{ secrets.APPLITOOLS_API_KEY }}

The only extra configuration needed was to add my Applitools API key as a repository secret.

Maximizing Your Testing Value

Components are just one layer of complex modern web apps. A robust testing strategy should include adequate testing at all levels. Thankfully, visual testing with Applitools can take care of the component layer with minimal effort. Unit tests can cover how the code works, such as a component’s play method. Integration tests can cover API requests, and end-to-end tests can cover user-centric behaviors. Tests at all these levels together provide great protection for your app. Don’t neglect any one of them!
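
For instance, a story’s play method scripts user interactions after the story renders, and interaction or unit-level tests can then assert on the outcome. A minimal sketch, assuming @storybook/testing-library and a hypothetical LoginForm component:

// LoginForm.stories.jsx – component, labels, and button text are hypothetical.
import React from 'react';
import { within, userEvent } from '@storybook/testing-library';
import { LoginForm } from './LoginForm';

export default {
  title: 'Example/LoginForm',
  component: LoginForm,
};

const Template = (args) => <LoginForm {...args} />;

export const FilledForm = Template.bind({});
// Runs in the browser after the story renders, simulating a user.
FilledForm.play = async ({ canvasElement }) => {
  const canvas = within(canvasElement);
  await userEvent.type(canvas.getByLabelText('Username'), 'andy');
  await userEvent.type(canvas.getByLabelText('Password'), 'i<3pandas');
  await userEvent.click(canvas.getByRole('button', { name: 'Log in' }));
};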

The post Testing Storybook Components in Any Browser – Without Writing Any New Tests! appeared first on AI-Powered End-to-End Testing | Applitools.

How to Visually Test a Remix App with Applitools and Cypress https://app14743.cloudwayssites.com/blog/how-to-visually-test-remix-app-applitools-cypress/ Tue, 22 Mar 2022 20:59:50 +0000 https://app14743.cloudwayssites.com/?p=35712 Is Remix too new to be visually tested? Let’s find out with Applitools and Cypress.

The post How to Visually Test a Remix App with Applitools and Cypress appeared first on AI-Powered End-to-End Testing | Applitools.


Is Remix too new to be visually tested? Let’s find out with Applitools and Cypress.

In this blog post, we answer a single question: how to best visually test a Remix-based app? 

We walk through Remix and build a demo app to best showcase the framework. Then, we take a deep dive into visual testing with Applitools and Cypress. We close on scaling our test coverage with the Ultrafast Test Cloud to perform cross-browser validation of the app.

So let our exciting journey begin in pursuit of learning how to visually test a Remix-based app.

What is Remix?

Remix Logo

Web development is an ever-growing space with almost as many ways to build web apps as there are stars in the sky. And it ultimately translates into just how many different User Interface (UI) frameworks and libraries there are. One such library is React, which most people in the web app space have heard about, or even used to build a website or two. 

For those unfamiliar with React, it’s a declarative, component-based library that developers can use to build web apps across different platforms. While React is a great way to develop robust and responsive UIs, many moving pieces still happen behind the scenes. Things like data loading, routing, and more complex work like Server-Side Rendering are what a new framework called Remix can handle for React apps.

Remix is a full-stack web framework that optimizes data loading and routing, making pages load faster and improving overall User Experience (UX). The days are long past when our customers would wait minutes while a website reloads, while moving from one page to another, or expecting an update on their feed. Features like Server-Side Rendering, effective routing, and data loading have become the must for getting our users the experience they want and need. The Remix framework is an excellent open-source solution for delivering these features to our audience and improving their UX.
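
As a small sketch of what that looks like in practice (a hypothetical route module using the Remix v1 API of early 2022; getPosts is a stand-in data source), a route exports a loader that runs on the server, and the component reads its result with useLoaderData:

// app/routes/posts.jsx – a hypothetical Remix route module.
import { json, useLoaderData } from 'remix';
import { getPosts } from '~/data/posts'; // stand-in for a real data source

// Runs on the server before rendering, so the HTML arrives
// already populated with data – no client-side loading spinner.
export async function loader() {
  return json(await getPosts());
}

export default function Posts() {
  const posts = useLoaderData();
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.slug}>{post.title}</li>
      ))}
    </ul>
  );
}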

What Does Remix Mean For UI Testing?

Testing Remix with Cypress and Applitools

Our end-users shouldn’t care what framework we used to build a website. What matters to our users is that the app works and lets them achieve their goals as fast as possible. In the same way, the testing principles always remain the same, so UI testing shouldn’t be impacted by the frameworks used to create an app. The basics of how we test stay the same although some testing aspects could change. For example, in the case of an Angular app, we might need to adjust how we wait for the site to fully load by using a specialized test framework like Protractor.

Most tests follow a straightforward pattern of Arrange, Act, and Assert. Whether you are writing a unit test, an integration test, or an end-to-end test, everything follows this cycle of setting up the data, running through a set of actions and validating the end state.

When writing these end-to-end tests, we need to put ourselves in the shoes of our users. What matters most in this type of testing is replicating a set of core use-cases that our end-users go through. It could be logging into an app, writing a new post, or navigating to a new page. That’s why UI test automation frameworks like Applitools and Cypress are fantastic for testing – they are largely agnostic of the platform they are testing. With these tools in hand, we can quickly check Remix-based apps the same way we would test any other web application.

What about Remix and Visual Testing?

The main goal of testing is to confirm the app’s behavior as our users see and experience it. This is why simply loading UI elements and validating inner text or styling is not enough. Our customers are not interested in HTML or CSS. What they care about is what they can see and interact with on our site, not the code behind it. Functional checks alone do not provide robust coverage of the complex UIs that modern web apps have. We can close this gap with visual testing.

Contrasting what our tests see (with an image of code) with what our customers see (with an image of a website's UI).
Functional vs Visual Testing Perspective

Visual testing allows us to see our app from our customers’ point of view. And that’s where the Applitools Eyes SDK comes in! This visual testing tool can enhance the existing end-to-end test coverage to ensure our app is pixel-perfect.

To simplify, Applitools allows developers to effectively compare visual elements across various screens to find visible defects. Applitools can record our UI elements in its platform and then monitor any visual regressions that our customers might encounter. More specifically, this testing framework exposes the visible differences between baseline snapshots and future snapshots.

Applitools has integrations with numerous testing platforms like Cypress, WebdriverIO, Selenium, and many others. For this article, we will showcase Applitools with Cypress to add visual test coverage to our Remix app.

Introducing Remix Demo App

We can’t talk about a framework like Remix without seeing it in practice. That’s why we put together a demo app to best showcase Remix and later test it with Applitools and Cypress.

A screenshot of a Remix demo app
Remix Demo App

We based this app on the Remix Developer Blog app that highlights the core functionalities of Remix: data loading, actions, redirects, and more. We shared this demo app and all the tests we cover in this article in this repository so that our readers can follow along.

Running Demo App 

Before diving into writing tests, we must ensure that our Remix demo application is running.

To start, we need to clone a project from this repository:

git clone https://github.com/dmitryvinn/remix-demo-app-applitools

Then, we navigate into the project’s root directory and install all dependencies:

cd remix-demo-app-applitools
npm install

After we install the necessary dependencies, our app is ready to start:

npm run dev

After we launch the app, it should be available at http://localhost:3000/, unless the port is already taken. With our Remix demo app fully functional, we can transition into testing Remix with Applitools and Cypress.

Visual Testing of Remix App with Applitools and Cypress

There is this great quote from a famous American economist, Richard Thaler: “If you want people to do something, make it easy.” That’s what Applitools and Cypress did by making testing easy for developers, so people don’t see it as a chore anymore.

To run our visual test automation using Applitools, we first need to set up Cypress, which will play the role of test runner. We can think about Cypress as a car’s body, whereas Applitools is an engine that powers the vehicle and ultimately gets us to our destination: a well-tested Remix web app.

Setting up Cypress

Cypress is an open-source JavaScript end-to-end testing framework developers can use to write fast, reliable, and maintainable tests. But rather than reinventing the wheel and talking about the basics of Cypress, we invite our readers to learn more about using this automation framework on the official site, or from this course at Test Automation University.

To install Cypress, we only need to run a single command:

npm install cypress

Then, we need to initialize the cypress folder to write our tests. The easiest way to do it is by running the following:

npx cypress open

This command will open Cypress Studio, which we will cover later in the article, but for now we can safely close it. We also recommend deleting sample test suites that Cypress created for us under cypress/integration.

Note: If npx is missing on the local machine, follow these steps on how to update the Node package manager, or run ./node_modules/.bin/cypress open instead.

Setting up Applitools

Installing the Applitools Eyes SDK with Cypress is a very smooth process. In our case, because we already had Cypress installed, we only need to run the following:

npm install @applitools/eyes-cypress --save-dev

To run Applitools tests, we need to get the Applitools API key, so our test automation can use the Eyes platform, including recording the UI elements, validating any changes on the screen, and more. This page outlines how to get this APPLITOOLS_API_KEY from the platform.

After getting the API key, we have two options on how to add the key to our tests suite: using a CLI or an Applitools configuration file. Later in this post, we explore how to scale Applitools tests, and the configuration file will play a significant role in that effort. Hence, we continue by creating applitools.config.js in our root directory.

Our configuration file will begin with the most basic setup of running a single test thread (testConcurrency) for one browser (the browser field). We also need to add our APPLITOOLS_API_KEY under the apiKey field, which will look something like this:

module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_OUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};

Now, we are ready to move onto the next stage of writing our visual tests with Applitools and Cypress.

Writing Tests with Applitools and Cypress

One of the best things about Applitools is that it nicely integrates with our existing tests with straightforward API.

For this example, we visually test a simple form on the Actions page of our Remix app.

An Action form in the demo remix app, showing a question: "What is more useful when it is broken?" with an answer field and an answer button.
Action Form in Remix App

To begin writing our tests, we need to create a new file named actions-page.spec.js in the cypress/integration folder:

Basic Applitools Test File

Since we rely on Cypress as our test runner, we will continue using its API for writing the tests. For the basic Actions page tests where we validate that the page renders visually correctly, we start with this code snippet:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    // ...

    // Act
    // ..

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

We continue following the same pattern of Arrange-Act-Assert, but now we also want to ensure that we close all the resources we used while performing the visual testing. To begin our test case, we need to visit the Action page:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    // ..

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

Now, we can begin the visual validation by using the Applitools Eyes framework. We need to “open our eyes,” so to speak, by calling cy.eyesOpen(). It initializes our test runner so that Applitools can capture critical visual elements, just like we would with our own eyes:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    // ..

    // Cleanup
    // ..
  });
});

Note: Technically speaking, cy.eyesOpen() should be a part of the Arrange step of writing the test, but for educational purposes, we are moving it under the Act portion of the test case.

Now, to move to the validation phase, we need Applitools to take a screenshot and match it against the existing version of the same UI elements. These screenshots are saved on our Applitools account, and unless we are running the test case for the first time, the Applitools framework will match these UI elements against the version that we previously saved:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");

    // Cleanup
    // ..
  });
});

Lastly, we need to close our test runner for Applitools by calling cy.eyesClose(). With this step, we now have a complete Applitools test case for our Actions page:

describe("Actions page form", () => {
  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");

    // Cleanup
    cy.eyesClose();
  });
});

Note: Although we added a cleanup-stage with cy.eyesClose() in the test case itself, we highly recommend moving this method outside of the it() function into the afterEach() that will run for every test, avoiding code duplication.
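
A sketch of that refactor, reusing the same test from above:

describe("Actions page form", () => {
  // Cleanup now runs after every test, so no test can forget it.
  afterEach(() => {
    cy.eyesClose();
  });

  it("Visually confirms action form renders", () => {
    // Arrange
    cy.visit("http://localhost:3000/demos/actions");

    // Act
    cy.eyesOpen({
      appName: "Remix Demo App",
      testName: "Validate Action Form",
    });

    // Assert
    cy.eyesCheckWindow("Action Page");
  });
});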

Running Applitools Tests

After the hard work of planning and then writing our test suite, we can finally start running our tests. And it couldn’t be easier than with Applitools and Cypress! 

We have two options of either executing our tests by using Cypress CLI or Cypress Studio.

Cypress Studio is a great option when we first write our tests because we can walk through every case, stop the process at any point, or replay any failures. That’s why we’ll use Cypress Studio to best demonstrate how these tests function.

We begin running our cases by invoking the following from the project’s root directory:

npm run cypress-open

This operation opens Cypress Studio, where we can select what test suite to run:

The Cypress Studio dashboard, where we can select actions-page-spec.js
Actions Tests in Cypress Studio

To validate the result, we need to visit our Applitools dashboard:

The Applitools dashboard, displaying the Remix Demo App test with the Action Page.
Basic Visual Test in the Applitools Dashboard

To make it interesting, we can cause this test to fail by changing the text on the Actions page. We could change the heading to say “Failed Actions!” instead of the original “Actions!” and re-run our test. 

This change will cause our original test case to fail because it will catch a difference in the UI (in our case, it’s because of the intentional renaming of the heading). This error message is what we will see in the Cypress Studio:

Cypress Studio showing a red error message that reads, in part: "Eyes-Cypress detected diffs or errors during execution of visual tests."
Failed Visual Test in Cypress Studio

To further deal with this failure, we need to visit the Applitools dashboard:

Applitools dashboard showing the latest test results as "Unresolved."
Failed Visual Test in Applitools Dashboard

As we can see, the latest test run is shown as Unresolved, and we might need to resolve the failure. To see what the difference in the newest test run is, we only need to click on the image in question:

A closer look at the Applitools visual test results, highlighting the areas where the text changed in magenta.
Closer Look at the Failed Test in Applitools Dashboard

A great thing about Applitools is that their visual AI algorithm is so advanced that it can test our application on different levels to detect content changes as well as layout or color updates. What’s especially important is that Applitools’ algorithm prevents false positives with built-in functionalities like ignoring content changes for apps with dynamic content. 
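
For example, a single check can be relaxed for dynamic content – a sketch using eyes-cypress check options, with a hypothetical selector:

cy.eyesCheckWindow({
  tag: "Action Page",
  matchLevel: "Layout",                      // compare structure, not exact content
  ignore: [{ selector: "#dynamic-banner" }], // hypothetical dynamic region
});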

In our case, the test correctly shows that the heading changed, and it’s now up to us to either accept the new UI or reject it and call this failure a legitimate bug. Applitools makes it easy to choose the correct course of action as we only need to press thumbs up to accept the test result or thumbs down to decline it.

Accepting or Rejecting Test Run in Applitools Dashboard

In our case, the test case failed due to a visual bug that we introduced by “unintentionally” updating the heading. 

After finishing our work in the Applitools Dashboard, we can bring the test results back to the developers and file a bug on whoever made the UI change.

But are we done? What about testing our web app on different browsers and devices? Fortunately, Applitools has a solution to quickly scale our test automation and add cross-browser coverage.

Scaling Visual Tests Across Browsers

Testing an application against one browser is great, but what about all others? We have checked our Remix app on Chrome, but we didn’t see how the app performs on Firefox, Microsoft Edge, and so on. We haven’t even started looking into mobile platforms and our web app on Android or iOS. Introducing this additional test coverage can get out of hand quickly, but not with Applitools and their Ultrafast Test Cloud. It’s just one configuration change away!

With this cloud solution from Applitools, we can test our app across different browsers without any additional code. We only have to update our Applitools configuration file, applitools.config.js.

Below is an example of how to add coverage for desktop browsers like Chrome, Firefox, Safari and IE11, plus two extra test cases for different models of mobile phones:

module.exports = {
  testConcurrency: 1,
  apiKey: "DONT_SHARE_YOUR_APPLITOOLS_API_KEY",
  browser: [
    // Add browsers with different viewports
    { width: 800, height: 600, name: "chrome" },
    { width: 700, height: 500, name: "firefox" },
    { width: 1600, height: 1200, name: "ie11" },
    { width: 800, height: 600, name: "safari" },
    // Add mobile emulation devices in Portrait or Landscape mode
    { deviceName: "iPhone X", screenOrientation: "landscape" },
    { deviceName: "Pixel 2", screenOrientation: "portrait" },
  ],
  // set batch name to the configuration
  batchName: "Remix Demo App",
};

It’s important to note that when specifying the configuration for different browsers, we need to define their width and height, with an additional property for screenOrientation to cover non-desktop devices. These settings are critical for testing responsive apps because many modern websites visually differ depending on the devices our customers use.

After updating the configuration file, we need to re-run our test suite with npm test. Fortunately, with the Applitools Ultrafast Test Cloud, it only takes a few seconds to finish running our tests on all browsers, so we can visit our Applitools Dashboard to view the results right away:

The Applitools dashboard, showing passed tests for our desired suite of browsers.
Cross-browser Coverage with Applitools

The Applitools dashboard, showing a visual checkpoint with a Pixel 2 mobile phone in portrait orientation.
Mobile Coverage with Applitools

As we can see, with only a few lines in the configuration file, we scaled our visual tests across multiple devices and browsers. We save ourselves time and money whenever we can get extra test coverage without explicitly writing new cases. Maintaining test automation that we write is one of the most resource-consuming steps of the Software Development Life Cycle. With solutions like Applitools Ultrafast Test Cloud, we can write fewer tests while increasing our test coverage for the entire app.

Verdict: Can Remix Apps be Visually Tested with Applitools and Cypress?

Hopefully, this article showed that the answer is yes; we can successfully visually test Remix-based apps with Applitools and Cypress! 

Remix is a fantastic framework to take User Experience to the next level, and we invite you to learn more about it during the webinar by Kent C. Dodds “Building Excellent User Experiences with Remix”.

For more information about Applitools, visit their website, blog and YouTube channel. They also provide free courses through Test Automation University that can help take anyone’s testing skills to the next level.

The post How to Visually Test a Remix App with Applitools and Cypress appeared first on AI-Powered End-to-End Testing | Applitools.

Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser https://app14743.cloudwayssites.com/blog/lightning-fast-playwright-tests-cross-browser/ Tue, 15 Mar 2022 19:31:23 +0000 https://app14743.cloudwayssites.com/?p=35443 Learn how you can run cross browser tests against any stock browser using Playwright – not just the browser projects like Chromium, Firefox, and WebKit, and not just Chrome and Edge.

The post Running Lightning-Fast Cross-Browser Playwright Tests Against any Browser appeared first on AI-Powered End-to-End Testing | Applitools.


Learn how you can run cross browser tests against any stock browser using Playwright – not just the browser projects like Chromium, Firefox, and WebKit, and not just Chrome and Edge.

These days, there are a plethora of great web test automation tools. Although Selenium WebDriver seems to retain its top spot in popularity, alternatives like Playwright are quickly growing their market share. Playwright is an open source test framework developed at Microsoft by the same folks who worked on Puppeteer. It is notable for its concise syntax, execution speed, and advanced features. Things like automatic waiting and carefully designed assertions protect tests against flakiness. And like Selenium, Playwright has bindings for multiple languages: TypeScript, JavaScript, Python, .NET, and Java.

However, Playwright has one oddity that sets it apart from other frameworks: instead of testing browser applications, Playwright tests browser projects. What does this mean? Major modern browser applications like Chrome, Edge, and Safari are built on browser projects that serve as their internal foundations. For example, Google Chrome is based on the Chromium project. Typically, these internal projects are open source and provide a rendering engine for web pages.

The table below shows the browser projects used by major browser apps:

Browser project      Browser app
Chromium             Google Chrome, Microsoft Edge, Opera
Firefox (Gecko)      Mozilla Firefox
WebKit               Apple Safari

Browser projects offer Playwright unique advantages. Setup is super easy, and tests are faster using browser contexts. However, some folks need to test full browser applications, not just browser projects. Some teams are required to test specific configurations for compliance or regulations. Other teams may feel like testing projects instead of “stock” browsers is too risky. Playwright can run tests directly against Google Chrome and Microsoft Edge with a little extra configuration, but it can’t hit Firefox, Safari, or IE, and in my anecdotal experience, tests against Chrome and Edge run many times slower than the same tests against Chromium. Playwright’s focus on browser projects over browser apps is a double-edged sword: while it arguably helps most testers, it inherently leaves others out.
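
(For reference, that “little extra configuration” is Playwright’s channel option; here is a sketch of a playwright.config.js pointing two projects at the stock Chrome and Edge installed on the machine:)

// playwright.config.js – requires Chrome and Edge to be installed locally.
const config = {
  projects: [
    { name: 'Google Chrome', use: { channel: 'chrome' } },
    { name: 'Microsoft Edge', use: { channel: 'msedge' } },
  ],
};

module.exports = config;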

Thankfully, there is a way to run Playwright tests against full browser apps, not just browser projects: using Applitools Visual AI with the Ultrafast Test Cloud. With the help of Applitools, you can achieve true cross-browser testing with Playwright at lightning speed, even for large test suites. Let’s see how it’s done. We’ll start with a basic Playwright test in JavaScript, and then we’ll add visual snapshots that can be rendered using any browser in the Applitools cloud.

Defining a Test Case

Let’s define a basic web app login test for the Applitools demo site. The site mimics a basic banking app. The first page is a login screen:

A demo login form, with a username field, password field, and a few other selectable items.

You can enter any username or password to login. Then, the main page appears:

A main page for our demo banking app, showing total balance, amount due today, recent transactions and more.

Nothing fancy here. The steps for our test case are straightforward:

Scenario: Successful login
  Given the login page is displayed
  When the user enters their username and password
  And the user clicks the login button
  Then the main page is displayed

These steps would be the same for the login behavior of any other application.

Automating a Playwright Test

Let’s automate our login test in JavaScript using Playwright. We could automate our test in TypeScript (which is arguably better), but I’ll use JavaScript for this example to keep the code plain and simple.
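
First, create a new project and install Playwright. One way to do this, assuming npm (the directory name is just an example):

$ mkdir playwright-login-demo && cd playwright-login-demo
$ npm init -y
$ npm install -D @playwright/test
$ npx playwright install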

Under the tests folder, create a new file named login.spec.js, and add the following test stub:

const { test, expect } = require('@playwright/test');

test.describe.configure({ mode: 'parallel' })

test.describe('Login', () => {

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});
   });

   test('should log into the demo app', async ({ page }) => {
      
       // Load login page
       // ...

       // Verify login page
       // ...
      
       // Perform login
       // ...

       // Verify main page
       // ...
   });
})

Playwright uses a Mocha-like structure for test cases. The test.beforeEach(...) call sets an explicit viewport size for testing to make sure the responsive layout renders as expected. The test(...) call includes sections for each step.

Let’s implement the steps using Playwright calls. Here’s the first step to load the login page:

       // Load login page
       await page.goto('https://demo.applitools.com');

The second step verifies that elements like username and password fields appear on the login page. Playwright’s assertions automatically wait for the elements to appear:

        // Verify login page
       await expect(page.locator('div.logo-w')).toBeVisible();
       await expect(page.locator('id=username')).toBeVisible();
       await expect(page.locator('id=password')).toBeVisible();
       await expect(page.locator('id=log-in')).toBeVisible();
       await expect(page.locator('input.form-check-input')).toBeVisible();

The third step actually logs into the site like a human user:

       // Perform login
       await page.fill('id=username', 'andy')
       await page.fill('id=password', 'i<3pandas')
       await page.click('id=log-in')

The fourth and final step makes sure the main page loads correctly. Again, assertions automatically wait for elements to appear:

       // Verify main page
      
       //   Check various page elements
       await expect.soft(page.locator('div.logo-w')).toBeVisible();
       await expect.soft(page.locator('ul.main-menu')).toBeVisible();
       await expect.soft(page.locator('div.avatar-w img')).toHaveCount(2);
       await expect.soft(page.locator('text=Add Account')).toBeVisible();
       await expect.soft(page.locator('text=Make Payment')).toBeVisible();
       await expect.soft(page.locator('text=View Statement')).toBeVisible();
       await expect.soft(page.locator('text=Request Increase')).toBeVisible();
       await expect.soft(page.locator('text=Pay Now')).toBeVisible();
       await expect.soft(page.locator(
           'div.element-search.autosuggest-search-activator > input'
       )).toBeVisible();

       //    Check time message
       await expect.soft(page.locator('id=time')).toContainText(
           /Your nearest branch closes in:( \d+[hms])+/);

       //    Check menu element names
       await expect.soft(page.locator('ul.main-menu li span')).toHaveText([
           'Card types',
           'Credit cards',
           'Debit cards',
           'Lending',
           'Loans',
           'Mortgages'
       ]);

       //    Check transaction statuses
       let statuses =
           await page.locator('span.status-pill + span').allTextContents();
       statuses.forEach(item => {
           expect.soft(['Complete', 'Pending', 'Declined']).toContain(item);
       });

The first three steps are nice and concise, but the code for the fourth step is quite long. Despite making several assertions for various page elements, there are still things left unchecked!

Run the test locally to make sure it works:

$ npx playwright test

This command will run the test against all three Playwright browsers – Chromium, Firefox, and WebKit – in headless mode and in parallel. You can append the “--headed” option to see the browsers open and render the pages. The tests should take only a few seconds to complete, and they should all pass.
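
For example, to watch the browsers in headed mode:

$ npx playwright test --headed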

Introducing Visual Snapshots

You could run this login test on your local machine or from your Continuous Integration (CI) service, but in its present form, it can’t run against certain “stock” browsers like Apple Safari or Internet Explorer. If you attempt to use a browser channel to test stock Chrome or Edge browsers, tests would probably run much slower compared to Chromium. To run against any browser at lightning speed, we need the help of visual testing techniques using Applitools Visual AI and the Ultrafast Test Cloud.

Visual testing is the practice of inspecting visual differences between snapshots of screens in the app you are testing. You start by capturing a “baseline” snapshot of, say, the login page to consider as “right” or “expected.” Then, every time you run the tests, you capture a new snapshot of the same page and compare it to the baseline. By comparing the two snapshots side-by-side, you can detect any visual differences. Did a button go missing? Did the layout shift to the left? Did the colors change? If nothing changes, then the test passes. However, if there are changes, a human tester should review the differences to decide if the change is good or bad.

Manual testers have done visual testing since the dawn of computer screens. Applitools Visual AI simply automates the process. It highlights differences in side-by-side snapshots so you don’t miss them. Furthermore, Visual AI focuses on meaningful changes that human eyes would notice. If an element shifts one pixel to the right, that’s not a problem. Visual AI won’t bother you with that noise.

If a picture is worth a thousand words, then a visual snapshot is worth a thousand assertions. We could update our login test to take visual snapshots using Applitools Eyes SDK in place of lengthy assertions. Visual snapshots provide stronger coverage than the previous assertions. Remember how our login test made several checks but still didn’t cover all the elements on the page? A visual snapshot would implicitly capture everything with only one line of code. Visual testing like this enables more effective functional testing than traditional assertions.

But back to the original problem: how does this enable us to run Playwright tests against any stock browser? That’s the magic of snapshots. Notice how I said “snapshot” and not “screenshot.” A screenshot is merely a grid of static pixels. A snapshot, however, captures full page content – HTML, CSS, and JavaScript – that can be re-rendered in any browser configuration. If we update our Playwright test to take visual snapshots of the login page and the main page, then we can run our test one time locally to capture the snapshots. Then, the Applitools Eyes SDK uploads the snapshots to the Applitools Ultrafast Test Cloud to render them in any target browser – including browsers not natively supported by Playwright – and compare them against baselines. All the heavy work for visual checkpoints is done by the Applitools Ultrafast Test Cloud, not by the local machine. It also works fast, since re-rendering snapshots takes much less time than re-running full tests.

Updating the Playwright Test

Let’s turn our login test into a visual test. First, make sure you have an Applitools account. You can register for a free account to get started.

Next, install the Applitools Eyes SDK for Playwright into your project:

$ npm install -D @applitools/eyes-playwright

Add the following import statement to login.spec.js:

const {
   VisualGridRunner,
   Eyes,
   Configuration,
   BatchInfo,
   BrowserType,
   DeviceName,
   ScreenOrientation,
   Target,
   MatchLevel
} = require('@applitools/eyes-playwright');

Next, we need to specify which browser configurations to run in Applitools Ultrafast Grid. Update the test.beforeEach(...) call to look like this:

test.describe('Login', () => {
   let eyes, runner;

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});

       runner = new VisualGridRunner({ testConcurrency: 5 });
       eyes = new Eyes(runner);
  
       const configuration = new Configuration();
       configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
  
       configuration.addBrowser(800, 600, BrowserType.CHROME);
       configuration.addBrowser(700, 500, BrowserType.FIREFOX);
       configuration.addBrowser(1600, 1200, BrowserType.IE_11);
       configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
       configuration.addBrowser(800, 600, BrowserType.SAFARI);
  
       configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
  
       eyes.setConfiguration(configuration);
   });
})

That’s a lot of new code! Let’s break it down:

  1. The page.setViewportSize(...) call remains unchanged. It will set the viewport only for the local test run.
  2. The runner object points visual tests to the Ultrafast Grid.
  3. The testConcurrency setting controls how many visual tests will run in parallel in the Ultrafast Grid. A higher concurrency means shorter overall execution time. (Warning: if you have a free account, your concurrency limit will be 1.)
  4. The eyes object watches the browser for taking visual snapshots.
  5. The configuration object sets the test batch name and the various browser configurations to test in the Ultrafast Grid.

This configuration will run our visual login test against 10 different browsers: 5 desktop browsers of various viewports, and 5 mobile browsers of various orientations.

Time to update the test case. We must “open” Applitools Eyes at the beginning of the test to capture snapshots, and we must “close” Eyes at the end. Passing false to eyes.close tells the SDK not to throw an exception as soon as differences are found; we will collect the results from the runner instead:

test('should log into the demo app', async ({ page }) => {
      
       // Open Applitools Eyes
       await eyes.open(page, 'Applitools Demo App', 'Login');

       // Test steps
       // ...

       // Close Applitools Eyes
       await eyes.close(false)
   });

The load and login steps do not need any changes because the interactions are the same. However, the “verify” steps reduce drastically to one-line snapshot calls:

   test('should log into the demo app', async ({ page }) => {
      
       // ...

       // Verify login page
       await eyes.check('Login page', Target.window().fully());
      
       // ...
      
       // Verify main page
       await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());

       // ...
      
   });

These snapshots capture the full window for both pages. The main page also sets a match level to “layout” so that differences in text and color are ignored. Snapshots will be captured once locally and uploaded to the Ultrafast Grid to be rendered on each target browser. Bye bye, long and complicated assertions!
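
As an aside, a check doesn’t have to cover the whole window. To scope a snapshot to a single element, the same fluent API accepts region targets – a minimal sketch, reusing a selector from our earlier assertions:

       await eyes.check('Main menu', Target.region('ul.main-menu'));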

Finally, after each test, we should add safety handling and result dumping. The eyes.abort() call ends the visual test safely if eyes.close was never reached (for example, because a step failed partway), and runner.getAllTestResults collects the results from the Ultrafast Grid:

   test.afterEach(async () => {
       await eyes.abort();

       const results = await runner.getAllTestResults(false);
       console.log('Visual test results', results);
   });

The completed code for login.spec.js should look like this:

const { test } = require('@playwright/test');
const {
   VisualGridRunner,
   Eyes,
   Configuration,
   BatchInfo,
   BrowserType,
   DeviceName,
   ScreenOrientation,
   Target,
   MatchLevel
} = require('@applitools/eyes-playwright');


test.describe.configure({ mode: 'parallel' })

test.describe('Login', () => {
   let eyes, runner;

   test.beforeEach(async ({ page }) => {
       await page.setViewportSize({width: 1600, height: 1200});

       runner = new VisualGridRunner({ testConcurrency: 5 });
       eyes = new Eyes(runner);
  
       const configuration = new Configuration();
       configuration.setBatch(new BatchInfo('Modern Cross Browser Testing Workshop'));
  
       configuration.addBrowser(800, 600, BrowserType.CHROME);
       configuration.addBrowser(700, 500, BrowserType.FIREFOX);
       configuration.addBrowser(1600, 1200, BrowserType.IE_11);
       configuration.addBrowser(1024, 768, BrowserType.EDGE_CHROMIUM);
       configuration.addBrowser(800, 600, BrowserType.SAFARI);
  
       configuration.addDeviceEmulation(DeviceName.iPhone_X, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Pixel_2, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Galaxy_S5, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.Nexus_10, ScreenOrientation.PORTRAIT);
       configuration.addDeviceEmulation(DeviceName.iPad_Pro, ScreenOrientation.LANDSCAPE);
  
       eyes.setConfiguration(configuration);
   });

   test('should log into the demo app', async ({ page }) => {
      
       // Open Applitools Eyes
       await eyes.open(page, 'Applitools Demo App', 'Login');

       // Load login page
       await page.goto('https://demo.applitools.com');

       // Verify login page
       await eyes.check('Login page', Target.window().fully());
      
       // Perform login
       await page.fill('id=username', 'andy')
       await page.fill('id=password', 'i<3pandas')
       await page.click('id=log-in')

       // Verify main page
       await eyes.check('Main page', Target.window().matchLevel(MatchLevel.Layout).fully());

       // Close Applitools Eyes
       await eyes.close(false)
   });

   test.afterEach(async () => {
       await eyes.abort();

       const results = await runner.getAllTestResults(false);
       console.log('Visual test results', results);
   });
})

Now, it’s a visual test! Let’s run it.

Running the Test

Your account comes with an API key. Visual tests using Applitools Eyes need this API key for uploading results to your account. On your machine, set this key as an environment variable.

On Linux and macOS:

$ export APPLITOOLS_API_KEY=<value>

On Windows:

> set APPLITOOLS_API_KEY=<value>
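
If you use PowerShell instead of the classic command prompt:

> $env:APPLITOOLS_API_KEY="<value>"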

Then, launch the test using only one browser locally:

$ npx playwright test --browser=chromium

(Warning: If your playwright.config.js file has projects configured, you will need to use the “--project” option instead of the “--browser” option. Playwright may automatically configure this if you run npm init playwright to set up the project.)
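
For reference, a projects-based config might look like this minimal sketch (one Chromium project, using the conventional name):

// playwright.config.js – minimal sketch with a single Chromium project
const { devices } = require('@playwright/test');

module.exports = {
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
  ],
};

With a config like that, the equivalent command is:

$ npx playwright test --project=chromium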

When this test runs, it will upload snapshots for both the login page and the main page to the Applitools test cloud. It needs to run only one time locally to capture the snapshots. That’s why we set the command to run using only Chromium.

Open the Applitools dashboard to view the visual results:

The Applitools dashboard, displaying the results of our new visual tests, each marked with a Status of 'New'.

Notice how this one login test has one result for each target configuration. All results have “New” status because they are establishing baselines. Also, notice how little time it took to run this batch of tests:

The listed batch duration for all 10 tests, with a total of 20 steps, is 36 seconds.

Running our test across 10 different browser configurations with 2 visual checkpoints each at a concurrency level of 5 took only 36 seconds to complete. That’s ultra fast! Running that many test iterations with a Selenium Grid or similar scale-out platform could take several minutes.

Run the test again. The second run should succeed just like the first. However, the new dashboard results now say “Passed” because Applitools compared the latest snapshots to the baselines and verified that they had not changed:

The Applitools dashboard, displaying the results of our visual tests, each marked with a Status of 'Passed'.

This time, all variations took 32 seconds to complete – about half a minute.

Passing tests are great, but what happens if a page changes? Consider an alternate version of the login page:

A demo login form, with a username field, password field, and a few other selectable items. This time there is a broken image and changed login button.

This version has a broken icon and a different login button. Modify the Playwright call to load the login page to test this version of the site like this:

       await page.goto('https://demo.applitools.com/index_v2.html');

Now, when you rerun the test, results appear as “Unresolved” in the Applitools dashboard:

The Applitools dashboard, displaying the results of our latest visual tests, each marked with a Status of 'Unresolved'.

When you open each result, the dashboard will display visual comparisons for each snapshot. If you click the snapshot, it opens the comparison window:

A comparison window showing the baseline and the new visual checkpoint, with the changes highlighted in magenta by Visual AI.

The baseline snapshot appears on the left, while the latest checkpoint snapshot appears on the right. Differences will be highlighted in magenta. As the tester, you can choose to either accept the change as a new baseline or reject it as a failure.

Taking the Next Steps

Playwright truly is a nifty framework. Thanks to the Applitools Ultrafast Grid, you can upgrade any Playwright test with visual snapshots and run it against any browser, even ones not natively supported by Playwright. Applitools enables Playwright tests to become cross-browser tests. Just note that this style of testing focuses on cross-browser page rendering, not cross-browser page interactions. You may still want to run your Playwright tests locally against Firefox and WebKit in addition to Chromium, while using the Applitools Ultrafast Grid to validate rendering on different browser and viewport configurations.
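
For example, assuming your config defines firefox and webkit projects:

$ npx playwright test --project=firefox --project=webkit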

Want to see the full code? Check out this GitHub repository: applitools/workshop-cbt-playwright-js.

Want to try visual testing for yourself? Register for a free Applitools account.

Want to see how to do this type of cross-browser testing with Cypress? Check out this article.
