Product Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/category/product/

Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies
https://app14743.cloudwayssites.com/blog/playwright-visual-testing-strategy/
Thu, 19 Mar 2026

Visual testing in Playwright often forces teams to choose between strict failures, snapshot maintenance, and CI pipeline complexity. This article explores how a single configuration flag introduces three different strategies for handling visual differences and improving the Playwright developer experience.

Hello everyone! I’m Noam, an SDK developer on the Applitools JS-SDKs team. While my day-to-day focus is on core engineering, I work closely with our field teams and occasionally join technical deep-dive sessions with customers.

In these conversations, we frequently encounter questions about performance and the engineering philosophy behind our integration. Specifically, there is often curiosity about how to make visual testing feel more “Playwright-native” and natural to developers.

In this post, I’ll share the design logic behind these architectural choices so you can apply these patterns in your own CI pipelines in a way that fits your organization’s needs.

Adding an unresolved state to Playwright

Integrating visual regression testing into Playwright requires combining two different status models: Playwright’s binary Pass/Fail and the visual testing concept of unresolved.

In visual testing, instead of only two states (passed and failed), there is a third: unresolved. This state indicates that a difference was detected, but a human decision is required to determine whether it is a bug or a valid change that should be approved as a new baseline.

​Playwright doesn’t support this third state out of the box. Visual test maintenance using Playwright’s native toHaveScreenshot API forces the developer into a cumbersome cycle requiring three separate test executions:

  1. First, the developer runs the test to see the failure.
  2. Then, they rerun with the --update-snapshots flag to create new baseline images.
  3. Finally, most developers run once more to validate that everything passes against the updated baseline as expected—which isn’t always the case, because Playwright’s native comparison method (pixelmatch) tends to be very flaky, unlike Visual AI.

​After this local cycle, the developer must commit the new baseline images to the repository—bloating the git history—and wait for a new CI execution to provide final feedback. For dev-centered organizations that focus on feedback loop velocity, this workflow is… suboptimal. Personally, I believe that’s one of the reasons visual testing isn’t as popular as it should be among Playwright users.

​When we engineered the Applitools fixture, one of our goals was to support this Unresolved state natively, without disrupting Playwright’s core lifecycle—specifically its Worker Processes and Retry mechanisms.

The solution rests on two key engineering decisions: moving rendering to the background (async architecture) and giving developers control over the exit signal and performance tradeoffs (failTestsOnDiff).

We don’t block test execution when Applitools is rendering

The core value of visual testing lies in two capabilities: AI-based comparison that eliminates false positives, and multi-platform rendering.

Architecturally, these processes are cloud-native services.

  • AI-as-a-Service: Just like massive LLMs or other generative models, the Visual AI engine runs on specialized cloud infrastructure optimized for heavy inference. It cannot simply be “installed” on a lightweight CI agent.
  • Platform Constraints: Authentic cross-platform rendering (e.g., iOS Safari on a Linux CI agent) is physically impossible on a single local machine.

Since these operations inherently occur remotely, performing them synchronously would force the local test runner to idle while waiting for network round-trips and cloud processing.

To solve this, we designed the fixture around an asynchronous architecture:

  • Instant Capture: When eyes.check() is called, we synchronously capture the DOM and CSS resources (instead of a rasterized image). This operation is extremely fast.
  • Immediate Release: We purposefully use soft assertions by design. We release the Playwright test thread immediately so the functional logic can proceed to the next step or test case without blocking.
  • Background Heavy Lifting: The heavy work—uploading assets, rendering across different browsers and operating systems, and performing the AI comparison in the Applitools cloud—starts immediately in the background, managed by the Worker process.
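
The three bullets above amount to a capture-now, compare-later queue. Here is a minimal TypeScript sketch of that pattern; it is an illustration of the architecture, not the actual SDK internals (all names below are invented for the example):

```typescript
// Illustrative sketch of the async capture-and-drain pattern.
// Not the real Applitools SDK internals, just the shape of the idea.

type Snapshot = { name: string; dom: string };
type VisualResult = { name: string; status: "passed" | "unresolved" };

class BackgroundChecker {
  private pending: Promise<VisualResult>[] = [];

  // Called from the test: capture synchronously, return immediately.
  check(name: string, dom: string): void {
    const snapshot: Snapshot = { name, dom };
    // The slow part (upload, cloud render, AI compare) runs in the background.
    this.pending.push(this.compareRemotely(snapshot));
  }

  // Stands in for the network round-trip and cloud processing.
  private async compareRemotely(s: Snapshot): Promise<VisualResult> {
    await new Promise((resolve) => setTimeout(resolve, 10)); // simulated latency
    return { name: s.name, status: "passed" };
  }

  // "Draining the queue": the worker waits here after the last test.
  async drain(): Promise<VisualResult[]> {
    return Promise.all(this.pending);
  }
}
```

A fixture following this pattern would call check() during the test and drain() in worker teardown, which is exactly why the worker can stay alive after the last test finishes.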

The “Draining Queue” Effect

​This architecture explains why the Playwright Worker sometimes remains active after the final test completes.

The background tasks are limited only by your account’s concurrency settings and by screenshot size. For example, when rendering a 10,000 px page for a small mobile device, the rendering infrastructure might need extra time for scrolling and stitching. If your functional tests execute faster than the background workers can process the queue (rendering and comparing), the Worker process stays alive at the end solely to “drain the queue” and ensure data integrity.

While this design keeps your test logic running at maximum speed by offloading the processing cost to the background, it can cause friction and frustration when developers see workers “hanging” after tests complete. When facing such issues, our support team is here to advise and assist: we can investigate execution logs and, if needed, make custom suggestions to tailor Eyes-Playwright to your needs.

Solving the Matrix Problem

​Standard Playwright documentation recommends defining multiple projects in playwright.config.ts to cover different browsers (Chromium, Firefox, WebKit) and various viewport sizes.

​While this ensures coverage, it introduces a linear performance penalty (O(N)). To test three browsers across two viewports, your CI must execute the functional logic (clicks, waits, navigation) six times. It’s 6x more load on the CI machine and the testing environment.

​We recommend shifting this workload to the Ultrafast Grid (UFG).

​In this mode, you execute the Playwright test once, typically on Chromium. We upload the DOM state, and our cloud infrastructure renders that state across all configured browsers and viewports in parallel.

This transforms an O(N) execution problem into an O(1) execution problem, significantly shortening the feedback loop.
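
The O(N) cost is easy to see in a standard multi-project setup: expanding three browsers by two viewports yields six project entries, and therefore six full executions of the same functional logic. A sketch of that expansion (project names and viewport values are illustrative):

```typescript
// Native Playwright approach: every browser x viewport pair becomes a
// separate project, so the functional logic runs once per entry -- O(N).
const browsers = ["chromium", "firefox", "webkit"];
const viewports = [
  { width: 1280, height: 720 },
  { width: 390, height: 844 },
];

// Expand the matrix into Playwright-style project definitions.
const projects = browsers.flatMap((browserName) =>
  viewports.map((viewport) => ({
    name: `${browserName}-${viewport.width}x${viewport.height}`,
    use: { browserName, viewport },
  }))
);

// 3 browsers x 2 viewports = 6 full test executions.
console.log(projects.length); // 6
```

With the UFG approach, this collapses to a single Chromium project: the browser/viewport matrix moves into the Eyes configuration, and the additional targets are rendered in the cloud instead of re-executed locally.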

The Strategy: failTestsOnDiff

​Since the actual comparison happens asynchronously and potentially completes after the test logic finishes, we need a mechanism to map the visual result back to the Playwright status.

​This is controlled by the failTestsOnDiff flag. It’s not just a boolean; it’s a strategic choice for your CI pipeline.

Strategy A: false

  • The Logic: This is the configuration our own front-end team uses. We believe that Visual Change ≠ Test Failure.
  • Behavior: The Playwright test passes (Green). The unresolved status is reported externally via our SCM integration (GitHub/GitLab).
  • Why: Retrying a visual test is computationally wasteful—the pixels won’t change on the second run. By keeping the test “Green,” we avoid triggering Playwright’s retry mechanism. The decision is moved to the Pull Request, where it belongs.

Read more about SCM integration, or hop directly to our GitHub, Bitbucket, GitLab, or Azure DevOps articles.

Strategy B: afterAll

  • The Logic: You need a “Red” pipeline to block deployment, but you want to avoid the noise of retries and still gain a significant performance improvement.
  • Behavior: Individual tests pass, but the Worker process exits with a failure code if any diffs were found in the suite.
  • Why: This provides a hard gatekeeper for the build status while still letting the Eyes rendering farms process visual results in the background, so the worker can move on to more tests without blocking the execution thread.

Strategy C: afterEach

  • The Logic: Immediate feedback loop.
  • Behavior: Fails the test immediately in the afterEach hook.
  • Why: Best for local development, where you want to see the failure immediately in the console. It is also useful with Playwright’s trace: retainOnFailure setting, as it ensures traces are preserved for unresolved visual assertions. Not recommended for CI due to the retry loops described above.
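
The strategies above differ mainly in where an unresolved result surfaces and whether an individual test goes red. As a small decision-table sketch (illustrative TypeScript, not SDK code; only the mode names come from the failTestsOnDiff values described above):

```typescript
// Decision table for the three failTestsOnDiff strategies.
// Illustrative only -- the functions below are not part of any SDK.

type VisualStatus = "passed" | "failed" | "unresolved";
type Mode = "false" | "afterAll" | "afterEach";

// Where does an unresolved visual result surface for each mode?
function surfaceOfUnresolved(mode: Mode): string {
  switch (mode) {
    case "false":
      return "pull request via SCM integration; tests stay green";
    case "afterAll":
      return "worker exit code; individual tests stay green";
    case "afterEach":
      return "test failure in the afterEach hook; triggers retries in CI";
  }
}

// Does the individual Playwright test go red on a visual diff?
function testFailsOnDiff(mode: Mode, status: VisualStatus): boolean {
  return mode === "afterEach" && status !== "passed";
}
```

Note how only the afterEach mode ever fails an individual test, which is why it is the only mode that can trip Playwright's retry mechanism.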

TL;DR – When to use each setting

Mode: afterEach
  • Performance: Less performant. The Playwright worker waits after each test for all renders to complete and for the Visual AI to compare the results.
  • Observability: Best. The Applitools reporter shows all statuses correctly; other reporters will consider unresolved tests as failing.
  • Best fit: Local testing.

Mode: afterAll
  • Performance: Best performance. The Playwright workers collect the resources and manage the rendering and Visual AI comparisons in the background.
  • Observability: Good. The Applitools reporter shows all statuses correctly; other reporters will consider unresolved tests as passing. You will get a failure of the worker process, and other reporters won’t link it to a specific test case.
  • Best fit: Local testing AND CI environments without SCM integration.

Mode: false
  • Performance: Best performance. Similar to afterAll.
  • Observability: Great in pull requests (if SCM integration is enabled). The Applitools reporter will reflect the tests perfectly; other reporters will consider unresolved tests as passing.
  • Best fit: CI environments with SCM integration.

Closing the Visibility Gap: The Custom Reporter

​If you adopt Strategy A (false) or Strategy B (afterAll), you introduce a secondary challenge: Visibility.

Since Playwright technically marks these tests as Passed to avoid retries, the standard Playwright HTML Report will show them as “Green,” potentially masking unresolved visual differences that require attention.

​To bridge this gap without forcing developers to switch context, we developed a Custom Applitools Reporter.

​This reporter extends the standard Playwright HTML report. It injects the actual visual status (Passed, Failed, or unresolved) directly into the test results view.

  • True Status: You see which tests have visual diffs, even if the Playwright exit code was successful.
  • Direct Links: It provides a direct link from the test report to the specific batch results in the Applitools Dashboard.
  • Context: It enriches the report with UFG render status and batch information.

​This ensures you get the best of both worlds: The optimization of a “Green” CI run (no retries), with the transparency of a report that highlights exactly where manual review is needed.
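
Conceptually, such a reporter hooks Playwright's test-end callback and cross-references an external visual verdict. The sketch below is a simplified illustration of that idea using stand-in types, not the actual Applitools reporter implementation (every name in it is invented for the example):

```typescript
// Simplified stand-in for a reporter that enriches Playwright results
// with an external visual status. Not the actual Applitools reporter.

type VisualStatus = "passed" | "failed" | "unresolved";

interface TestSummary {
  title: string;
  playwrightStatus: "passed" | "failed";
  visualStatus: VisualStatus;
}

class VisualStatusReporter {
  private summaries: TestSummary[] = [];

  // lookupVisual stands in for querying the visual backend by test title.
  constructor(private lookupVisual: (title: string) => VisualStatus) {}

  // Mirrors a reporter's onTestEnd hook: record the functional result
  // alongside the visual verdict.
  onTestEnd(title: string, playwrightStatus: "passed" | "failed"): void {
    this.summaries.push({
      title,
      playwrightStatus,
      visualStatus: this.lookupVisual(title),
    });
  }

  // Tests needing human review even though Playwright reported green.
  needsAttention(): TestSummary[] {
    return this.summaries.filter(
      (s) => s.playwrightStatus === "passed" && s.visualStatus === "unresolved"
    );
  }
}
```

The key behavior is the last method: a test can be "Green" for Playwright yet still be flagged for manual review because its visual status is unresolved.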

Summary

​The Applitools Playwright fixture is designed to be non-blocking and scalable. By leveraging asynchronous architecture and Applitools UltraFast Grid, we offload the heavy lifting from your CI. By correctly configuring failTestsOnDiff, you ensure that your pipeline reflects your team’s engineering culture—whether that’s strict gating or modern, PR-based visual review.

Quick Answers

What is visual regression testing in Playwright?

Visual regression testing in Playwright verifies that changes to an application’s UI do not introduce unintended visual differences. Playwright can perform basic visual regression checks using screenshot comparisons like toHaveScreenshot, while dedicated visual testing tools (such as Applitools Eyes) extend this by detecting meaningful UI changes, managing baselines, and enabling review workflows for approving visual updates.

What is the best way to do visual testing in Playwright?

Playwright supports basic visual testing through screenshot comparisons such as toHaveScreenshot, but this approach can become difficult to maintain at scale. Dedicated visual testing tools, like Applitools Eyes, extend Playwright by adding Visual AI comparison, cross-browser rendering, and review workflows that allow teams to detect visual regressions without maintaining large sets of screenshot baselines.

How does Playwright screenshot testing (toHaveScreenshot) compare to visual regression testing tools?

Playwright’s toHaveScreenshot performs pixel-by-pixel image comparisons against stored baseline images. While this works for simple cases, it often requires updating and maintaining many snapshots. Visual regression testing tools like Applitools Eyes use Visual AI to detect meaningful UI changes while ignoring insignificant rendering differences, provide review workflows to approve or reject visual changes, and allow custom match levels for different regions of the screen.

Can Playwright run visual tests across multiple browsers and devices?

Yes, but with a limited scope. Natively, Playwright supports three browser engines (Chromium, Firefox, and WebKit), but it does not execute tests across different real operating systems or mobile devices. This lack of OS-level rendering limits coverage and poses a risk of missing platform-specific visual bugs. For example, see how a frontend team caught a visual bug specific to Mac Retina screens that a standard engine check would miss.

How can you run cross-browser visual tests in Playwright without running tests multiple times?

Normally, cross-browser testing requires executing the same tests separately for each browser configuration. Tools like Applitools Ultrafast Grid allow tests to run once while visual rendering is executed across multiple browsers and viewport combinations in parallel. This removes the need to multiply test execution across the full browser matrix.

Why is cross-browser testing in Playwright so slow?

Natively, cross-browser testing introduces a significant performance penalty. Playwright must execute the entire test logic (clicks, waits, network requests) separately for every browser and viewport configuration. Modern visual testing tools (e.g., Applitools Ultrafast Grid) eliminate this overhead by executing the test logic just once locally, performing the cross-browser rendering and visual comparison in parallel in the cloud.

Test Your Components Where You Build with the Applitools Storybook Addon
https://app14743.cloudwayssites.com/blog/test-your-components-where-you-build-with-the-applitools-storybook-addon/
Fri, 17 Oct 2025

Test Storybook components with Visual AI inside Storybook. Catch UI bugs early, bulk-maintain baselines, and scale cross-browser coverage.

Local dev is where most UI changes happen (and where regressions sneak in). States drift, styles diverge, and tiny tweaks pile up until something breaks in CI. The Applitools Storybook Addon brings AI-powered visual testing straight into Storybook so you can catch issues as you code, approve the good changes quickly, and keep your CI/CD pipelines green.

AI-Powered Visual Testing Inside Storybook

Open your Storybook and run visual tests from an Applitools Eyes tab – no context switching. Results are grouped by component to mirror your Storybook structure, and a reporter widget highlights what needs attention first so you can review diffs in minutes, not hours. Learn more on our Storybook Component Testing with Applitools page.

  • Catch bugs where you build. Validate component states during local development and avoid surprises later.
  • Review faster with Visual AI. See only meaningful, human-perceptible UI changes without pixel-to-pixel noise. Tune sensitivity with AI match levels when you need to.
  • Scale coverage painlessly. Run once; render everywhere with Ultrafast Grid across browsers, devices, and viewports in parallel.

How to Use the Applitools Eyes Storybook Addon

Getting started takes just a couple of minutes.

  1. Install the SDK & Addon
    Add Applitools Eyes to your project and enable the Storybook addon (React, Vue, Angular supported). See the installation instructions in the Eyes Storybook Addon docs.
  2. Run Applitools Visual Tests in Storybook
    Open Storybook, switch to the Applitools Eyes tab, and trigger tests for a single story or an entire component. Results stream back in real time with automatic grouping by component.
  3. Review & Maintain
    Use Visual AI diffs, side-by-side views, and auto-maintenance to approve or reject changes in bulk. Prioritized sorting surfaces what needs attention first.
  4. Scale Across Browsers/Devices
    Turn on Ultrafast Grid to parallelize renders across Chrome, Firefox, Safari, Edge, and mobile sizes – without extra local setup.

Applitools Storybook Addon Use Case Playbook

Below are the three most common ways teams use the Eyes Storybook Addon, each with a quick, practical flow pulled right from the product.

Use Case: Guard Your Design System

As you refactor tokens or update themes, run visual tests on every component state. Spot unintended changes across the library instantly.

How to do it in Storybook

  1. Start Storybook and open your design‑system component in the Applitools Eyes tab.
  2. Click Run from the tab (or use Run in the left sidebar test module). The addon tests the stories and streams results inline for every browser/device in your applitools.config.js.
  3. In the sidebar, filter by Unresolved to zero in on changes across the library (Green = passed, Orange = unresolved, Red = failed).
  4. Open a story’s result and use Side‑by‑Side or the Slider to spot subtle spacing/typography diffs.
  5. Approve legit theme updates with Thumbs Up (or use ⋯ → Review actions to approve the whole story/batch). Reject regressions with Thumbs Down and fix.

Pro tip: Use the tab ⋯ → Configuration to confirm you’re validating the right browser matrix and server URL. See more options in the docs.
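
As an illustration of what that browser matrix covers, the configuration shape might look something like the sketch below (browser names, viewport values, and the concurrency figure are examples only; consult the Eyes Storybook docs for the actual option list):

```typescript
// Example shape of a browser/viewport matrix for visual tests.
// All values here are illustrative -- check the official docs
// for the supported browser names and configuration keys.
const config = {
  concurrency: 10,
  browser: [
    { width: 1280, height: 720, name: "chrome" },
    { width: 1280, height: 720, name: "firefox" },
    { width: 1280, height: 720, name: "safari" },
    { width: 390, height: 844, name: "chrome" }, // a mobile-sized viewport
  ],
};
```

Each entry in the matrix becomes a rendered target for every story you run, so keeping the list intentional directly controls how many results you review.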

Use Case: Fix Fast During Local Dev

Working on a feature branch? Validate your component in Storybook before you commit.

How to do it in Storybook

  1. Open your feature’s stories, then hit Run in the Applitools tab for the component you’re touching.
  2. Watch statuses update inline; click the status buttons to filter to Unresolved so you only look at what changed.
  3. Click into any row to open compare tools: Diff Image, Actual Image, Expected Image, Side‑by‑Side, or Slider.
  4. If the change is intended, Thumbs Up to approve; otherwise Thumbs Down to flag and keep iterating.
  5. When you’re happy locally, push your branch. You can scale the same setup in CI using your existing Storybook build/preview URL.

Heads‑up: To view baselines or approve/reject, sign in to your Applitools account in the same browser that’s running Storybook (you’ll be prompted if not).

Use Case: Ship Multi‑Browser Confidence

One click, many targets. Validate layout and responsive behavior across browsers and viewports – early.

How to do it in Storybook

  1. In ⋯ → Configuration, verify your browsers/devices list (Chrome, Firefox, Safari, Edge; add viewports you care about).
  2. Hit Run for representative stories (states, theming, interactive). Results come back grouped by each browser/device so differences are obvious.
  3. Filter the sidebar by Unresolved and scan. Use Side‑by‑Side or Slider to compare layout at different sizes.
  4. Approve good changes in bulk (⋯ → Review actions) to keep maintenance low.
  5. For broader coverage, run the same setup in CI and expand the matrix.

Why Visual AI > Pixel Diffs for Storybook

Pixel-to-pixel tools are fragile with dynamic content and minor rendering differences. Applitools Visual AI mimics human vision to highlight only meaningful UI changes (structure, layout, content) while ignoring the noise. You can still dial sensitivity up or down with match levels whenever needed. Less flake, more signal.

Try AI-Powered Visual Testing in Storybook Today

Run your first component tests in minutes, review diffs right in Storybook, and expand coverage with Ultrafast Grid – without slowing delivery.

Frequently Asked Questions

What does the Applitools Storybook Addon do?

It runs Applitools visual tests from inside Storybook. You can trigger tests per story or component, then review results and diffs inline with automatic grouping that mirrors your Storybook tree.

Do I need to write tests with the Applitools Storybook Addon?

With the Applitools Storybook Addon, your existing stories become the tests.

How is the Applitools Storybook Addon different from Chromatic visual tests?

Applitools’ Visual AI detects significant visual differences instead of relying only on pixel-to-pixel comparisons. This means you see fewer false positives and spend less time on maintenance.

Applitools also lets you auto-maintain hundreds of tests at once (when you do need to perform test maintenance), run them across multiple browsers and devices instantly, and manage everything in the same platform that’s also running your Playwright and Cypress end-to-end test flows. See our Applitools vs. Chromatic comparison page for a deeper breakdown.

What about performance and CI stability?

Validate locally in Storybook to prevent CI failures. When you’re ready, run the same tests in CI and render broadly with Ultrafast Grid – fast and consistent.

Do I need an Applitools account to use the Storybook Addon?

Yes. You’ll need an active Applitools Eyes account and an API key to use the Applitools Storybook Addon.

Validate Your Figma Designs Before Code Ships with the Applitools Eyes Plugin
https://app14743.cloudwayssites.com/blog/figma-design-testing-applitools-plugin/
Mon, 13 Oct 2025

Use the Applitools Eyes Figma Plugin to test and compare designs against your live app. Catch visual changes early to confirm UI accuracy.

[Image: Applitools Eyes Figma plugin on top of a blurry Figma frame]

Even the best design systems can fall short when a layout moves from Figma to code. Fonts shift, buttons resize, and colors look a little off. These small issues result in visual drift and long review cycles between design, development, and QA.

Figma design testing with Applitools Eyes closes that gap. Export Figma frames directly to Eyes to compare what you designed with what you built using the same visual testing tools your QA teams already trust.

Design-to-Code Testing in One Place

The plugin lets you send Figma frames, including individual components, pages, or entire prototypes, straight into Applitools Eyes. Each exported frame becomes a visual baseline, the same kind used in automated tests.

Developers can run their regular visual tests against these baselines to confirm that what they’ve built matches the approved design. Meanwhile, designers can export each new version of a design to see what changed between iterations. Everyone reviews results in the same Eyes dashboard, where visual differences appear side by side.

This shared view reduces guesswork and keeps teams aligned around what “correct” actually looks like.

How to Use the Applitools Eyes Figma Plugin

Getting started takes just a couple of minutes.

1. Install the Plugin

Open the plugin from the Figma Store, or open the Figma desktop app and select Plugins → Manage Plugins → Search “Applitools Eyes” → Install.

2. Connect Your Applitools Account

Launch the plugin and enter your Applitools API key and server URL (default: https://eyes.applitools.com). These settings are saved for future use.

3. Select Figma Frames to Export

You can export a single frame, multiple frames, or a full design. The plugin automatically names them based on your Figma file, or you can customize names with dynamic parameters like {figma_filename}, {figma_page}, or {figma_frame}.
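
Those dynamic parameters behave like simple placeholder substitution. As a hedged illustration (the expansion function below is ours, not the plugin's code; only the parameter names come from the plugin), a name template expands like this:

```typescript
// Expand {figma_filename}-style placeholders in a baseline name template.
// The parameter names come from the plugin docs; this function is only
// an illustration of the substitution behavior.
function expandName(template: string, params: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in params ? params[key] : match
  );
}

const name = expandName("{figma_filename} / {figma_page} / {figma_frame}", {
  figma_filename: "Checkout",
  figma_page: "Mobile",
  figma_frame: "Payment Sheet",
});
// name === "Checkout / Mobile / Payment Sheet"
```

Templating like this keeps exported baseline names consistent across files and pages, which matters once many designers are exporting to the same dashboard.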

4. Adjust Settings

Optional configurations include:

  • Match level: strict, dynamic, layout, ignore colors, exact, or none
  • Contrast level: accessibility comparison thresholds
  • Auto-accept baselines: mark first exports as approved
  • And more…

5. Export and Review

Lastly, click Export to Eyes to send your selections to Applitools. Frames appear in the Eyes dashboard under the “Figma” environment. Designers and developers can view differences directly and decide whether to accept or reject them.

[Image: Figma plugin overlaid on a screenshot of Applitools Eyes comparing a Figma frame and a visual test in Chrome]

Three Use Cases for QA Teams

1. Design-to-Implementation Validation

Once designs are uploaded, developers can link automated tests to the same baseline using the “baseline environment name” provided by the plugin. When they run their tests, Eyes compares the live UI against the design reference.

Result: Teams catch spacing, text, or layout differences before they reach production.

2. Design-to-Design Version Comparison

Designers often revisit earlier layouts or explore small variations. Exporting both versions to Eyes highlights the exact visual differences, making it easy to review and choose the preferred version.

Result: Faster review cycles and fewer overlooked design changes.

3. Shared Visual Baselines for Collaboration

Designers, developers, and QA teams can all access the same Eyes dashboard. Instead of passing screenshots or notes, they can comment on the same visual checkpoints.

Result: Clearer handoffs and fewer miscommunications between design and engineering.

Why Visual Testing from Design to Code Matters

Designs are often reviewed visually, while code is tested functionally. The Figma plugin connects these two disciplines by giving both teams a consistent, visual source of truth.

For designers, it’s a way to confirm that their layouts are faithfully implemented without manually comparing screenshots. For developers, the plugin provides a reference that removes ambiguity about spacing, colors, and typography. For QA teams, it introduces an additional layer of confidence that each release matches approved specifications.

This integration fits naturally into existing workflows: designs are exported once, developers test as usual, and visual checks happen automatically. What was once a manual review step becomes part of the team’s regular quality process.

Try Design-to-Code Testing for Yourself

The Applitools Eyes Figma Plugin brings visual testing into the design process, helping teams maintain consistency from mockup to release. It gives design and development one accurate, shared reference for how an interface should look, reducing manual review and giving everyone confidence that what ships matches what was designed.

Install the Applitools Eyes Figma Plugin and start validating your designs before code ships.

Frequently Asked Questions

What is the Applitools Eyes Figma Plugin?

The Applitools Eyes Figma Plugin lets you export frames from Figma into Applitools Eyes for visual testing. It helps teams compare their designs against live implementations or across design versions, ensuring the final product matches what was originally designed.

Why should I use the Applitools Eyes Figma Plugin?

The main reasons teams like using the plugin include:
– Detecting visual differences early in development
– Maintaining design consistency from mockup to production
– Reducing manual screenshot comparisons
– Providing a shared visual reference for design, QA, and development teams

How does Figma design testing work with Applitools?

Figma design testing with Applitools works by turning design frames into visual baselines inside the Eyes dashboard. Developers then run automated tests that capture the built UI and compare it to those baselines, highlighting any visual differences between design and implementation.

Can I compare two Figma designs using the plugin?

Yes. You can export two or more design versions to Applitools Eyes and compare them visually. The dashboard highlights differences such as layout changes, spacing updates, or color tweaks, making it easier to review design revisions before sign-off.

Do I need an Applitools account to use the Figma Plugin?

Yes. You’ll need an active Applitools Eyes account and an API key to export Figma frames to Eyes. Once connected, you can reuse your credentials for future exports.

Test Where You Build with Eyes 10.22: Visual AI for Storybook & Figma
https://app14743.cloudwayssites.com/blog/visual-testing-for-storybook-and-figma/
Thu, 09 Oct 2025

Applitools Eyes 10.22 brings Visual AI testing directly into Storybook and Figma—helping teams test where they build, align design and development, and release with greater speed and confidence.

The new Applitools Eyes 10.22 release brings Visual AI testing directly to the tools teams already use to design and develop digital experiences. This update strengthens how teams build, validate, and release with three major enhancements:

  • Scale quality without slowing down delivery using the new Storybook Addon.
  • Align development and design with the new Figma Plugin, bridging intent and implementation.
  • Improve traceability and accountability with smarter Dashboard Optimizations for QA and leadership.

Together, these updates make visual testing faster, sharper, and more connected across the product pipeline, helping teams fully embrace shift-left testing and collaborate around a shared baseline for visual quality.

Scale Quality Without Slowing Down Delivery: Storybook Addon

Test Where You Build with the Storybook Addon

The new Storybook Addon for Applitools Eyes brings Storybook visual testing directly into the development workflow. Developers can now validate UI components where they build them—no CI/CD wait times, no switching between tools.

Core Capabilities

  • Run and review visual tests directly inside your component workspace.
  • Approve or reject diffs inline and validate changes locally before merging.
  • Leverage higher concurrency for faster feedback.
  • Catch visual issues early and prevent CI failures that slow releases.

Use Case Example:
A frontend developer refactoring a shared button component runs Eyes tests directly in Storybook to verify that the update didn’t alter its appearance across the design system. By catching issues immediately, they avoid regressions on the main branch and prevent unnecessary rework later in the release cycle.

The Applitools Eyes Storybook Addon allows teams to scale quality by expanding coverage while maintaining release velocity. By embedding visual validation where developers work, organizations can expand test coverage and reduce release timelines without compromising accuracy.

See the newest features in action during the Platform Pulse webinar, coming October 22.

Align Development and Design: Figma Plugin

Bridge Design & Code with the Figma Plugin

The new Applitools Figma Plugin connects design intent with implementation through seamless Figma design testing and design-to-code comparison. Designers and developers can now validate visual consistency earlier—and with far less manual effort.

What You Can Do

  • Export Figma frames directly into Eyes for automated design-to-code validation.
  • Compare design-to-design versions to manage iteration cycles.
  • Maintain shared baselines so designers and developers work from the same source of truth.
  • Designers can validate implementations without writing code, while developers see exactly what “done” looks like.

Use Case Example:
A designer exports a component from Figma and compares it with the implemented version in Eyes to confirm that spacing, colors, and typography match the original design. Any differences appear immediately, allowing the designer and developer to resolve them early instead of during QA or after release.

This plugin bridges design and development, reducing handoff friction and review cycles. Teams gain a single source of truth for UI quality and deliver products that match design intent from concept to code.

Improve Traceability and Accountability: Dashboard Optimizations

Context-Rich Results with Dashboard Optimizations

Eyes 10.22 introduces dashboard enhancements that bring clarity and accountability to every test result. These updates help QA engineers, SDETs, and technical leaders focus on what matters most.

Key Highlights

  • Commit SHA and branch info appear directly in batch details for instant traceability.
  • “Required Attention First” sorting automatically surfaces unresolved and failed tests.
  • Auto-grouping mirrors your Storybook structure, making results easier to navigate.
  • Streamlined triage reduces noise from passed tests and accelerates review.

Use Case Example:
A QA engineer reviewing test results spots a visual regression and uses the Eyes dashboard to trace it back to the specific commit that introduced the change. With commit and branch details in context, the engineer can alert the right developer and close the feedback loop quickly.

These context-rich results strengthen accountability and compliance across teams. Organizations gain the visibility they need for auditability—particularly valuable for industries with governance or regulatory requirements.

Explore all the updates and improvements in the Eyes 10.22 Release Notes.

Broader Strategic Impact

Eyes 10.22 delivers improvements that go beyond new capabilities. Together, these updates create a foundation for stronger collaboration and faster releases across the organization:

  • Align Development & Design Around a Single Source of Truth – With the Storybook addon and Figma plugin, teams collaborate around shared visual baselines.
  • Scale Quality Without Slowing Delivery – Native testing inside development tools ensures visual coverage grows with velocity.
  • Improve Traceability & Accountability – Git-linked context and prioritized review queues make it easier to understand, resolve, and communicate changes.
  • Drive Shift-Left Adoption Across the Organization – With validation embedded where work happens, teams can catch issues earlier and release with confidence.

Quality That Scales: Visual AI for Every Team

With Applitools Eyes 10.22, teams can test where they build—bringing Visual AI testing directly into their design and development workflows. By embedding validation inside Storybook, Figma, and the Eyes dashboard, organizations can scale software quality without adding maintenance or slowing delivery. Developers, designers, and QA now share one intelligent workflow powered by Visual AI—delivering expanded test coverage, faster feedback, and less maintenance across every stage of the testing lifecycle. Learn more about Applitools Eyes.

The post Test Where You Build with Eyes 10.22: Visual AI for Storybook & Figma appeared first on AI-Powered End-to-End Testing | Applitools.

Applitools Autonomous and Eyes: New AI Features, Better Execution, and What’s Next https://app14743.cloudwayssites.com/blog/applitools-autonomous-eyes-ai-testing-updates/ Thu, 07 Aug 2025 12:27:00 +0000 https://app14743.cloudwayssites.com/?p=61068 The newest updates to Applitools Autonomous and Eyes introduce AI-assisted test creation, built-in API and data support, and previews of upcoming MCP and mobile features.

The post Applitools Autonomous and Eyes: New AI Features, Better Execution, and What’s Next appeared first on AI-Powered End-to-End Testing | Applitools.

Screenshot showing variables created by random test data generation

Test automation is essential but often time-consuming. Writing and maintaining tests, generating reliable data, switching tools for API calls, and keeping everything aligned across environments can slow down even the best teams.

The latest updates to Applitools Autonomous, part of the broader Applitools Intelligent Testing Platform, introduce features that significantly reduce this overhead. From natural language test authoring to integrated API testing and deterministic execution, these additions help teams move faster with fewer manual steps. Alongside ongoing improvements to Applitools Eyes, the platform continues to evolve to support modern testing workflows at scale.

New Applitools Autonomous Highlights at a Glance:

  • Natural language test creation powered by LLMs
  • On-the-fly test data generation
  • Enhanced API testing with visual builder
  • Deterministic test execution (no LLMs at runtime)
  • Upcoming support for mobile apps, IDE integration, and Storybook workflows

Natural Language Test Creation, Powered by LLMs

Instead of writing test steps manually or wrestling with locators, you can now describe your intent in plain English. Applitools Autonomous converts your input into executable test steps.

Autonomous interprets the instruction and adapts to your application’s context. Tests can be created by typing, recording interactions, or letting the system generate steps automatically. This approach makes test authoring more accessible, easier to maintain, and more readable across teams.

On-the-Fly Test Data Generation in Context

Need a specific persona, value range, or edge case? Autonomous now includes built-in test data generation. No external tools required.

Just describe what you need (“a French fashion designer” or “a prime number over 1,000”), and the platform generates valid, realistic data. Datasets are prepared ahead of time, so test execution remains fast and predictable.

Enhanced API Testing in the Same Flow

You can now send and validate API requests directly within your test flow, using a Postman-style interface.

Author steps in several ways:

  • Describe them in plain English
  • Use raw HTTP or cURL
  • Use the interactive UI builder

Once executed, responses can be inspected, variables extracted, and values asserted. UI, API, and visual checks all operate in a single environment—no tool switching needed.

Deterministic Execution Model for Reliable AI-Powered Tests

A standout feature of this release is the deterministic execution engine behind every test.

“You don’t need to be a prompt engineer or even a developer to scale automation. But when your test runs, it executes with the speed and reliability of code.”
Adam Carmi, CTO and Co-founder of Applitools

Unlike some platforms that rely on live LLMs during execution—an approach that can be slow or unpredictable—Applitools separates test creation from test execution.

  • LLMs assist during authoring and data generation.
  • Test runs are powered by a proprietary deterministic model that ensures speed, stability, and consistent behavior.

This offers the flexibility of AI and the dependability of code, without trade-offs.

What’s Coming Next

Applitools continues to invest in both Autonomous and Eyes, with upcoming features focused on deepening cross-functional collaboration, improving performance, and expanding platform coverage.

For Applitools Autonomous:

  • Native mobile app testing: Author and execute tests across devices and operating systems.
  • Autonomous MCP server: Translate high-level test cases or BDD scenarios into full test flows.

For Applitools Eyes:

  • Eyes MCP server: Move Visual AI directly into your workflow. Maintain, review, and run tests directly from your preferred IDE.
  • Visual testing in Storybook: Approve changes directly where components are built.
  • Performance improvements for component tests: Shorter pipelines and faster feedback loops.
  • Figma collaboration enhancements: Sync designs and visual testing for consistent results.

Where Things Stand Now

Whether you’re building automation for the first time or looking to reduce the overhead of test maintenance, this release meets teams where they are. With natural language authoring, integrated data and API support, and a deterministic execution engine, Applitools helps teams reduce manual effort and work more confidently.

If you’re already using Applitools, now’s a great time to explore the latest features. If you’re just getting started, we invite you to see what’s possible with a free trial for you and your team.


Quick Answers

What new capabilities were added in the latest Applitools updates?

Applitools expanded AI-assisted authoring and integrated API/data support in Applitools Autonomous while keeping fast, deterministic execution for stability.

How does Applitools keep AI authoring reliable at run time?

By separating natural-language test authoring from deterministic execution, test runs remain fast and consistent in Applitools Autonomous (https://app14743.cloudwayssites.com/platform/autonomous/) even as teams scale.

How do these updates reduce flakiness and speed feedback loops?

Visual validation focuses on what users actually see and runs in parallel across browsers/devices with the Ultrafast Grid (https://app14743.cloudwayssites.com/ultrafast-grid), so teams get fewer false positives and faster results with Visual AI (https://app14743.cloudwayssites.com/visual-ai).

The post Applitools Autonomous and Eyes: New AI Features, Better Execution, and What’s Next appeared first on AI-Powered End-to-End Testing | Applitools.

Handling Animations and Loading Artifacts in Visual Testing https://app14743.cloudwayssites.com/blog/handling-animations-and-loading-artifacts-in-visual-testing/ Mon, 21 Jul 2025 18:12:29 +0000 https://app14743.cloudwayssites.com/?p=61002 Master dynamic content visual testing with our hands-on tutorial. Learn to capture rich UI experiences effectively.

The post Handling Animations and Loading Artifacts in Visual Testing appeared first on AI-Powered End-to-End Testing | Applitools.

Stylized screenshot with half greyed out and other half colorized to highlight dynamic content

Have you ever encountered a situation where you try to take a screenshot, but instead of the beautifully well-crafted UI, all you’ve got is an image of a spinner/skeleton/loading screen? Handling animations and loading artifacts in visual testing can be daunting.

Don’t worry – it can happen to anyone, and we’re not here to judge you 😉

One of the SDK engineers here at Applitools, Noam, breaks this down into a hands-on tutorial, hoping it will help you get a better understanding of the industry’s best practices around visual testing of rich and dynamic UI experiences.

Let’s dive right in!

Framework Native Solutions

Most frameworks already provide mechanisms for handling animations and loading artifacts. Keeping things simple is often the best way to achieve code stability and maintainability, so using your framework’s built-in tools is usually the best approach.

For example:

Playwright JS

// Playwright: wait for spinner to be removed
await page.waitForSelector('.spinner', { state: 'detached' });
await eyes.check()

Cypress

// Cypress: wait for spinner to not exist
cy.get('.spinner').should('not.exist');
cy.eyesCheckWindow();

Selenium JS

// Selenium: wait for spinner to be invisible
const spinnerElements = await driver.findElements(By.css('.spinner'));
if (spinnerElements.length) {
  await driver.wait(until.elementIsNotVisible(spinnerElements[0]), 5000);
}
await eyes.check()

A Common Pitfall

Even if the UI appears visually unchanged, frontend frameworks like React, Vue, and Angular may re-render elements under the hood. This can lead to stale element references, especially when capturing regions right after a DOM change.

Consider the following example:

cy.get('.main').then($el => {
  cy.get('.spinner').should('not.exist'); // spinner disappears after main was located

  cy.eyesCheckWindow({
    tag: 'main',
    target: 'region',
    element: $el, // stale reference if .main was replaced
  });
});
  1. First, Cypress locates .main
  2. Then, Cypress waits for the spinner to disappear
  3. The check fails with a stale element reference if the main element is replaced by a new one between those steps, even if the new element has the exact same properties

How to avoid that?

When possible (e.g., in Playwright), prefer locators over selectors. If locators aren’t available, pass a selector string (element: '.main') rather than a saved DOM reference.

Videos, CSS Animations, GIFs

There are many techniques to eliminate other types of dynamic behavior in web pages. Playwright, for example, provides a Clock API that allows pausing JavaScript time-related events (including JS-driven animations). It’s also possible to install custom CSS snippets to pause and reset CSS-related animations. Other specialized JS snippets are required for resetting GIFs, videos, and so on – you get the idea.

This never-ending cat-and-mouse game can be avoided by using the Applitools Ultrafast Grid (UFG). Instead of rendering web pages on locally executed browsers, the UFG team maintains specialized logic and fine-tuned rendering commands that ensure a stable and consistent rendering experience. While UFG offers more than just rendering stability, it’s worth noting that classic screenshots can still achieve stable results. UFG just makes it easier!
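For those classic screenshots, one common stabilization trick is injecting a stylesheet that freezes CSS-driven motion right before capture. The CSS below is a widely used community pattern, not an official Applitools snippet; addStyleTag is Playwright’s API for injecting it:

```javascript
// Generic CSS that freezes CSS-driven motion before a screenshot is taken.
// A common community pattern, not an official Applitools snippet.
const freezeAnimationsCSS = `
  *, *::before, *::after {
    animation-play-state: paused !important;
    transition: none !important;
    caret-color: transparent !important; /* hide blinking text cursors */
  }
`;

// In Playwright it could be injected before checking:
//   await page.addStyleTag({ content: freezeAnimationsCSS });
//   await eyes.check();
```

Because the rules apply to every element with !important, they override component-level animation styles without touching application code.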

Algo-Based Solutions

If you intentionally want to capture dynamic content (e.g., animations, changes), a smarter strategy is to embrace that variability and use smart matching algorithms to compare just what you need, like those found in Applitools Eyes.

Any match level can be used for the entire screenshot or specific regions of the screen. Read more in the Match Level Best Practices tutorial. For example, algorithms like the Layout match level can drastically improve your experience with localization testing.
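For example, with an SDK whose applitools.config.js accepts a global match level (eyes-storybook does), the default can be set once for the whole project. A minimal sketch:

```javascript
// applitools.config.js — minimal sketch, assuming an SDK whose config file
// accepts a global matchLevel (e.g., eyes-storybook).
// 'Layout' validates page structure while ignoring content/text differences.
module.exports = {
  matchLevel: 'Layout',
};
```

Per-check and per-region overrides remain available when only part of a page is dynamic.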

The waitBeforeCapture Setting

Performing wait operations can become more complicated when:

  1. Testing with no-code visual testing SDKs (e.g., eyes-storybook)
  2. Testing with advanced Eyes features like lazyLoading and layoutBreakpoints

The waitBeforeCapture setting was invented for these types of use cases (and a few others).

This setting can receive three types of arguments:

  1. Milliseconds – the simplest approach. While it’s not always the most innovative or sophisticated pattern, in many cases, it “does the trick.” In general, waiting for explicit timeouts during tests is not recommended. However, when compared to clock manipulations and code injections, sometimes the simplicity and stability is worth the longer run-time.
  2. Selector – when we’re waiting for something to appear, most SDKs support passing a selector, and Applitools Eyes will automatically wait for an element that matches this selector to appear in the web page.
  3. Custom function – see the code examples below

// eyes-storybook
waitBeforeCapture: async () => {
  while (document.querySelector('.spinner')) {
    await new Promise((resolve) => setTimeout(resolve, 100));
  }
  return true;
},

// eyes-playwright
await eyes.check({
  name: 'my-step',
  async waitBeforeCapture() {
    await page.locator('.spinner').waitFor({state: 'hidden'})
  },
})

The waitBeforeCapture setting can be defined in your applitools.config file, in your eyes.check settings, using the Target settings builder, and in other similar places. Please refer to the documentation of the specific SDK you’re using for concrete examples.
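The custom-function form above is essentially a poll-until-true loop, so the same idea can be extracted into a reusable helper. A sketch (waitUntil is a hypothetical name, not an SDK export):

```javascript
// Poll a predicate until it returns true or a timeout elapses.
// Hypothetical helper for custom waitBeforeCapture functions; not part of any SDK.
async function waitUntil(predicate, { timeout = 10000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await predicate()) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`waitUntil: condition not met within ${timeout}ms`);
}

// Usage inside a custom waitBeforeCapture, e.g.:
//   waitBeforeCapture: () => waitUntil(() => !document.querySelector('.spinner'))
```

The timeout keeps a stuck spinner from hanging the test forever, which a bare while loop would do.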

Storybook Play Functions

A nice eyes-storybook-specific trick to achieve a desired rendering state would be a Storybook Play Function.

Applitools Eyes will run your play functions and wait for them to finish before capturing anything on the screen. Use Play Functions to navigate the story to an interesting state and wait for the story to be stable inside the play function to help Eyes understand what the best time is to capture the screenshot.
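For instance, a play function can simply poll until the loading state inside the story’s canvas settles. A CSF3-style sketch (the .spinner selector and the Loaded story name are illustrative):

```javascript
// A CSF3-style story object (in a real Button.stories.js this would be a
// named export). eyes-storybook runs play() and waits for it to resolve
// before capturing the story.
const Loaded = {
  play: async ({ canvasElement }) => {
    // Poll until the spinner inside this story's canvas is gone,
    // so the screenshot shows the settled UI.
    while (canvasElement.querySelector('.spinner')) {
      await new Promise((resolve) => setTimeout(resolve, 100));
    }
  },
};
```

Scoping the query to canvasElement (rather than document) keeps the wait tied to this specific story.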

Applitools is Here to Help

We hope you’ve found this article interesting, and maybe it solved some of the most common visual testing issues you may have encountered. Go ahead and try these examples out for free with Applitools Eyes.

However, if something isn’t clear or if you’d like advice regarding the best way to incorporate visual testing into your organization, please don’t hesitate to reach out to our experts! Testing is our passion, and we’re here to help.

Quick Answers

What are “loading artifacts” in visual testing, and how do I avoid flaky tests?

Loading artifacts are transient UI elements (spinners, skeleton cards, GIFs) that appear while data is fetched. If a screenshot is captured before they disappear, your baseline image won’t match future renders, causing false failures (flaky tests).

Why do I get “stale element reference” errors after a React/Vue/Angular re‑render?

Modern frameworks often replace DOM nodes even when the UI looks identical. If you save a DOM reference (e.g., cy.get('.main')) before waiting for the spinner to vanish, that reference may point to a removed element, causing stale errors. Capture by selector or locator, not by saved element handles, to avoid this.

What is the waitBeforeCapture setting in Applitools, and what values can it accept?

waitBeforeCapture delays the screenshot until the page has settled. It accepts:

  • Milliseconds (e.g., 500)
  • A CSS selector – Eyes waits for a matching element to appear
  • A custom async function for complex logic (e.g., loop until .spinner is hidden)

Can I use Storybook Play Functions to control the render state before visual testing?

Yes. In eyes‑storybook, Applitools runs each story’s Play Function and waits for it to finish—perfect for clicking buttons, filling forms, or pausing animations before the snapshot.


Is it better to fast-forward the JavaScript clock or add explicit waits for CSS animations?

Fast-forwarding the JS clock (e.g., page.clock.fastForward(1000) in Playwright) is usually more reliable and efficient than using hard timeouts. It advances timers without waiting in real time, making tests faster. However, it won’t affect CSS-driven animations since those still require CSS overrides to pause or skip transitions. For full stability, combine clock control with style injections or use Applitools Ultrafast Grid, which auto-handles CSS animations under the hood.

The post Handling Animations and Loading Artifacts in Visual Testing appeared first on AI-Powered End-to-End Testing | Applitools.

Applitools Autonomous 2.2: AI-Driven Testing That Thinks Like You https://app14743.cloudwayssites.com/blog/introducing-autonomous-2-2/ Wed, 02 Jul 2025 12:56:45 +0000 https://app14743.cloudwayssites.com/?p=60836 Applitools Autonomous 2.2 introduces a smarter way to build and maintain automated tests—no code, no selectors, just natural language. With LLM test and data generation, visual comparisons across environments, and more, Autonomous 2.2 helps testers move faster with less friction and more confidence.

The post Applitools Autonomous 2.2: AI-Driven Testing That Thinks Like You appeared first on AI-Powered End-to-End Testing | Applitools.

LLM Generated Test Steps to fill out a form as an obscure Tolkien character in Applitools Autonomous

Building and maintaining automated tests has always been a challenge, especially when it comes to speed, coverage, and collaboration across technical and non-technical roles.

The release of Autonomous 2.2 helps teams reduce the time and effort needed to build and maintain reliable end-to-end tests. It combines LLM-generated test steps, automatic test data generation, built-in API validation, and visual comparisons across environments so that teams can find bugs earlier and release with more confidence.

API Step Builder: Visual API Testing, No Context Switching

Applitools Autonomous has always supported API testing, but with Autonomous 2.2, creating API tests is easier and more accessible than ever.

The new API Step Builder lets you build, run, debug, and update API calls with the UI, directly in your workflow—no code or external tools required.

  • Paste in raw cURL or HTTP
  • Translate into readable test steps
  • Add headers, cookies, auth, body, and parameters visually
  • Chain responses into UI steps via variables

Supported Features:

  • Full HTTP method support: GET, POST, PUT, PATCH, DELETE, etc.
  • Built-in request fields for headers, cookies, authorization, body, and URL params
  • Dynamic variables: e.g., use {response.body.token} or {response.body.name} as variables in downstream steps
  • Support for raw cURL or HTTP input with automatic translation to English-readable steps

Whether you’re preparing state, validating backend responses, or asserting API behavior, you can now do it all in one place—with a clean, intuitive UI that works just like the rest of Autonomous.
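Conceptually, chaining works like template substitution: a token such as {response.body.token} is resolved against the stored API response before the next step runs. The helper below merely illustrates that idea; it is not Autonomous internals:

```javascript
// Resolve {response.body.token}-style placeholders against a context object.
// Illustrative only — Autonomous performs this substitution internally.
function resolvePlaceholders(text, context) {
  return text.replace(/\{([\w.]+)\}/g, (match, path) => {
    const value = path
      .split('.')
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context);
    // Leave unknown placeholders untouched rather than injecting "undefined".
    return value === undefined ? match : String(value);
  });
}

// Example: a UI step referencing an earlier API response.
const context = { response: { body: { token: 'abc123', name: 'Ada' } } };
resolvePlaceholders('Log in as {response.body.name} with token {response.body.token}', context);
// → 'Log in as Ada with token abc123'
```

Leaving unresolved tokens intact mirrors the “value prompts” behavior described for parameterized tests: missing data is surfaced instead of silently replaced.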

LLM Generated Test Steps: From Business Logic to Automation in Seconds

Write test steps the way you think—in plain English. With LLM Generated Test Steps, you can simply describe your business logic (e.g., “Select Ohio from the state dropdown menu”) and Autonomous will convert it into executable steps.

Let’s say you’re testing a job application page. Instead of clicking through each form field manually, you simply type:

Submit the form as an obscure Tolkien character

Autonomous analyzes the rendered page in real-time, understands the structure and semantics, and breaks down that one line into multiple actionable steps, such as:

  • Typing the name and email address
  • Selecting an appropriate option from dropdown menus
  • Clicking the “Submit” button

The result? A complete, readable, and editable test created in seconds—no coding, no selectors, no extra UI to learn.

Perfect for:

  • Speeding up test creation during sprints
  • Empowering manual testers to create automated tests
  • Eliminating writer’s block in test authoring

🎥 Join our Platform Pulse webinar on July 24 to see Autonomous 2.2 in action

Test Data Generation: Realistic Input with Zero Effort

Test coverage is only as good as the data behind it. Autonomous 2.2 introduces natural language–based test data generation that instantly produces realistic data at runtime, helping with edge case coverage and state variability.

Just say what you need:

Submit the form using random data for 90’s TV dads

Autonomous parses your request, uses LLMs to create diverse, high-quality data, and injects it directly into your test. Each run can generate fresh values—so no two executions are the same.

Cross-Environment Visual Comparisons: One Baseline, Many Environments

Ensuring parity across environments—staging vs. prod, dev vs. QA—or locales is critical. Parameterized visual baselines allow test engineers to define and run tests using variable data while still maintaining deterministic baseline comparison.

For example, you can run the same test in different plans—one with {env}=integration, another with {env}=staging—and compare visual differences across each environment without changing a single line of test logic.

  • Parameters and datasets handled automatically
  • Custom flows and URL List tests supported
  • Visual baselines set per unique parameter set
  • Abort prevention and value prompts to ensure test validity
Screenshot of Autonomous results comparing a QA environment to a staging environment

Parameter handling is enforced at both test and plan levels—tests abort cleanly if required values are missing, ensuring reliable execution outcomes.

Tunneling Enhancements: Instant Feedback, Zero Guesswork

Tunneling just got easier and more reliable. With tunneling enhancements, Applitools Autonomous now includes improved diagnostics, real-time tunnel status, actionable error codes, and visibility for team admins—not just account owners.

Users now get real-time diagnostics:

  • Is the tunnel connected?
  • Which hosts is it routing?
  • Are there errors, and what do they mean?
Screenshot from Autonomous showing tunnels and relevant information

Admins now get access to error codes, logs, host mapping, and verification checks—all without reaching out to support.

The Bottom Line

Applitools Autonomous 2.2 turns plain English into production-grade automation. Whether you’re testing UI, APIs, internal systems, or staging vs. production, it removes friction that slows QA teams down.

Explore the new features in the product documentation or dive into Autonomous with a free trial.


Quick Answers

What is Applitools Autonomous?

Applitools Autonomous is an AI-powered test automation platform that enables testers to create, run, and maintain end-to-end tests using plain English—no code required. It uses LLMs (large language models) to generate tests and test data automatically, removing the need for coding, locators, or frameworks.

How does Autonomous handle visual differences across environments?

With Cross-Environment Visual Comparison, Applitools Autonomous 2.2 understands what’s important visually. It automatically adjusts for environment-specific differences (like fonts or layouts), so you only get alerted to real UI regressions.

Do I need to configure test data schemas manually?

No. Applitools Autonomous understands the test context and structure of your application and test flow. You don’t need to define schemas, write mocks, or manage fixtures manually.

Can I use Applitools Autonomous for regression testing?

Yes. Autonomous is ideal for automated regression testing, as it can quickly generate test cases that cover expected behavior and catch visual or functional regressions—without manual upkeep.

Can I validate API response data using Applitools Autonomous?

Yes. You can assert specific response codes, headers, or JSON payload content. For example, “Verify that the response includes a user with ID 1234” will create the necessary assertions automatically.

The post Applitools Autonomous 2.2: AI-Driven Testing That Thinks Like You appeared first on AI-Powered End-to-End Testing | Applitools.

Creating Automated Tests with AI: How to Use Copilot, Playwright, and Applitools Autonomous https://app14743.cloudwayssites.com/blog/creating-automated-tests-with-ai/ Tue, 06 May 2025 19:14:09 +0000 https://app14743.cloudwayssites.com/?p=60297 Not all AI testing is the same. This post breaks down the differences between assisted, augmented, and autonomous models—so you can scale automation with the right tools, at the right time.

The post Creating Automated Tests with AI: How to Use Copilot, Playwright, and Applitools Autonomous appeared first on AI-Powered End-to-End Testing | Applitools.

AI graphic with logos from Playwright, Autonomous, Copilot, and ChatGPT

The excuse “we don’t have time to write tests” doesn’t hold up anymore. AI has reshaped the way teams approach software testing, making it faster, smarter, and more accessible than ever. Tools like GitHub Copilot, ChatGPT, and Applitools Autonomous can generate reliable automated tests without slowing down your development flow.

If you’ve ever struggled with limited testing resources or hesitated to adopt AI-enhanced workflows, now is the perfect time to embrace AI-powered testing.

How GitHub Copilot Helps Accelerate Unit Test Creation

GitHub Copilot can dramatically speed up unit test creation. It can generate unit tests directly in your editor with a single prompt. For example, typing “create unit tests for Hello.tsx” in VS Code can instantly produce functional test cases using React Testing Library.

While Copilot’s first drafts were impressive—correctly using accessible locators and matching key UI elements—it’s important to note that AI-generated tests often require slight refinements.

Expecting a one-shot from AI is probably unrealistic—but in my experience, it gets you pretty darn close.

Copilot typically picks up on your dependencies, infers structure, and outputs readable, executable tests. If the results aren’t perfect, for instance, using fragile selectors or inconsistent naming, you can quickly iterate. Adjusting your prompt often resolves these issues. In many cases, reprompting is faster than manual edits.

Accessible locators and consistent naming can be enforced through clearer prompting or by storing preferences in a centralized configuration file.

The key? Good prompts make a big difference. Prompting Copilot to use best practices, like favoring accessible selectors, resulted in much cleaner and more reliable output.

Taking Testing Further with Playwright and Copilot

Beyond unit tests, AI can support end-to-end testing for full user flows. Using Copilot with a framework like Playwright, you can prompt test generation by simply referencing a live URL and desired interactions.

For example, pointing Copilot to a public demo app like TodoMVC and requesting end-to-end tests will often result in tests for adding, completing, deleting, and filtering tasks—all without writing code manually.

To further improve coverage, ChatGPT can help by generating a requirements document for the app. This doc acts as a guide to ensure tests align with expected behaviors.

The better the input we provide the LLM, the better output we’re likely to get. A requirements doc is a really important piece of input.

Once the requirements are defined, you can direct the AI to use them when generating tests, producing more complete and targeted coverage. Just remember to include your preferences for things like locator strategy and naming conventions in your prompt or project config.

The message is clear: Combining ChatGPT and Copilot creates a powerful AI-assisted workflow for test generation. This approach cuts down on manual scripting while improving test depth.

Boosting End-to-End Testing with Applitools Autonomous

Applitools Autonomous handles creating automated tests with AI differently. Instead of writing code or interacting with the DOM, you provide a URL, and the system automatically scans the app. It generates visual and functional tests and organizes results into a centralized dashboard.

Highlights of what Autonomous can do include:

  • Crawl an entire application from just a URL and automatically generate visual and functional tests
  • Use plain English commands to create, edit, and validate tests (no coding needed)
  • Validate UI, behavior, and API responses in one workflow
  • Capture dynamic data like confirmation IDs, verify API responses, and support parameterization without code

Unlike traditional recording tools, Autonomous intelligently builds stable, scalable tests while seamlessly validating across browsers. It even flags hidden 404 errors—showcasing the tool’s ability to catch issues early.

Another key point is that anyone, regardless of technical background, can create sophisticated tests using natural language. At the same time, it maintains the depth and flexibility senior developers demand.

Key Takeaways for Modern Testing Workflows

Today’s AI software testing tools are designed for real-world developer needs:

  • Copilot accelerates unit and E2E test creation with natural language prompts.
  • ChatGPT fills documentation gaps by drafting requirements for better test coverage.
  • Applitools Autonomous redefines E2E testing, combining visual validation and functional flows—from UI to visual to API—and plain-English test authoring. It integrates these into a single, no-install SaaS platform.

AI doesn’t replace the tester’s critical thinking — it augments your workflow, helping you focus on improving test quality, not just checking boxes.

In Summary

The landscape of automated testing is still evolving. With tools like Copilot, ChatGPT, and Applitools Autonomous, building and maintaining high-quality automated tests no longer has to be a slow, painful process. Whether you’re a front-end engineer, QA lead, or tech manager, adopting AI-powered workflows will free up your team’s time. It will increase your confidence in releases and bring better quality to every sprint.

🎥 Want to learn more about how to create automated tests with AI? Watch the full session on demand to see in-depth demos.

Quick Answers

Can AI tools write reliable end-to-end tests?

Absolutely. AI-powered tools make end-to-end (E2E) testing faster and more comprehensive:

  • GitHub Copilot can generate E2E tests in Playwright by simply referencing a live app URL and describing the intended user interactions—like adding or deleting tasks in a to-do app.
  • ChatGPT strengthens the process by drafting a requirements document based on app functionality, which guides test creation and ensures behavior-driven coverage.
  • Applitools Autonomous takes it a step further by auto-generating both visual and functional E2E tests from a single URL—no code required. It scans the application, creates tests based on real user flows, and validates UI and API responses. The platform also supports natural language test commands, making advanced E2E testing accessible even to non-developers.

Together, these tools create a robust, AI-enhanced workflow that minimizes manual scripting and maximizes test depth, speed, and reliability.

What are the benefits of combining Copilot, ChatGPT, and Applitools Autonomous?

Combining these tools creates a powerful AI testing stack:

  • Copilot quickly builds unit and E2E tests.
  • ChatGPT generates requirements for better planning.
  • Applitools Autonomous adds full-scale, no-code testing with visual validation.

Are AI-generated tests accurate and ready for production?

AI-generated tests are often surprisingly close to production-ready. However, minor refinements—such as improving selector stability or renaming variables—are typically needed. Clear prompts and centralized configuration files help standardize and improve output.

How does Applitools Autonomous automate test creation without coding?

Applitools Autonomous auto-generates functional and visual tests by crawling your app from a provided URL. It supports natural language commands, verifies UI and API responses, and doesn’t require code, making it ideal for both technical and non-technical users. Teams can try it out for free right here.

How can AI-powered testing tools fit into agile development workflows?

AI-powered tools integrate smoothly into agile workflows by:

  • Speeding up test creation.
  • Reducing technical debt from manual scripting.
  • Enabling continuous validation during CI/CD.
  • Freeing up developers to focus on improving coverage and quality rather than writing repetitive tests.

The post Creating Automated Tests with AI: How to Use Copilot, Playwright, and Applitools Autonomous appeared first on AI-Powered End-to-End Testing | Applitools.

Applitools Named AI-Powered Test Automation Platform of the Year by CIO Review https://app14743.cloudwayssites.com/blog/applitools-ai-powered-test-automation-platform-of-year/ Mon, 07 Apr 2025 11:53:18 +0000 https://app14743.cloudwayssites.com/?p=60138 Applitools was recognized as the AI-Powered Test Automation Platform of the Year 2025 by CIO Review, highlighting innovation in intelligent, autonomous testing.

The post Applitools Named AI-Powered Test Automation Platform of the Year by CIO Review appeared first on AI-Powered End-to-End Testing | Applitools.


We’re proud to share that Applitools has been named AI-Powered Test Automation Platform of the Year 2025 by CIO Review.

Selected by a panel of C-level executives, industry thought leaders, and the editorial team at CIO Review, this recognition highlights the meaningful progress we’re making toward truly intelligent, AI-driven testing.

“We see this as validation of our vision—to move testing beyond automation and toward intelligent systems that know what to test, when, and why.” – Alex Berry, Applitools CEO

At Applitools, our mission is to help teams ship high-quality software with greater speed and confidence. From Visual AI to Applitools Autonomous, our Intelligent Testing Platform is designed to reduce test maintenance, streamline workflows, and help teams scale testing without scaling complexity.

Read the full feature article.

As we continue evolving what’s possible in software testing, we’re honored to be recognized by industry leaders who are shaping the future of technology.


No-Code End-to-End Testing with Applitools Autonomous https://app14743.cloudwayssites.com/blog/no-code-end-to-end-testing-with-applitools-autonomous/ Tue, 11 Feb 2025 15:02:23 +0000 https://app14743.cloudwayssites.com/?p=59719 Discover how Applitools Autonomous uses no-code tools, AI accuracy, and self-healing tests to enhance efficiency and scalability in end-to-end testing.

The post No-Code End-to-End Testing with Applitools Autonomous appeared first on AI-Powered End-to-End Testing | Applitools.

No-Code Autonomous End-to-End Tests

Testing modern software applications is becoming increasingly complex. With dynamic interfaces, personalized user experiences, and rapid deployment cycles, software engineering and QA leaders need efficient solutions that keep pace with development demands. While traditional test automation struggles to provide sufficient coverage and reliability, Applitools Autonomous emerges as a powerful, scalable solution that leverages no-code testing to achieve end-to-end testing workflows.

In this post, we’ll explore how integrating no-code and AI-driven end-to-end testing can solve common challenges, streamline processes, and empower QA teams to deliver high-quality applications faster.

The Problem with Traditional Testing

Many QA teams continue to rely on outdated methods that make it difficult to scale. Here are some key issues with conventional test automation approaches:

  • Limited Test Coverage
    Traditional testing tools often fail to provide complete coverage. Most teams automate less than 20% of test cases, leaving large portions of applications vulnerable to defects.
  • High Maintenance Overhead
    Scripts break with even minor UI updates, requiring significant manual effort to fix.
  • False Positives
    Conventional pixel-based validation frequently flags irrelevant discrepancies, leading to wasted time and effort.

These limitations slow development cycles and increase costs—making it difficult to deliver the seamless, trustworthy experiences users expect.

The Promise of No-Code End-to-End Testing

Applitools Autonomous offers a fresh approach by combining no-code capabilities with AI-powered automation. This allows teams of all skill levels to create and maintain end-to-end tests with minimal effort while achieving broader test coverage and better accuracy.

Key Benefits of No-Code End-to-End Testing

  • Simplified Test Creation: With no-code functionality, creating tests doesn’t require advanced coding skills. The Autonomous platform allows teams to write test flows in plain English or with an interactive browser, simplifying test authoring for non-technical users and engineers alike. A simple command like “Click the Login button” is all it takes to generate testing steps. 
  • Effortless Scalability: Scalable end-to-end testing is easy with the Autonomous platform. Whether you’re testing small applications or large enterprise systems, the tool adjusts to your needs while maintaining consistency across devices, browsers, and resolutions.
  • Self-Maintaining Tests: Tests created with Autonomous adapt automatically to UI changes, drastically minimizing maintenance. This ensures QA efforts remain efficient, even as your application evolves.
  • AI-Driven Visual Validation: Applitools’ industry-leading Visual AI offers powerful visual testing capabilities. It intelligently identifies functional and visual changes in an application’s UI, avoiding false positives and ensuring a seamless user experience.
  • Seamless Integration: Fully cloud-based and compatible with existing workflows, the Autonomous platform requires no local installations or complex configurations.
  • API Testing Made Simple: For teams that need to validate backend and API responses alongside front-end testing, the Autonomous platform integrates API testing seamlessly into end-to-end workflows. 

Functional Testing in Action

The Autonomous platform simplifies every aspect of end-to-end testing. Testers can automate complex processes in minutes. For example, to validate workflows like contact forms or checkout processes, users can author functional tests in plain English (or even another language)—no coding required. This feature ensures testing of every critical business flow with precision. 

Why No-Code End-to-End Testing is a Game Changer

With built-in support for API testing, data-driven tests, and advanced visual validation, the platform ensures QA teams have everything they need to run comprehensive, scalable end-to-end tests. And by eliminating barriers to automation, the no-code approach lets QA teams focus on what truly matters—improving application quality and delivering a superior customer experience.

Major enterprises using Applitools’ Autonomous platform have already seen measurable benefits:

  • Five-fold Increase in Test Coverage—extending seamless automation across devices and browsers.
  • 35% More Defects Detected Early—preventing costly errors from reaching production.
  • Hundreds of Hours Saved Per Release—thanks to minimized test maintenance and faster execution.

These results highlight the efficiency and precision that no-code end-to-end testing can bring to development cycles.

Unlock the Full Potential of End-to-End Testing

Applitools Autonomous is redefining how QA teams approach application testing. By combining no-code capabilities with advanced AI technologies, it offers a complete solution for comprehensive end-to-end testing across modern software applications.

Get a Closer Look

If you’re interested in seeing more, the full webinar is available on-demand and dives deeper into real-world use cases. Additionally, you can explore the platform firsthand with a free 14-day trial that allows you to test all the features on your own projects, giving your team the opportunity to experience the efficiency and accuracy it brings to end-to-end testing. Sign up for your free trial now.


Leveraging Applitools for Seamless Visual Testing in Playwright https://app14743.cloudwayssites.com/blog/leveraging-applitools-for-seamless-visual-testing-in-playwright/ Fri, 31 Jan 2025 21:29:23 +0000 https://app14743.cloudwayssites.com/?p=59583 As applications become more complex and UI consistency becomes critical, ensuring that the user interface appears as expected across multiple environments is key. Applitools Eyes, when integrated with the Playwright...

The post Leveraging Applitools for Seamless Visual Testing in Playwright appeared first on AI-Powered End-to-End Testing | Applitools.


As applications become more complex and UI consistency becomes critical, ensuring that the user interface appears as expected across multiple environments is key. Applitools Eyes, when integrated with the Playwright SDK, provides a powerful, efficient, and streamlined approach to visual testing in your Playwright tests. The combination of Playwright, a popular end-to-end testing framework, and Applitools Eyes, a visual AI-powered testing tool, makes visual validation easier, faster, and more scalable.

Applitools Eyes is a visual AI-powered testing tool that helps address these UI consistency challenges. It uses advanced AI-driven image comparison to detect visual differences. Unlike traditional visual testing tools that perform pixel-by-pixel image comparisons, Applitools Eyes uses AI algorithms to determine if the visual differences are actual bugs.

Applitools Playwright SDK introduces several improvements designed to streamline the process of setting up and running visual tests, making the experience more efficient and user-friendly.

In this blog, we will discuss when it is appropriate to use visual testing with Playwright and which cases will be suitable for using Applitools.

Overview of Visual Testing

Visual testing is a technique used to ensure the visual appearance of a website or application matches the provided design and layout. It compares the actual UI against a standard or reference UI image. This type of testing primarily focuses on detecting visual anomalies—alignment issues, font problems, color discrepancies, broken images, or structural shifts—that can disrupt the user experience.

How Visual Testing Differs from Functional Testing

Visual testing and functional testing are both crucial if the goal is to deliver a high-quality application; visual testing checks the visual appeal of the application, while functional testing verifies how the application functions.

Let’s see some major differences between visual and functional testing.

| Aspect | Visual Testing | Functional Testing |
| --- | --- | --- |
| Purpose | Ensures the layout and design of the UI look as expected. | Verifies that the application’s functions work as intended. |
| Scope | Focuses on the appearance of UI elements (position, dimensions, colors, fonts, etc.). | Focuses on the application’s behavior, verifying logic, workflows, and system responses. |
| Approach | Uses image comparison tools to detect visual discrepancies. | Uses test scripts or user simulations to validate functionality. |
| Techniques | Relies on image or screenshot comparison against baseline images. | Involves input/output verification, user interaction simulation, and data validation. |
| Tools | Visual AI tools like Applitools Eyes, or the built-in screenshot comparison in Playwright and Cypress. | Functional testing frameworks like Playwright and Cypress. |
| Issues Detected | Identifies layout errors, pixel misalignments, broken images, wrong colors, or responsive design bugs. | Identifies logical errors, broken workflows, malfunctioning features, or incorrect data processing. |
| Use Case | Suitable for detecting unintended UI changes during development. | Suitable for validating features and ensuring correct application logic. |

Before delving into how we can perform visual testing using Playwright and Applitools, let’s review some of the new improvements that Applitools has introduced.

What’s New In Applitools Playwright SDK?

The Applitools Playwright SDK is a library that integrates Applitools’ visual testing capabilities with the Playwright automation framework.

Let’s explore the improvements done by Applitools to streamline the process of setting up and running visual tests, making the experience more efficient and user-friendly. 

Here’s a detailed breakdown of what’s new in the updated SDK:

Test Fixtures:

  • Test fixtures centralize and reuse setup code across multiple tests, making the setup for visual tests more consistent and reducing redundant configuration.
  • This feature is particularly valuable when running tests on multiple pages or scenarios, as it simplifies the initialization of Applitools Eyes in each test.
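To make the fixture idea concrete, here is a minimal sketch in plain JavaScript. This is not the real Playwright or Applitools API—`defineFixture` and `runTest` are made-up names—it only illustrates how registering setup once lets every test receive a ready-made object instead of repeating initialization:

```javascript
// Toy fixture registry: setup code is registered once, then injected into
// each test that asks for it (mimics the pattern, not the real API).
const fixtures = {};

function defineFixture(name, setup) {
  fixtures[name] = setup; // register shared setup logic once
}

async function runTest(title, usedFixtures, body) {
  const injected = {};
  for (const name of usedFixtures) {
    injected[name] = await fixtures[name](); // build each fixture fresh per test
  }
  return body(injected);
}

// Every test now receives a ready-to-use "eyes"-like object.
defineFixture('eyes', async () => ({
  checked: [],
  check(label) { this.checked.push(label); },
}));

runTest('landing page', ['eyes'], async ({ eyes }) => {
  eyes.check('Landing Page');
  console.log(eyes.checked.length); // 1
});
```

The real SDK applies the same idea: the `eyes` object arrives pre-configured in the test callback, so no per-test initialization code is needed.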

CLI Onboarding:

  • The Command-Line Interface (CLI) simplifies the initial setup process by automating configuration steps, helping users quickly integrate Applitools Eyes with Playwright. 
  • This is especially helpful for new users, reducing the complexity of the setup and enabling faster onboarding into visual test creation.

Config Object Setup:

  • The configuration object setup automates the insertion of Applitools Eyes settings into the Playwright configuration file (e.g., playwright.config.js or playwright.config.ts). 
  • This feature eliminates manual setup, reducing the risk of errors, ensuring accurate configuration from the outset, and saving developers valuable time.

Custom HTML Reporter:

  • The custom HTML reporter enhances Playwright’s default test report by integrating visual test results from Applitools Eyes. 
  • This allows developers to view a direct comparison between the baseline image and the current test screenshot, helping identify visual regressions more effectively. 
  • By adding visual results to the standard Playwright report, it simplifies the review process, offering a comprehensive overview of the test outcomes.

Before we deep dive into how Applitools helps in visual testing, let’s first see how we can perform visual testing using Playwright.

Visual Testing Using Playwright

Playwright comes equipped with tools to make visual testing effortless, allowing developers to take and compare screenshots of web pages or specific web page elements. 

Here’s how visual testing works in Playwright:

  1. First Run: Playwright captures and saves a screenshot of the page or specific UI elements. This is known as the base image.
  2. Subsequent Runs: In the next run, Playwright takes a new screenshot and compares it against the base image.
    • If there are no differences, the test passes.
    • If differences exist, the test fails, flagging the areas where changes occurred.

Prerequisites

Ensure the following tools are installed:

Node.js: Download and install from Node.js.

Visual Studio Code (Optional): Recommended IDE for coding.

Install Playwright:

npm i @playwright/test

Write a Simple Visual Regression Test Using Playwright

Create a new JavaScript file in your test folder, e.g., demo.spec.js.

Use the page.screenshot() method to capture the screenshot and the expect.toMatchSnapshot() assertion from the @playwright/test module to compare images.

const { test, expect } = require('@playwright/test'); 
test('Visual Regression Test Example', async ({ page }) => { 
   // Navigate to the website 
   await page.goto('https://playwright.dev'); 
   // Capture a screenshot of the page 
   const screenshot = await page.screenshot(); 
   expect(screenshot).toMatchSnapshot(); 
}); 

When we execute the above code for the first time, the test case fails because no baseline exists yet: Playwright captures and saves a screenshot of the page (or specific UI elements), which becomes the base image.

When we execute the same test case again, it will pass.

Visual Regression Test with maxDiffPixels option 

There may be cases where minor differences between the images cause the test to fail. To handle this, we can pass the ‘maxDiffPixels’ option, which specifies the maximum number of pixels that may differ between two images for the comparison to still be considered successful.

const { test, expect } = require('@playwright/test');
test('Visual Regression Test Example', async ({ page }) => {
  // Navigate to the website
  await page.goto('https://playwright.dev');
  // Capture a screenshot of the page
  const screenshot = await page.screenshot();
  expect(screenshot).toMatchSnapshot({ maxDiffPixels: 100 });
}); 

Visual Regression Test with Threshold option

Passing a ‘threshold’ is another way to avoid failing the test case on minor differences. The threshold is a value between 0 and 1 that controls how different a pixel’s color may be before it is counted as changed, which is particularly useful for handling slight rendering variations.

const { test, expect } = require('@playwright/test');
test('Visual Regression Test Example', async ({ page }) => {
  // Navigate to the website
  await page.goto('https://playwright.dev');
  // Capture a screenshot of the page
  const screenshot = await page.screenshot();
  expect(screenshot).toMatchSnapshot({ threshold: 0.5 });
}); 
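To build intuition for how these two knobs interact, here is a toy comparator over arrays of grayscale pixel values (a simplifying assumption—Playwright's real comparison works on decoded image data with a perceptual color metric): `threshold` decides when a single pixel counts as different, and `maxDiffPixels` caps how many such pixels are tolerated overall.

```javascript
// Toy pixel comparator (grayscale values 0–255, an assumption for clarity).
function compare(base, current, { threshold = 0, maxDiffPixels = 0 } = {}) {
  let diffPixels = 0;
  for (let i = 0; i < base.length; i++) {
    const delta = Math.abs(base[i] - current[i]) / 255; // normalized 0..1
    if (delta > threshold) diffPixels++; // this pixel differs more than allowed
  }
  return diffPixels <= maxDiffPixels; // pass if within the pixel budget
}

const base = [10, 200, 30, 40];
console.log(compare(base, [10, 200, 30, 40]));                       // true: identical
console.log(compare(base, [12, 200, 30, 40], { threshold: 0.05 }));  // true: tiny delta absorbed
console.log(compare(base, [90, 200, 30, 40], { maxDiffPixels: 1 })); // true: one big diff allowed
console.log(compare(base, [90, 200, 110, 40]));                      // false: two diffs, zero budget
```

In short, `threshold` absorbs small per-pixel color drift, while `maxDiffPixels` absorbs a bounded number of outright different pixels.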

So far, we have seen how to automate visual UI testing using Playwright. In the next section, you will see how to use Applitools Eyes and the Playwright SDK together to automate visual testing.

Visual Testing using Applitools Eyes and Playwright SDK

Visual testing is a crucial aspect of modern software quality assurance. It ensures that a web application’s UI renders correctly across different devices, screen sizes, and browsers. Tools like Applitools Eyes and frameworks like Playwright SDK simplify this process by providing robust solutions for automated visual regression testing.

Below are the steps to set up Applitools Eyes and Playwright SDK.

Install Playwright

npm init playwright@latest

Install Applitools Eyes SDK 

npm install -D @applitools/eyes-playwright

Add the Applitools fixture import to your .spec.js file

To configure your project for Applitools Eyes, add the following line:

import { test } from '@applitools/eyes-playwright/fixture';

In the next section, you will see examples of how to use the SDK with Playwright tests.

Before going into the various match levels in detail, let’s look at one basic example to understand how visual testing works in Applitools and the method used to capture and validate a checkpoint in your application’s UI.

Below is code that integrates Applitools Eyes with Playwright to perform a visual test.

import { test } from '@applitools/eyes-playwright/fixture';

test('My first visual test Using Applitools with matchLevel: Dynamic', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  // Visual check
  await eyes.check('Landing Page', {
    fully: true,
    matchLevel: 'Dynamic',
  });
});

In the above code we mainly use two things: the method eyes.check() and two options, { fully: true } and { matchLevel: 'Dynamic' }.

eyes.check(): Performs a visual snapshot of the page or a specific element and compares it with the baseline image stored in Applitools.

Options:

  • fully: true:
    • Captures the entire page, including scrollable sections.
    • Ensures that all content on the page is included in the visual test.
  • matchLevel: ‘Dynamic’:
    • A matching algorithm that focuses on the content while ignoring dynamic changes, such as text or layout differences.
    • Useful for pages with frequently changing content, like user-generated text or dynamic data.

Below are the options that we can pass in our test case. These options provide flexibility and precision, enabling robust visual testing tailored to different scenarios and challenges.

This table summarizes each configuration, making it easier to understand their purposes.

| Option | Purpose | Example Use Case |
| --- | --- | --- |
| matchLevel: 'Dynamic' | Ignores minor layout or content changes (e.g., dynamic text, animations). | Testing pages with frequent dynamic content (e.g., dates, real-time updates) without raising false alarms. |
| matchLevel: 'Strict' | Detects even small pixel or layout changes for precise visual validation. | Validating critical UI components or designs where every pixel matters (e.g., brand logos, product pages). |
| region: component | Focuses the visual check on a specific component or region of the page. | Testing a new or modified UI component in isolation (e.g., buttons, headers, or form fields). |
| fully: true | Captures and validates the entire page, including content outside the viewport. | Ensuring the layout and content of a long webpage or scrolling section is consistent. |
| ignoreRegions: [dynamicContent] | Excludes specific regions with dynamic content from the validation. | Ignoring areas like ads, live feed sections, or widgets with fluctuating content (e.g., real-time graphs). |

Examples with different matchLevel

MatchLevel determines how the captured visual output is compared with the baseline image. Let’s see examples with the different options.

1. Visual Testing with matchLevel: Strict

This ensures pixel-perfect comparison between the baseline and current screenshot, highlighting any visual difference, no matter how small.

test('My first visual test Using Applitools with matchLevel: Strict', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await eyes.check('Landing Page', {
    fully: true,
    matchLevel: 'Strict',
  });
});
  • Purpose: Validates the entire page with Strict match level.
  • Strict Match Level: Identifies even small differences, such as minor pixel or layout changes, ensuring high precision in visual validation.

2. Visual Testing with matchLevel: Dynamic

Focuses on content consistency by ignoring minor layout differences, such as text movement, while ensuring key visual elements remain unchanged.

In the example below, you can see that even if the text color changes after clicking the ‘?diff2’ link, the test case still passes because we have set matchLevel: 'Dynamic'. This ensures the visual test focuses on the content and structure of the text rather than superficial changes like color.

test('My first visual test Using Applitools with matchLevel: Dynamic', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await page.getByRole('link', { name: '?diff2' }).click();
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Dynamic',
  });
});
  • Purpose: Validates the entire page with Dynamic match level.
  • Dynamic Match Level: Ignores minor content/layout differences (e.g., text changes, animations) and focuses on high-level structural comparisons.
  • fully: true: Captures a screenshot of the full page, not just the visible viewport.

3. Visual Testing for a Specific Region/Component

Limits visual comparisons to a particular area or component of the application.

test('My first visual test Using Applitools with particular region/Component', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  const component = page.locator('.fancy.title.primary');
  await eyes.check('My Component', {
    region: component,
  });
});
  • Purpose: Tests only a specific component or region on the page.
  • region: Focuses the validation on the area defined by the component locator (.fancy.title.primary), instead of the entire page.

4. Visual Testing with Ignored Regions

Excludes specific areas from comparison, preventing false positives caused by expected dynamic changes in those regions.

test('Visual test Using Applitools with ignoring the region', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  const dynamicContent = page.locator('.fancy.title.primary');
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Strict',
    ignoreRegions: [dynamicContent],
  });
});
  • Purpose: Tests the page while ignoring specific dynamic regions.
  • ignoreRegions: Excludes certain areas from the visual validation (e.g., areas with dynamic content like dates, ads, or animations).
  • fully: true: Captures the entire page for validation.

About dynamic match level

Dynamic match level is a feature in Applitools Eyes that verifies text by matching it against predefined or custom patterns, ensuring the content adheres to a specific format. It focuses on format validation rather than content changes, making it ideal for dynamic text like dates, emails, or numbers.

Type of dynamic match level

Dynamic match levels allow for flexible verification of text by matching it against predefined or custom patterns. The default types include:

  • TextField: Text inside input fields.
  • Number: Numeric values like ZIP codes or phone numbers.
  • Date: Validates text as a date in proper format.
  • Link: Hyperlinks or URLs.
  • Email: Checks for a valid email address format.
  • Currency: Recognizes monetary values.

These types ensure accurate validation by focusing on the format rather than the content changes, such as dynamic updates to dates or numbers.
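Conceptually, format-based validation boils down to pattern matching. The sketch below uses assumed regular expressions (not Applitools' internal patterns) to show how a value can change between runs while still passing a format check:

```javascript
// Assumed regex patterns standing in for dynamic match level types.
const patterns = {
  Date: /^\d{4}-\d{2}-\d{2}$/,           // e.g. 2025-01-31
  Email: /^[^@\s]+@[^@\s]+\.[^@\s]+$/,   // rough email shape
  Number: /^\d+$/,                       // digits only
  ZipCode: /^\d{5}(-\d{4})?$/,           // assumed custom Zip Code pattern
};

// Text passes if it matches the expected format, regardless of its value.
function matchesFormat(type, text) {
  return patterns[type].test(text);
}

console.log(matchesFormat('Date', '2025-01-31'));    // true
console.log(matchesFormat('Date', '2025-02-11'));    // true: value changed, format stable
console.log(matchesFormat('ZipCode', '10001'));      // true
console.log(matchesFormat('Email', 'not-an-email')); // false
```

This is why dynamic match levels avoid false positives on content such as timestamps or order numbers: the check asks "is this still a date?" rather than "is this the same date?"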

Custom dynamic pattern

To create a custom dynamic pattern, follow these steps in the Applitools dashboard:

  1. Open the Settings Window: In the Page Navigator, select Apps & Tests. In the list of applications on the left, hover over an application and click > Settings.

  2. Add a Custom Type: Click Add custom type.

  3. Define the Custom Pattern: Enter a name and a Regex pattern.

  4. Save the Custom Pattern: Click Add to save the entered pattern. In the screenshot below, you can see a custom pattern set for Zip Code; once the custom pattern is created, its name can be used in the code to reference it.

  5. Apply the Settings: Once all required patterns have been selected, click Apply settings.

Applitools Dashboard: Execute visual testing using the Applitools Playwright SDK

Applitools’ Dashboard is a powerful interface designed to manage and analyze test results efficiently, especially for visual testing and monitoring user interfaces.

Below are the steps that we normally have in the Applitools Dashboard when we execute any visual test case.

Precondition: Run the below command to export your API key.

export APPLITOOLS_API_KEY='your_api_key_here'

Once the key is exported, the next step is to execute the test cases using the below command.

npx playwright test --ui tests/visual.spec.js


In the below screenshot in the Applitools Dashboard, you can see all the above four test cases are executed successfully.

Applitools Playwright SDK with Example in Detail

Let’s take the example of a particular component on the page. In the screenshot below, the check focuses on a specific component or region of the page: only the ‘Hello World’ part is tested. If there is any change in this component, the test case will fail.

In the below example we have updated the component with the text ‘Happy World’ 

test('Visual test Using Applitools with Updating Component UI', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await page.getByRole('link', { name: '?diff2' }).click();
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Strict',
  });
});

When we execute the above code, the test fails because the UI of the component changed after clicking the ‘?diff2’ link.

To fix this, we mark the difference as resolved in the Applitools Dashboard. Once we accept the change, the new screenshot becomes the base image.

Now when we execute the test case again, it gets executed successfully with the updated UI.

When Playwright’s Visual Testing Features Are Useful 

Playwright’s built-in visual testing features are well-suited for scenarios where:

Simple Visual Comparisons:

  • Comparing entire pages or specific UI components with baseline images for detecting pixel-level differences.
  • Example: Ensuring the homepage layout hasn’t shifted after a CSS update.

Static UIs:

  • Testing applications with minimal dynamic or animated content, as these are less prone to false positives caused by animations or transient elements.

Fast Feedback:

  • When quick pixel-by-pixel comparisons in CI pipelines are sufficient without advanced AI-powered analysis.
  • Example: Smoke testing in a fast-moving development cycle.

Budget-Friendly:

  • Playwright’s native screenshot and comparison capabilities don’t require additional tools or subscriptions.

When Playwright’s Visual Testing May Fall Short

Dynamic Content:

  • UIs with dynamic or frequently changing elements (e.g., time, randomized content, or animations) can cause false positives due to exact pixel mismatches.
  • Example: A dashboard with live-updating graphs.

Complex Visual Changes:

  • Playwright’s threshold-based comparison may miss subtle or semantic changes that don’t exceed the defined pixel difference threshold.
  • Example: Slightly altered typography or color changes.

Ignored Regions:

  • While Playwright allows element-specific testing, ignoring specific dynamic regions within a larger page requires custom logic.
  • Example: Excluding advertisements or live widgets from visual comparison.

How Applitools Handles Situations Better

Applitools offers superior visual testing capabilities compared to Playwright, especially when handling dynamic content, ensuring pixel-perfect validation, focusing on specific components, and reducing false positives. 

Dynamic Content Handling:

  • Match Levels: Applitools’ AI-powered matchLevel options (e.g., Strict, Layout, Content) allow intelligent comparison by focusing on structure and ignoring irrelevant pixel differences.
  • Example: Testing a real-time dashboard with dynamic data but a consistent layout.

Ignored Regions:

  • Applitools allows you to define ignored regions to exclude specific parts of a page from comparison (e.g., ads or timestamps).
  • Example: Excluding a “current time” widget from visual testing.
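
Combining the two ideas above, a check-settings object for the `eyes` fixture used earlier in this post might look like the following. This is a hedged sketch: the selectors (`#live-clock`, `.ad-slot`) are hypothetical placeholders, and you should confirm the exact option names against the Applitools Playwright SDK documentation for your SDK version.

```javascript
// Hedged sketch of check settings combining a match level with ignored
// regions. Selector names below are hypothetical placeholders.
const dashboardCheckSettings = {
  fully: true,                 // capture the full page, as in the earlier example
  matchLevel: 'Layout',        // compare structure; tolerate changing data/text
  ignoreRegions: [             // exclude dynamic widgets from comparison
    '#live-clock',
    '.ad-slot',
  ],
};

// Inside a test, it would be passed as:
//   await eyes.check('Dashboard', dashboardCheckSettings);
```

Using `Layout` for data-heavy regions and ignore regions for genuinely irrelevant widgets keeps the rest of the page under strict comparison, rather than loosening the whole check.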

Advanced Visual Validation:

  • Features like dynamic regions and AI-based semantic analysis help detect visual regressions that go beyond pixel-by-pixel differences.
  • Example: Identifying a button misalignment that might not be detected by Playwright’s threshold settings.

Visual Testing Across Components:

  • Allows you to target specific regions or components for testing without manually cropping or capturing screenshots.
  • Example: Testing the styling of a navigation bar or modal dialog.

Conclusion

Playwright’s built-in visual testing features are suitable for simple cases where you need pixel-precise image comparisons of static UIs. However, they can struggle with more complicated designs, whether those involve minor dynamic changes within the site or more intricate design requirements.

Applitools Eyes, which employs AI-powered comparison, is better suited to overcoming these difficulties. It handles dynamic content well, identifies subtle visual regressions, and adds functionality such as ignored regions and cross-browser testing. For teams with demanding testing requirements, where Playwright’s built-in visual testing capabilities may fall short for complex and large-scale applications, Applitools serves as a valuable complement, ensuring and elevating the visual quality of dynamic applications and making it a strong choice for robust testing and quality assurance.

About the Author

Kailash Pathak (Applitools Ambassador | Cypress Ambassador)

Senior QA Lead Manager with over 15 years of experience in QA engineering and automation. Kailash holds certifications including PMI-ACP®, ITIL®, PRINCE2 Practitioner®, ISTQB, and AWS (CFL).

As an active speaker and workshop conductor, Kailash shares his expertise through blogs on platforms like Medium, Dzone, Talent500, The Test Tribe, and his personal site https://qaautomationlabs.com/


Quick Answers

How do I add Applitools Eyes to an existing Playwright project?

Follow the Playwright Quick Start to install the SDK, run CLI setup, and execute your first visual test (https://app14743.cloudwayssites.com/tutorials/playwright).

What’s the difference between Playwright functional checks and visual testing?

Functional asserts confirm behavior (clicks, responses, state), while visual testing validates the rendered UI—catching regressions in layout, fonts, colors, and dynamic content that functional checks overlook.

How do I run cross-browser visual tests quickly in CI?

Use the Ultrafast Grid to fan out visual checkpoints across real browsers/devices in parallel without maintaining an in-house grid (https://app14743.cloudwayssites.com/ultrafast-grid).

How do I reduce flakiness in Playwright tests?

Favor stable visual checkpoints over brittle locator assertions, use consistent viewports and network conditions, and rely on Visual AI to ignore non-material diffs (https://app14743.cloudwayssites.com/platform/validate/visual-ai/).

The post Leveraging Applitools for Seamless Visual Testing in Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Introducing Autonomous 2.1: Speed Up Testing with Modular Test Design https://app14743.cloudwayssites.com/blog/autonomous-2-1-release/ Tue, 07 Jan 2025 15:41:23 +0000 https://app14743.cloudwayssites.com/?p=59297 Testing workflows just got an upgrade! Autonomous 2.1 simplifies end-to-end testing by offering faster test creation and maintenance while expanding mobile support. This release focuses on improving the efficiency and...

The post Introducing Autonomous 2.1: Speed Up Testing with Modular Test Design appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Autonomous reusable test flow screenshot

Testing workflows just got an upgrade! Autonomous 2.1 simplifies end-to-end testing by offering faster test creation and maintenance while expanding mobile support. This release focuses on improving the efficiency and flexibility of your testing processes with features designed to simplify workflows, enhance modularity, and provide greater control.

Let’s explore the highlights of this release and how they can benefit your end-to-end testing. 

Key Features in Autonomous 2.1

Modular Test Design

Create a test once and reuse it across multiple workflows, eliminating the need to duplicate steps. This not only saves time but also ensures consistency and reduces the risk of errors. By enhancing reusability, this feature significantly accelerates test creation and simplifies maintenance, particularly for workflows shared across teams.

Use case example: Faster test authoring in Banking

For financial institutions, reusing a shared “User Login” test across workflows like “Fund Transfer,” “Loan Application,” and “Account Overview” simplifies test maintenance and ensures consistency. Autonomous 2.1 empowers teams to avoid recreating login steps for each flow, saving time and enhancing reliability.

Simplify Test Maintenance

Easily modularize your testing process by extracting repeated steps into standalone tests. This feature simplifies maintenance and improves consistency by allowing your team to maintain one test and apply its updates wherever the extracted flow is reused. With fewer errors and a more streamlined test design, your team can focus on delivering quality faster.

Learn how to build no-code, autonomous end-to-end tests and explore some of these new features in our upcoming webinar, Building No-Code Autonomous End-to-End Tests.

Record Assertions

Record assertions directly within your tests to ensure critical content is displayed accurately, without requiring manual scripting. This capability is especially valuable for localization efforts, enabling teams to validate translated content on multilingual websites. By reducing setup complexity, testers can achieve faster and more accurate validations.

Use case example: Maintain compliance with localization

On multilingual websites, recording assertions ensures translated content (e.g., terms and conditions, error messages) is displayed correctly. This is crucial for maintaining legal compliance and providing a seamless user experience in different languages.

Make Responsive Testing Easier

Ensure cross-device compatibility by configuring display viewport sizes—such as mobile, tablet, or even a custom size—directly in your custom flow tests. This ensures accurate validations across devices, making it ideal for e-commerce businesses and other industries prioritizing multi-device compatibility. Streamlining these configurations reduces setup time and ensures your users have a seamless experience across platforms and devices.

Pro tip: While Applitools Ultrafast Grid lets you run the same test across multiple browsers and devices, we recommend creating separate tests for specific screen sizes when unique screen elements, like a hamburger menu, appear only on certain viewports.

Flexibility in Test Configuration

We’ve decoupled the application and plan from individual tests, offering greater flexibility and a more intuitive workflow for configuring tests independently. You can now run tests ad-hoc—even trigger test execution via the REST API—and configure run properties like parameters, browsers, devices, and concurrency limits. This change not only enhances the user experience but also lays the groundwork for advanced capabilities in future releases, ensuring a scalable and adaptable testing environment.

And There’s More…

Autonomous 2.1 also lets you navigate more intuitively with an updated interface that reflects application hierarchy, and lets you copy tests across applications while preserving reusable flows. For added security, sensitive test data such as passwords is now encrypted and masked, and TOTP support for 2FA enables secure workflows with Time-Based One-Time Passwords.

Try Autonomous 2.1 for Free With Your Team

With Autonomous 2.1, you’ll experience faster test creation and simplified maintenance, particularly for workflows reused across multiple teams or applications. This release represents a significant step toward more efficient, scalable, and user-friendly testing processes.

Try out Autonomous 2.1 for free with your team and see how it can revolutionize your testing efforts.

The post Introducing Autonomous 2.1: Speed Up Testing with Modular Test Design appeared first on AI-Powered End-to-End Testing | Applitools.

]]>