Visual Testing Tools Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/tag/visual-testing-tools/
Applitools delivers full end-to-end test automation with AI infused at every step.

Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies
https://app14743.cloudwayssites.com/blog/playwright-visual-testing-strategy/
Thu, 19 Mar 2026 20:19:13 +0000
Visual testing in Playwright often forces teams to choose between strict failures, snapshot maintenance, and CI pipeline complexity. This article explores how a single configuration flag introduces three different strategies for handling visual differences and improving the Playwright developer experience.

The post Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies appeared first on AI-Powered End-to-End Testing | Applitools.


Hello everyone! I’m Noam, an SDK developer on the Applitools JS-SDKs team. While my day-to-day focus is on core engineering, I work closely with our field teams and occasionally join technical deep-dive sessions with customers.

In these conversations, we frequently encounter questions about performance and the engineering philosophy behind our integration. Specifically, there is often curiosity about how to make visual testing feel more “Playwright-native” and natural to developers.

In this post, I’ll share the design logic behind these architectural choices so you can apply these patterns in your own CI pipelines in a way that fits your organization’s needs.

Adding unresolved to Playwright

Integrating visual regression testing into Playwright requires combining two different status models: Playwright’s binary Pass/Fail and the visual testing concept of unresolved.

In visual testing, instead of only two states (passed and failed), there is a third: unresolved. This state indicates a difference was detected, but a human decision is required to determine whether it is a bug or a valid change that should be approved as the new baseline.

​Playwright doesn’t support this third state out of the box. Visual test maintenance using Playwright’s native toHaveScreenshot API forces the developer into a cumbersome cycle requiring three separate test executions:

  1. First, the developer runs the test to see the failure.
  2. Then, they run again with the --update-snapshots flag to create new baseline images.
  3. Finally, most developers run a third time to validate that everything passes against the updated baseline as expected—which isn’t always the case, because Playwright’s native comparison method (pixelmatch) tends to be very flaky, unlike Visual AI.
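For reference, that cycle maps onto three invocations of Playwright’s own CLI (the --update-snapshots flag is standard Playwright; your exact test command may differ):

```shell
# 1. Run and watch the visual assertion fail
npx playwright test

# 2. Re-run with Playwright's flag to regenerate baseline images
npx playwright test --update-snapshots

# 3. Run once more to confirm the suite passes against the new baselines
npx playwright test
```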

​After this local cycle, the developer must commit the new baseline images to the repository—bloating the git history—and wait for a new CI execution to provide final feedback. For dev-centered organizations that focus on feedback loop velocity, this workflow is… suboptimal. Personally, I believe that’s one of the reasons visual testing isn’t as popular as it should be among Playwright users.

​When we engineered the Applitools fixture, one of our goals was to support this Unresolved state natively, without disrupting Playwright’s core lifecycle—specifically its Worker Processes and Retry mechanisms.

The solution rests on two key engineering decisions: moving rendering to the background (async architecture) and giving developers control over the exit signal and performance tradeoffs (failTestsOnDiff).

We don’t block test execution when Applitools is rendering

The core value of visual testing lies in two capabilities: AI-based comparison that eliminates false positives, and multi-platform rendering.

Architecturally, these processes are cloud-native services.

  • AI-as-a-Service: Just like massive LLMs or other generative models, the Visual AI engine runs on specialized cloud infrastructure optimized for heavy inference. It cannot simply be “installed” on a lightweight CI agent.
  • Platform Constraints: Authentic cross-platform rendering (e.g., iOS Safari on a Linux CI agent) is physically impossible on a single local machine.

Since these operations inherently occur remotely, performing them synchronously would force the local test runner to idle while waiting for network round-trips and cloud processing.

To solve this, we designed the fixture around an asynchronous architecture:

  • Instant Capture: When eyes.check() is called, we synchronously capture the DOM and CSS resources (instead of a rasterized image). This operation is extremely fast.
  • Immediate Release: We purposefully use soft assertions by design. We release the Playwright test thread immediately so the functional logic can proceed to the next step or test case without blocking.
  • Background Heavy Lifting: The heavy work—uploading assets, rendering across different browsers and operating systems, and performing the AI comparison in the Applitools cloud—starts immediately in the background, managed by the Worker process.

The “Draining Queue” Effect

​This architecture explains why the Playwright Worker sometimes remains active after the final test completes.

The background tasks are limited only by your account’s concurrency settings and the screenshot size. For example, when rendering a 10,000 px page on a small mobile device, the rendering infrastructure might need time for scrolling and stitching. If your functional tests execute faster than the background workers can process the queue (rendering & comparing), the Worker process stays alive at the end solely to “drain the queue” and ensure data integrity.

While this design ensures your test logic runs at maximum speed by offloading the processing cost to the background, it can cause friction and frustration when developers see workers “hanging” after tests are completed. When facing such issues, our support team is here to advise and assist with various solutions—we can investigate execution logs and, if needed, even make custom suggestions to tailor Eyes-Playwright to your needs.

Solving the Matrix Problem

​Standard Playwright documentation recommends defining multiple projects in playwright.config.ts to cover different browsers (Chromium, Firefox, WebKit) and various viewport sizes.

​While this ensures coverage, it introduces a linear performance penalty (O(N)). To test three browsers across two viewports, your CI must execute the functional logic (clicks, waits, navigation) six times. It’s 6x more load on the CI machine and the testing environment.
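The conventional matrix looks like this in playwright.config.ts (standard Playwright projects; the three browsers shown are just the stock engines):

```typescript
// playwright.config.ts — the matrix approach: every project re-executes
// the full functional logic, so cost grows linearly with the matrix.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
    // Add a second viewport per browser and you are at six full runs.
  ],
});
```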

​We recommend shifting this workload to the Ultrafast Grid (UFG).

​In this mode, you execute the Playwright test once, typically on Chromium. We upload the DOM state, and our cloud infrastructure renders that state across all configured browsers and viewports in parallel.

This transforms an O(N) execution problem into an O(1) execution problem, significantly shortening the feedback loop.

The Strategy: failTestsOnDiff

​Since the actual comparison happens asynchronously and potentially completes after the test logic finishes, we need a mechanism to map the visual result back to the Playwright status.

​This is controlled by the failTestsOnDiff flag. It’s not just a boolean; it’s a strategic choice for your CI pipeline.

Strategy A (false)

  • The Logic: This is the configuration our own Front-End team uses. We believe that a Visual Change ≠ a Test Failure.
  • Behavior: The Playwright test passes (Green). The unresolved status is reported externally via our SCM integration (GitHub/GitLab).
  • Why: Retrying a visual test is computationally wasteful—the pixels won’t change on the second run. By keeping the test “Green,” we avoid triggering Playwright’s retry mechanism. The decision is moved to the Pull Request, where it belongs.

Read more about SCM integration or hop directly to our GitHub, Bitbucket, GitLab, or Azure DevOps articles.

Strategy B (afterAll)

  • The Logic: You need a “Red” pipeline to block deployment, but you want to avoid the noise of retries and gain a significant performance improvement.
  • Behavior: Individual tests pass, but the Worker Process exits with a failure code if any diffs were found in the suite.
  • Why: This provides a hard gatekeeper for the build status. It allows the Eyes rendering farms to continue processing visual test results in the background without blocking the execution thread, allowing the worker to move on to handle more tests efficiently.
Strategy C (afterEach)

  • The Logic: Immediate feedback loop.
  • Behavior: Fails the test immediately in the afterEach hook.
  • Why: Best for local development where you want to see the failure immediately in the console. It is also useful with Playwright’s trace: 'retain-on-failure' setting, as it ensures traces are preserved for unresolved visual assertions. Not recommended for CI due to the retry loops described above.
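A sketch of how the flag might sit in your configuration — the failTestsOnDiff values come from this article, but the exact option path depends on your Eyes-Playwright version, so treat the surrounding shape as an assumption and check the docs:

```typescript
// playwright.config.ts (sketch only — the eyesConfig path shown here is
// illustrative, not the authoritative API surface).
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    eyesConfig: {
      // 'afterEach' → fail fast locally
      // 'afterAll'  → worker exits red, for CI gating
      // false       → stay green; review diffs in the PR via SCM integration
      failTestsOnDiff: false,
    },
  },
});
```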

TL;DR – When to use each setting

Mode: afterEach
  • Performance: Less performant. The Playwright worker waits after each test for all renders to complete and for the Visual AI to compare the results.
  • Observability: Best. The Applitools reporter shows all statuses correctly; other reporters will consider unresolved tests as failing.
  • Best fit: Local testing.

Mode: afterAll
  • Performance: Best. The Playwright workers collect the resources and manage the rendering and Visual AI comparisons in the background.
  • Observability: Good. The Applitools reporter shows all statuses correctly; other reporters will consider unresolved tests as passing. You get a failure of the worker process, which other reporters won’t link to a specific test case.
  • Best fit: Local testing AND CI environments without SCM integration.

Mode: false
  • Performance: Best. Similar to afterAll.
  • Observability: Great in pull requests (if SCM integration is enabled). The Applitools reporter reflects the tests perfectly; other reporters will consider unresolved tests as passing.
  • Best fit: CI environments with SCM integration.

Closing the Visibility Gap: The Custom Reporter

​If you adopt Strategy A (false) or Strategy B (afterAll), you introduce a secondary challenge: Visibility.

Since Playwright technically marks these tests as Passed to avoid retries, the standard Playwright HTML Report will show them as “Green,” potentially masking unresolved visual differences that require attention.

​To bridge this gap without forcing developers to switch context, we developed a Custom Applitools Reporter.

​This reporter extends the standard Playwright HTML report. It injects the actual visual status (Passed, Failed, or unresolved) directly into the test results view.

  • True Status: You see which tests have visual diffs, even if the Playwright exit code was successful.
  • Direct Links: It provides a direct link from the test report to the specific batch results in the Applitools Dashboard.
  • Context: It enriches the report with UFG render status and batch information.

​This ensures you get the best of both worlds: The optimization of a “Green” CI run (no retries), with the transparency of a report that highlights exactly where manual review is needed.

Summary

​The Applitools Playwright fixture is designed to be non-blocking and scalable. By leveraging asynchronous architecture and Applitools UltraFast Grid, we offload the heavy lifting from your CI. By correctly configuring failTestsOnDiff, you ensure that your pipeline reflects your team’s engineering culture—whether that’s strict gating or modern, PR-based visual review.

Quick Answers

What is visual regression testing in Playwright?

Visual regression testing in Playwright verifies that changes to an application’s UI do not introduce unintended visual differences. Playwright can perform basic visual regression checks using screenshot comparisons like toHaveScreenshot, while dedicated visual testing tools (such as Applitools Eyes) extend this by detecting meaningful UI changes, managing baselines, and enabling review workflows for approving visual updates.

What is the best way to do visual testing in Playwright?

Playwright supports basic visual testing through screenshot comparisons such as toHaveScreenshot, but this approach can become difficult to maintain at scale. Dedicated visual testing tools, like Applitools Eyes, extend Playwright by adding Visual AI comparison, cross-browser rendering, and review workflows that allow teams to detect visual regressions without maintaining large sets of screenshot baselines.

How does Playwright screenshot testing (toHaveScreenshot) compare to visual regression testing tools?

Playwright’s toHaveScreenshot performs pixel-by-pixel image comparisons against stored baseline images. While this works for simple cases, it often requires updating and maintaining many snapshots. Visual regression testing tools like Applitools Eyes use Visual AI to detect meaningful UI changes while ignoring insignificant rendering differences, provide review workflows to approve or reject visual changes, and allow custom match levels for different regions of the screen.

Can Playwright run visual tests across multiple browsers and devices?

Yes, but with a limited scope. Natively, Playwright supports three browser engines (Chromium, Firefox, and WebKit), but it does not execute tests across different real operating systems or mobile devices. This lack of OS-level rendering limits coverage and creates a risk of missing platform-specific visual bugs. For example, see how a frontend team caught a visual bug specific to Mac Retina screens that a standard engine check would miss.

How can you run cross-browser visual tests in Playwright without running tests multiple times?

Normally, cross-browser testing requires executing the same tests separately for each browser configuration. Tools like Applitools Ultrafast Grid allow tests to run once while visual rendering is executed across multiple browsers and viewport combinations in parallel. This removes the need to multiply test execution across the full browser matrix.

Why is cross-browser testing in Playwright so slow?

Natively, cross-browser testing introduces a significant performance penalty. Playwright must execute the entire test logic (clicks, waits, network requests) separately for every browser and viewport configuration. Modern visual testing tools (e.g., Applitools Ultrafast Grid) eliminate this overhead by executing the test logic just once locally, performing the cross-browser rendering and visual comparison in parallel in the cloud.

Test Your Components Where You Build with the Applitools Storybook Addon
https://app14743.cloudwayssites.com/blog/test-your-components-where-you-build-with-the-applitools-storybook-addon/
Fri, 17 Oct 2025 11:07:00 +0000
Test Storybook components with Visual AI inside Storybook. Catch UI bugs early, bulk-maintain baselines, and scale cross-browser coverage.

The post Test Your Components Where You Build with the Applitools Storybook Addon appeared first on AI-Powered End-to-End Testing | Applitools.


Local dev is where most UI changes happen (and where regressions sneak in). States drift, styles diverge, and tiny tweaks pile up until something breaks in CI. The Applitools Storybook Addon brings AI-powered visual testing straight into Storybook so you can catch issues as you code, approve the good changes quickly, and keep your CI/CD pipelines green.

AI-Powered Visual Testing Inside Storybook

Open your Storybook and run visual tests from an Applitools Eyes tab – no context switching. Results are grouped by component to mirror your Storybook structure, and a reporter widget highlights what needs attention first so you can review diffs in minutes, not hours. Learn more on our Storybook Component Testing with Applitools page.

  • Catch bugs where you build. Validate component states during local development and avoid surprises later.
  • Review faster with Visual AI. See only meaningful, human-perceptible UI changes without pixel-to-pixel noise. Tune sensitivity with AI match levels when you need to.
  • Scale coverage painlessly. Run once; render everywhere with Ultrafast Grid across browsers, devices, and viewports in parallel.

How to Use the Applitools Eyes Storybook Addon

Getting started takes just a couple of minutes.

  1. Install the SDK & Addon
    Add Applitools Eyes to your project and enable the Storybook addon (React, Vue, Angular supported). See the installation instructions in the Eyes Storybook Addon docs.
  2. Run Applitools Visual Tests in Storybook
    Open Storybook, switch to the Applitools Eyes tab, and trigger tests for a single story or an entire component. Results stream back in real time with automatic grouping by component.
  3. Review & Maintain
    Use Visual AI diffs, side-by-side views, and auto-maintenance to approve or reject changes in bulk. Prioritized sorting surfaces what needs attention first.
  4. Scale Across Browsers/Devices
    Turn on Ultrafast Grid to parallelize renders across Chrome, Firefox, Safari, Edge, and mobile sizes – without extra local setup.

Applitools Storybook Addon Use Case Playbook

Below are the three most common ways teams use the Eyes Storybook Addon – each with a quick, practical flow pulled right from the product.

Use Case: Guard Your Design System

As you refactor tokens or update themes, run visual tests on every component state. Spot unintended changes across the library instantly.

How to do it in Storybook

  1. Start Storybook and open your design‑system component in the Applitools Eyes tab.
  2. Click Run from the tab (or use Run in the left sidebar test module). The addon tests the stories and streams results inline for every browser/device in your applitools.config.js.
  3. In the sidebar, filter by Unresolved to zero in on changes across the library (Green = passed, Orange = unresolved, Red = failed).
  4. Open a story’s result and use Side‑by‑Side or the Slider to spot subtle spacing/typography diffs.
  5. Approve legit theme updates with Thumbs Up (or use ⋯ → Review actions to approve the whole story/batch). Reject regressions with Thumbs Down and fix.

Pro tip: Use the tab ⋯ → Configuration to confirm you’re validating the right browser matrix and server URL. See more options in the docs.
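The browser matrix the tab validates lives in applitools.config.js. A minimal example follows — the field names reflect the Eyes-Storybook config format as commonly documented, and the values are placeholders, so verify against the addon docs for your version:

```typescript
// applitools.config.js — example browser/device matrix for the addon.
module.exports = {
  concurrency: 5, // parallel renders, bounded by your account's concurrency
  browser: [
    { width: 1280, height: 800, name: "chrome" },
    { width: 1280, height: 800, name: "firefox" },
    // Mobile targets can be expressed via device descriptors, e.g.:
    { iosDeviceInfo: { deviceName: "iPhone 14" } },
  ],
};
```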

Use Case: Fix Fast During Local Dev

Working on a feature branch? Validate your component in Storybook before you commit.

How to do it in Storybook

  1. Open your feature’s stories, then hit Run in the Applitools tab for the component you’re touching.
  2. Watch statuses update inline; click the status buttons to filter to Unresolved so you only look at what changed.
  3. Click into any row to open compare tools: Diff Image, Actual Image, Expected Image, Side‑by‑Side, or Slider.
  4. If the change is intended, Thumbs Up to approve; otherwise Thumbs Down to flag and keep iterating.
  5. When you’re happy locally, push your branch. You can scale the same setup in CI using your existing Storybook build/preview URL.

Heads‑up: To view baselines or approve/reject, sign in to your Applitools account in the same browser that’s running Storybook (you’ll be prompted if not).

Use Case: Ship Multi‑Browser Confidence

One click, many targets. Validate layout and responsive behavior across browsers and viewports – early.

How to do it in Storybook

  1. In ⋯ → Configuration, verify your browsers/devices list (Chrome, Firefox, Safari, Edge; add viewports you care about).
  2. Hit Run for representative stories (states, theming, interactive). Results come back grouped by each browser/device so differences are obvious.
  3. Filter the sidebar by Unresolved and scan. Use Side‑by‑Side or Slider to compare layout at different sizes.
  4. Approve good changes in bulk (⋯ → Review actions) to keep maintenance low.
  5. For broader coverage, run the same setup in CI and expand the matrix.

Why Visual AI > Pixel Diffs for Storybook

Pixel-to-pixel tools are fragile with dynamic content and minor rendering differences. Applitools Visual AI mimics human vision to highlight only meaningful UI changes (structure, layout, content) while ignoring the noise. You can still dial sensitivity up or down with match levels whenever needed. Less flake, more signal.

Try AI-Powered Visual Testing in Storybook Today

Run your first component tests in minutes, review diffs right in Storybook, and expand coverage with Ultrafast Grid – without slowing delivery.

Frequently Asked Questions

What does the Applitools Storybook Addon do?

It runs Applitools visual tests from inside Storybook. You can trigger tests per story or component, then review results and diffs inline with automatic grouping that mirrors your Storybook tree.

Do I need to write tests with the Applitools Storybook Addon?

With the Applitools Storybook Addon, your existing stories become the tests.

How is the Applitools Storybook Addon different from Chromatic visual tests?

Applitools’ Visual AI detects significant visual differences instead of relying only on pixel-to-pixel comparisons. This means you see fewer false positives and spend less time on maintenance.

Applitools also lets you auto-maintain hundreds of tests at once (when you do need to perform test maintenance), run them across multiple browsers and devices instantly, and manage everything in the same platform that’s also running your Playwright and Cypress end-to-end test flows. See our Applitools vs. Chromatic comparison page for a deeper breakdown.

What about performance and CI stability?

Validate locally in Storybook to prevent CI failures. When you’re ready, run the same tests in CI and render broadly with Ultrafast Grid – fast and consistent.

Do I need an Applitools account to use the Storybook Addon?

Yes. You’ll need an active Applitools Eyes account and an API key to use the Applitools Storybook Addon.

Validate Your Figma Designs Before Code Ships with the Applitools Eyes Plugin
https://app14743.cloudwayssites.com/blog/figma-design-testing-applitools-plugin/
Mon, 13 Oct 2025 22:00:00 +0000
Use the Applitools Eyes Figma Plugin to test and compare designs against your live app. Catch visual changes early to confirm UI accuracy.

The post Validate Your Figma Designs Before Code Ships with the Applitools Eyes Plugin appeared first on AI-Powered End-to-End Testing | Applitools.

Applitools Eyes Figma plugin on top of a blurry Figma frame

Even the best design systems can fall short when a layout moves from Figma to code. Fonts shift, buttons resize, and colors look a little off. These small issues result in visual drift and long review cycles between design, development, and QA.

Figma design testing with Applitools Eyes closes that gap. Export Figma frames directly to Eyes to compare what you designed with what you built using the same visual testing tools your QA teams already trust.

Design-to-Code Testing in One Place

The plugin lets you send Figma frames, including individual components, pages, or entire prototypes, straight into Applitools Eyes. Each exported frame becomes a visual baseline, the same kind used in automated tests.

Developers can run their regular visual tests against these baselines to confirm that what they’ve built matches the approved design. Meanwhile, designers can export each new version of a design to see what changed between iterations. Everyone reviews results in the same Eyes dashboard, where visual differences appear side by side.

This shared view reduces guesswork and keeps teams aligned around what “correct” actually looks like.

How to Use the Applitools Eyes Figma Plugin

Getting started takes just a couple of minutes.

1. Install the Plugin

Open the plugin from the Figma Store, or open the Figma desktop app and select Plugins → Manage Plugins → Search “Applitools Eyes” → Install.


2. Connect Your Applitools Account

Launch the plugin and enter your Applitools API key and server URL (default: https://eyes.applitools.com). These settings are saved for future use.

3. Select Figma Frames to Export

You can export a single frame, multiple frames, or a full design. The plugin automatically names them based on your Figma file, or you can customize names with dynamic parameters like {figma_filename}, {figma_page}, or {figma_frame}.

4. Adjust Settings

Optional configurations include:

  • Match level: strict, dynamic, layout, ignore colors, exact, or none
  • Contrast level: accessibility comparison thresholds
  • Auto-accept baselines: mark first exports as approved
  • And more…

5. Export and Review

Lastly, click Export to Eyes to send your selections to Applitools. Frames appear in the Eyes dashboard under the “Figma” environment. Designers and Devs can view differences directly and decide whether to accept or reject them.

Figma plugin overlaid on a screenshot of Applitools Eyes comparing a Figma frame and a visual test in Chrome

Three Use Cases for QA Teams

1. Design-to-Implementation Validation

Once designs are uploaded, developers can link automated tests to the same baseline using the “baseline environment name” provided by the plugin. When they run their tests, Eyes compares the live UI against the design reference.

Result: Teams catch spacing, text, or layout differences before they reach production.

2. Design-to-Design Version Comparison

Designers often revisit earlier layouts or explore small variations. Exporting both versions to Eyes highlights the exact visual differences, making it easy to review and choose the preferred version.

Result: Faster review cycles and fewer overlooked design changes.

3. Shared Visual Baselines for Collaboration

Designers, developers, and QA teams can all access the same Eyes dashboard. Instead of passing screenshots or notes, they can comment on the same visual checkpoints.

Result: Clearer handoffs and fewer miscommunications between design and engineering.

Why Visual Testing from Design to Code Matters

Designs are often reviewed visually, while code is tested functionally. The Figma plugin connects these two disciplines by giving both teams a consistent, visual source of truth.

For designers, it’s a way to confirm that their layouts are faithfully implemented without manually comparing screenshots. For developers, the plugin provides a reference that removes ambiguity about spacing, colors, or typography. For QA teams, it introduces an additional layer of confidence that each release matches approved specifications.

This integration fits naturally into existing workflows: designs are exported once, developers test as usual, and visual checks happen automatically. What was once a manual review step becomes part of the team’s regular quality process.

Try Design-to-Code Testing for Yourself

The Applitools Eyes Figma Plugin brings visual testing into the design process, helping teams maintain consistency from mockup to release. It’s a straightforward way for design and development to share one accurate reference for how an interface should look in order to reduce manual review and give everyone confidence that what’s shipped matches what was designed.

Install the Applitools Eyes Figma Plugin and start validating your designs before code ships.

Frequently Asked Questions

What is the Applitools Eyes Figma Plugin?

The Applitools Eyes Figma Plugin lets you export frames from Figma into Applitools Eyes for visual testing. It helps teams compare their designs against live implementations or across design versions, ensuring the final product matches what was originally designed.

Why should I use the Applitools Eyes Figma Plugin?

The main reasons teams like using the plugin include:
– Detecting visual differences early in development
– Maintaining design consistency from mockup to production
– Reducing manual screenshot comparisons
– Providing a shared visual reference for design, QA, and development teams

How does Figma design testing work with Applitools?

Figma design testing with Applitools works by turning design frames into visual baselines inside the Eyes dashboard. Developers then run automated tests that capture the built UI and compare it to those baselines, highlighting any visual differences between design and implementation.

Can I compare two Figma designs using the plugin?

Yes. You can export two or more design versions to Applitools Eyes and compare them visually. The dashboard highlights differences such as layout changes, spacing updates, or color tweaks, making it easier to review design revisions before sign-off.

Do I need an Applitools account to use the Figma Plugin?

Yes. You’ll need an active Applitools Eyes account and an API key to export Figma frames to Eyes. Once connected, you can reuse your credentials for future exports.

AI Test Automation Platform for Developers: Why Applitools Won in 2025
https://app14743.cloudwayssites.com/blog/ai-test-automation-platform-developer-perspective/
Tue, 17 Jun 2025 12:48:15 +0000
Applitools was named 2025 AI Test Automation Platform of the Year—not for hype, but for helping developers scale testing with Visual AI and real engineering speed.

The post AI Test Automation Platform for Developers: Why Applitools Won in 2025 appeared first on AI-Powered End-to-End Testing | Applitools.


Applitools was named CIO Review’s 2025 AI-Powered Test Automation Platform of the Year—not because we chased buzzwords, but because the platform is fundamentally AI-native, built for engineering scale, and designed to help developers test smarter without slowing down.

For developers, testers, and QA engineers, the award reflects what actually matters:

  • Reducing test flakiness
  • Automating visual and functional checks in parallel
  • Scaling test execution across browsers and devices
  • Plugging into CI/CD pipelines without disrupting existing workflows

Let’s break down what makes this platform stand out from a developer’s perspective.

AI-Native Testing, Not Bolt-On AI

Applitools isn’t a traditional test framework with AI sprinkled on top. It’s purpose-built to use Visual AI plus code-aware intelligence for smarter test coverage. That means:

  • You can catch regressions that DOM diffs would miss
  • You write fewer assertions, yet spot more visual and layout issues
  • You reduce false positives and test flakiness—without relying on brittle selectors

It’s AI-native automation that understands what the user sees, not just what the code renders.

Built for Real Engineering Workflows

Applitools supports every major language and framework, including:

  • Languages: JavaScript, TypeScript, Java, Python, C#, Ruby
  • Frameworks: Cypress, Playwright, Selenium, WebdriverIO, and more
  • Mobile: Appium and native frameworks

You don’t need to rip and replace. Applitools plugs directly into your current test suite with minimal setup and no test rewrites required.

Ultrafast Grid = Multi-Platform Testing Without the Bottlenecks

You run your tests once. Applitools executes them across dozens of browser, OS, and device combinations in parallel—via the Ultrafast Grid, not your CI or local machine.

That means:

  • Fast, scalable cross-browser coverage
  • Smart DOM diffing combined with Visual AI
  • Consistent UX testing across breakpoints and devices

No emulators. No stitched screenshots. Just reliable results, fast.

Seamless CI/CD Integration

Applitools fits natively into DevOps workflows with:

  • GitHub Actions, GitLab, Jenkins, CircleCI, Azure Pipelines, Bitbucket, TeamCity
  • Rich CLI tooling for custom pipelines
  • Git-based test baselines and approval workflows
  • Smart diffing and auto-approvals to keep noisy builds out of your way

For more, explore our Integrations Hub.

This is test automation that moves with your code, not one that slows it down.

Dev Teams Are Reporting…

Here’s what teams have seen after adopting Applitools:

  • Up to 80% reduction in test maintenance overhead
  • 10x faster execution across browsers and devices
  • 70% fewer visual bugs escaping into production
  • Faster code reviews with fewer test-related delays

Whether you’re validating a single feature branch or running thousands of tests in parallel, Applitools is built to support real scale—without compromising on accuracy.

Why This Award Actually Matters

The CIO Review award isn’t about hype. It’s a reflection of what forward-looking engineering teams need from test automation in 2025: more confidence, less friction, and AI that works.

If you’re building modern apps, you deserve modern testing. Applitools gives you a platform that evolves with your code, scales with your team, and delivers confidence without the test fatigue.


Applitools Resources for Developers


Quick Answers

How does Applitools reduce test flakiness in UI automation?

Applitools leverages Visual AI to detect meaningful visual changes, minimizing false positives caused by minor rendering differences. This approach reduces test flakiness and maintenance overhead, allowing developers to focus on actual issues rather than debugging unstable tests.

Can Applitools integrate with my existing test frameworks and CI/CD pipelines?

Yes, Applitools offers seamless integration with popular test frameworks like Selenium, Cypress, Playwright, and Appium. It also supports CI/CD tools such as Jenkins, GitHub Actions, and CircleCI, enabling you to incorporate visual testing into your existing workflows without significant changes. See the integrations.

What is the Ultrafast Grid, and how does it benefit cross-browser testing?

The Ultrafast Grid is Applitools’ cloud-based testing infrastructure that allows you to run visual tests across multiple browsers and devices in parallel. This accelerates cross-browser testing and ensures consistent user experiences across different platforms.

How does Applitools handle dynamic content in applications?

Applitools’ Visual AI intelligently distinguishes between meaningful visual changes and dynamic content variations. It can ignore expected dynamic elements like timestamps or user-specific data, focusing only on unexpected differences that may indicate bugs.

Is coding expertise required to create tests with Applitools?

While Applitools integrates well with code-based test frameworks, it also offers no-code and low-code options through its Autonomous platform. This allows team members with varying technical skills to create and maintain tests, promoting broader collaboration in the testing process. See how Applitools expands test automation across teams.

The post AI Test Automation Platform for Developers: Why Applitools Won in 2025 appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Visual, Functional, and Autonomous Testing—All in One https://app14743.cloudwayssites.com/blog/visual-functional-autonomous-testing-all-in-one/ Fri, 23 May 2025 14:47:55 +0000 https://app14743.cloudwayssites.com/?p=60594 Applitools combines proven Visual AI, intelligent test automation, and a scalable platform to help teams ship with speed and confidence. Here’s how.

The post Visual, Functional, and Autonomous Testing—All in One appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
One Platform. Three Testing Superpowers.

TL;DR: Applitools brings visual, functional, and autonomous testing together in a single AI-powered platform. Backed by 11+ years of refinement and a dataset of 4 billion real-world images, our Visual AI delivers unmatched accuracy and reliability for enterprise-grade software testing.

Testing today isn’t just about coverage—it’s about confidence, speed, and scaling quality across teams. Whether you’re a developer chasing faster feedback, a QA lead reducing maintenance overhead, or a product owner focused on release velocity, Applitools helps modern teams deliver software that looks right, works right, and evolves with ease.

Here’s how Visual, Functional, and Autonomous Testing all come together in one powerful platform.

Trusted Visual AI with Proven Accuracy

Applitools sets the standard in Visual Testing. Our Visual AI engine delivers 99.9999% accuracy, eliminating false positives and catching bugs others miss.

  • 5.8x more efficient than pixel-based tools
  • Detect both functional and visual bugs in a single test
  • Works with all major frameworks: Selenium, Cypress, Playwright, and more

We didn’t just add AI—we’ve spent 11+ years perfecting it.

A Complete Platform for End-to-End Testing

Applitools goes far beyond screenshots. Our Intelligent Testing Platform includes Autonomous Test Creation, Visual Validation, Cross-Browser + Device Testing, and Accessibility Testing—all in one cloud-based solution.

  • Run tests across browsers, devices, and screen sizes in parallel
  • Built-in accessibility and compliance testing
  • Fully scalable with enterprise-grade performance

Less Test Maintenance with Self-Healing, Smart Grouping & Predictive Analytics

Spend less time fixing broken tests and more time delivering value. Applitools minimizes test upkeep so your team can focus on building.

Collaborative Testing: How Developers, PMs, Designers & Marketers All Work Smarter with Applitools

Testing shouldn’t be a bottleneck—or limited to just QA. Applitools empowers developers, designers, product managers, and even marketers to collaborate with ease.

  • Intuitive UI for reviewing results and managing baselines
  • Seamless sharing of results and issue tracking
  • Codeless and code-based authoring, no deep technical expertise needed

More than a Decade of AI Leadership

AI isn’t new to us—it’s the foundation of our platform. Unlike newer tools making AI promises, we’ve been building, training, and refining Visual AI to solve real testing challenges at scale for more than a decade.

Seamless Integrations & Dev Experience

Great testing fits into your workflow—not the other way around. Our AI-powered test automation works with your tools, languages, and CI/CD pipelines to scale quality without slowing you down. Applitools integrates with:

  • Every major framework: Selenium, Cypress, Playwright, Puppeteer, WebdriverIO
  • CI/CD tools: GitHub Actions, Jenkins, GitLab, Azure DevOps
  • SDKs for Java, JavaScript, Python, C#, and more

Whether you’re in code or no-code workflows, we plug into your stack and scale with you.

24/7 Support That Doesn’t Disappear

Whether you’re mid-sprint or troubleshooting a release, help is always within reach. Get expert guidance anytime—no hoops, no waiting.

  • Around-the-clock global technical support
  • Extensive documentation, how-tos, and real-time guidance
  • Active community forum and dedicated Customer Success Managers (not just for enterprise)

Compare that to competitors with limited support, slow response times, or no dedicated resources unless you’re a top-tier customer.

Smart Investment, Real Value

Our pricing is flexible, predictable, and scales with your needs. You’ll see ROI fast:

  • Save hours of test maintenance per sprint
  • Eliminate manual bug hunts and false positives
  • Deliver faster releases without compromising quality

Explore our current pricing structure, or speak with a testing specialist to build a package that’s right for your team.

“We reduced our testing time from days to hours. Applitools changed how we think about QA.”
— QA Lead, Global Retail Brand

Visual, Functional, and Autonomous Testing: The Applitools Advantage

We combine Visual AI, Autonomous Testing, and a developer-friendly platform into one powerful, scalable solution. With Applitools, your team gets:

  • Smarter test creation
  • Less maintenance
  • Better collaboration
  • Faster releases
  • And trusted results every time

See What’s New with Applitools Autonomous and What’s Coming with Applitools Eyes

Ready to Test Smarter?

In a crowded automation landscape, it’s not enough to have “AI-powered” features. You need real results. With over a billion visual tests run and trusted by leading enterprises across industries, Applitools isn’t experimenting with AI—it’s already delivering.

Whether you’re starting fresh or looking to scale smarter, Applitools gives your team the tools to automate with confidence and speed.

Ready to see it in action? Start your free trial, book a personalized demo, or explore the platform today.

Applitools helps you test like it’s 2025. Join the world’s top teams already doing it.

Quick Answers

What is the “Intelligent Testing Platform” offered by Applitools?

Applitools’ Intelligent Testing Platform merges Visual AI, Autonomous Test Creation, cross-browser/device testing, and accessibility/compliance validation—all in one cloud-based solution. It enables teams to test comprehensively while minimizing maintenance and scaling efficiently.

How does Applitools reduce maintenance overhead in test automation?

The platform includes self-healing locators, root cause analysis, smart grouping, and predictive analytics. These features automatically adapt tests to UI changes and make debugging smoother—meaning less flaky tests and less time spent on manual test upkeep.

Who can benefit from using Applitools beyond just QA engineers?

Applitools supports developers, designers, product managers, and marketers, not only QA. A user-friendly interface allows easy sharing of results and issue tracking. Additionally, you can author tests using both codeless and code-based methods—so even non-technical team members can participate effectively.

Who uses Applitools, and how has its AI been developed?

Applitools has been training and developing its AI models for over 11 years, using a dataset of more than 4 billion images from real applications. Today, the platform is trusted by 400+ enterprise customers across industries including finance, retail, media, B2B tech, and healthcare. This breadth of usage ensures highly accurate, production-grade AI for visual and functional testing at scale.

The post Visual, Functional, and Autonomous Testing—All in One appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Leveraging Applitools for Seamless Visual Testing in Playwright https://app14743.cloudwayssites.com/blog/leveraging-applitools-for-seamless-visual-testing-in-playwright/ Fri, 31 Jan 2025 21:29:23 +0000 https://app14743.cloudwayssites.com/?p=59583 As applications become more complex and UI consistency becomes critical, ensuring that the user interface appears as expected across multiple environments is key. Applitools Eyes, when integrated with the Playwright...

The post Leveraging Applitools for Seamless Visual Testing in Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

As applications become more complex and UI consistency becomes critical, ensuring that the user interface appears as expected across multiple environments is key. Applitools Eyes, when integrated with the Playwright SDK, provides a powerful, efficient, and streamlined approach to visual testing in your Playwright tests. The combination of Playwright, a popular end-to-end testing framework, and Applitools Eyes, a visual AI-powered testing tool, makes visual validation easier, faster, and more scalable.

Applitools Eyes is a visual AI-powered testing tool that helps address these UI consistency challenges. It uses advanced AI-driven image comparison to detect visual differences. Unlike traditional visual testing tools that perform pixel-by-pixel image comparisons, Applitools Eyes uses AI algorithms to determine if the visual differences are actual bugs.

Applitools Playwright SDK introduces several improvements designed to streamline the process of setting up and running visual tests, making the experience more efficient and user-friendly.

In this blog, we will discuss when it is appropriate to use visual testing with Playwright and which cases will be suitable for using Applitools.

Overview of Visual Testing

Visual testing is a technique performed to ensure that the visual appearance of a given website or application matches the provided design and layout. This involves comparing the actual UI against a standard or reference UI image. This type of testing primarily focuses on detecting visual anomalies, including alignment issues, font problems, color discrepancies, broken images, or structural shifts that can disrupt the user experience.

How Visual Testing Differs from Functional Testing

Visual testing and functional testing are both crucial if the goal is to deliver a high-quality application; visual testing checks the visual appeal of the application, while functional testing verifies how the application functions.

Let’s see some major differences between visual and functional testing.

Aspect | Visual Testing | Functional Testing
Purpose | Ensures the layout and design of the UI look as expected. | Verifies that the application’s functions work as intended.
Scope | Focuses on the appearance of UI elements (position, dimensions, colors, fonts, etc.). | Focuses on the application’s behavior, verifying logic, workflows, and system responses.
Approach | Uses image comparison tools to detect visual discrepancies. | Uses test scripts or user simulations to validate functionality.
Techniques | Relies on image or screenshot comparison against baseline images. | Involves input/output verification, user interaction simulation, and data validation.
Tools | Visual testing tools like Applitools Eyes, or the snapshot comparison built into Playwright and Cypress. | Functional testing frameworks like Playwright, Cypress, and Selenium.
Issues Detected | Identifies layout errors, pixel misalignments, broken images, wrong colors, or responsive design bugs. | Identifies logical errors, broken workflows, malfunctioning features, or incorrect data processing.
Use Case | Suitable for detecting unintended UI changes during development. | Suitable for validating features and ensuring correct application logic.

Before delving into how we can perform visual testing using Playwright and Applitools, let’s review some of the new improvements that Applitools has introduced.

What’s New In Applitools Playwright SDK?

The Applitools Playwright SDK is a library that integrates Applitools’ visual testing capabilities with the Playwright automation framework.

Let’s explore the improvements Applitools has made to streamline the process of setting up and running visual tests, making the experience more efficient and user-friendly.

Here’s a detailed breakdown of what’s new in the updated SDK:

Test Fixtures:

  • Test fixtures centralize and reuse setup code across multiple tests, making the setup for visual tests more consistent and reducing redundant configuration.
  • This feature is particularly valuable when running tests on multiple pages or scenarios, as it simplifies the initialization of Applitools Eyes in each test.

CLI Onboarding:

  • The Command-Line Interface (CLI) simplifies the initial setup process by automating configuration steps, helping users quickly integrate Applitools Eyes with Playwright. 
  • This is especially helpful for new users, reducing the complexity of the setup and enabling faster onboarding into visual test creation.

Config Object Setup:

  • The configuration object setup automates the insertion of Applitools Eyes settings into the Playwright configuration file (e.g., playwright.config.js or playwright.config.ts). 
  • This feature eliminates manual setup, reducing the risk of errors, ensuring accurate configuration from the outset, and saving developers valuable time.

Custom HTML Reporter:

  • The custom HTML reporter enhances Playwright’s default test report by integrating visual test results from Applitools Eyes. 
  • This allows developers to view a direct comparison between the baseline image and the current test screenshot, helping identify visual regressions more effectively. 
  • By adding visual results to the standard Playwright report, it simplifies the review process, offering a comprehensive overview of the test outcomes.

Before we deep dive into how Applitools helps in visual testing, let’s first see how we can perform visual testing using Playwright.

Visual Testing Using Playwright

Playwright comes equipped with tools to make visual testing effortless, allowing developers to take and compare screenshots of web pages or specific web page elements. 

Here’s how visual testing works in Playwright:

  1. First Run: Playwright captures and saves a screenshot of the page or specific UI elements. This is known as the base image.
  2. Subsequent Runs: In the next run, Playwright takes a new screenshot and compares it against the base image.
    • If there are no differences, the test passes.
    • If differences exist, the test fails, flagging the areas where changes occurred.
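The capture-and-compare loop above can be sketched as plain JavaScript (a simplified conceptual model, not Playwright’s actual implementation):

```javascript
// Simplified model of snapshot-based visual testing:
// the first run stores a baseline, later runs compare against it.
const baselines = new Map();

function matchSnapshot(name, screenshot) {
  if (!baselines.has(name)) {
    baselines.set(name, screenshot); // first run: save the base image
    return 'baseline-created';
  }
  // subsequent runs: compare against the stored base image
  return baselines.get(name) === screenshot ? 'pass' : 'fail';
}

console.log(matchSnapshot('home', 'img-v1')); // first run creates the baseline
console.log(matchSnapshot('home', 'img-v1')); // identical screenshot: pass
console.log(matchSnapshot('home', 'img-v2')); // changed screenshot: fail
```

In real Playwright, baselines are stored on disk alongside the test file, and you can regenerate them deliberately with `npx playwright test --update-snapshots`.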

Prerequisites

Ensure the following tools are installed:

Node.js: Download and install from Node.js.

Visual Studio Code (Optional): Recommended IDE for coding.

Install Playwright:

npm i @playwright/test

Write a Simple Visual Regression Test Using Playwright

Create a new JavaScript file in your test folder, e.g., demo.spec.js.

Use the page.screenshot() method to capture the screenshot and the expect.toMatchSnapshot() assertion from the @playwright/test module to compare images.

const { test, expect } = require('@playwright/test'); 
test('Visual Regression Test Example', async ({ page }) => { 
   // Navigate to the website 
   await page.goto('https://playwright.dev'); 
   // Capture a screenshot of the page 
   const screenshot = await page.screenshot(); 
   expect(screenshot).toMatchSnapshot(); 
}); 

When we execute the above code for the first time, the test case fails because no baseline exists yet: Playwright captures and saves a screenshot of the page or specific UI elements, known as the base image.

When we execute the same test case again, it will pass.

Visual Regression Test with maxDiffPixels option 

There may be cases where minor differences between the images cause the test case to fail. To handle this situation, Playwright provides the maxDiffPixels option. It allows you to specify the maximum number of pixels that can differ between two images for the comparison to still be considered successful.

const { test, expect } = require('@playwright/test');
test('Visual Regression Test Example', async ({ page }) => {
  // Navigate to the website
  await page.goto('https://playwright.dev');
  // Capture a screenshot of the page
  const screenshot = await page.screenshot();
  expect(screenshot).toMatchSnapshot({ maxDiffPixels: 100 });
}); 

Visual Regression Test with Threshold option

Passing threshold is another option to avoid failing the test case over small differences. The threshold value (between 0 and 1) controls how different an individual pixel’s color can be before it counts as a mismatch. This is particularly useful for handling slight differences that occur due to rendering variations.

const { test, expect } = require('@playwright/test');
test('Visual Regression Test Example', async ({ page }) => {
  // Navigate to the website
  await page.goto('https://playwright.dev');
  // Capture a screenshot of the page
  const screenshot = await page.screenshot();
  expect(screenshot).toMatchSnapshot({ threshold: 0.5 });
}); 
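To make the difference between the two options concrete: maxDiffPixels caps the absolute number of differing pixels, while threshold sets how different an individual pixel’s color may be before it counts as a difference at all. A simplified sketch of these semantics (an illustration only, not Playwright’s actual comparator):

```javascript
// Simplified sketch of how `threshold` and `maxDiffPixels` interact.
// Each image is an array of grayscale pixel values in [0, 255].
function imagesMatch(base, actual, { threshold = 0.2, maxDiffPixels = 0 } = {}) {
  let diffPixels = 0;
  for (let i = 0; i < base.length; i++) {
    // `threshold` is a per-pixel color tolerance (0 = exact, 1 = anything goes)
    if (Math.abs(base[i] - actual[i]) / 255 > threshold) diffPixels++;
  }
  // `maxDiffPixels` caps how many differing pixels are allowed overall
  return diffPixels <= maxDiffPixels;
}

const base   = [10, 10, 10, 200];
const actual = [10, 10, 10,  20]; // one pixel changed drastically

console.log(imagesMatch(base, actual));                       // false: 1 differing pixel, 0 allowed
console.log(imagesMatch(base, actual, { maxDiffPixels: 1 })); // true: within the pixel budget
console.log(imagesMatch(base, actual, { threshold: 0.8 }));   // true: 180/255 ≈ 0.71 is below 0.8
```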

So far we have seen how we can automate the visual UI using Playwright. In the next section you will see how we can use Applitools Eyes and Playwright SDK together to automate visual testing.

Visual Testing using Applitools Eyes and Playwright SDK

Visual testing is a crucial aspect of modern software quality assurance. It ensures that a web application’s UI renders correctly across different devices, screen sizes, and browsers. Tools like Applitools Eyes and frameworks like Playwright SDK simplify this process by providing robust solutions for automated visual regression testing.

Below are the steps to set up Applitools Eyes and Playwright SDK.

Install Playwright

npm init playwright@latest

Install Applitools Eyes SDK 

npm install -D @applitools/eyes-playwright

Add the below line to your .spec.js file

To configure your project for Applitools Eyes, we have to add the below line.

import { test } from '@applitools/eyes-playwright/fixture';

In the next section, you will see examples of how to use the SDK with Playwright tests.

Before moving into the details of the various match levels, let’s look at one basic example to understand how visual testing is done in Applitools, and the method used to capture and validate a checkpoint in your application’s UI.

Below is code that integrates Applitools Eyes with Playwright to perform a visual test.

import { test } from '@applitools/eyes-playwright/fixture';

test('My first visual test Using Applitools with matchLevel: Dynamic', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  // Visual check
  await eyes.check('Landing Page', {
    fully: true,
    matchLevel: 'Dynamic',
  });
});

In the above code we mainly use the method eyes.check() and two options: { fully: true } and { matchLevel: 'Dynamic' }.

eyes.check(): Performs a visual snapshot of the page or a specific element and compares it with the baseline image stored in Applitools.

Options:

  • fully: true:
    • Captures the entire page, including scrollable sections.
    • Ensures that all content on the page is included in the visual test.
  • matchLevel: ‘Dynamic’:
    • A matching algorithm that focuses on the content while ignoring dynamic changes, such as text or layout differences.
    • Useful for pages with frequently changing content, like user-generated text or dynamic data.

Below are the options that we can pass in our test case. These options provide flexibility and precision, enabling robust visual testing tailored to different scenarios and challenges.

This table summarizes each configuration, making it easier to understand their purposes.

Option | Purpose | Example Use Case
matchLevel: 'Dynamic' | Ignores minor layout or content changes (e.g., dynamic text, animations). | Testing pages with frequent dynamic content (e.g., dates, real-time updates) without raising false alarms.
matchLevel: 'Strict' | Detects even small pixel or layout changes for precise visual validation. | Validating critical UI components or designs where every pixel matters (e.g., brand logos, product pages).
region: component | Focuses the visual check on a specific component or region of the page. | Testing a new or modified UI component in isolation (e.g., buttons, headers, or form fields).
fully: true | Captures and validates the entire page, including content outside the viewport. | Ensuring the layout and content of a long webpage or scrolling section is consistent.
ignoreRegions: [dynamicContent] | Excludes specific regions with dynamic content from the validation. | Ignoring areas like ads, live feed sections, or widgets with fluctuating content (e.g., real-time graphs).

Examples with different matchLevel

The match level determines how the captured visual output is compared with the baseline image. Let’s see examples with the different options.

1. Visual Testing with matchLevel: Strict

This ensures pixel-perfect comparison between the baseline and current screenshot, highlighting any visual difference, no matter how small.

test('My first visual test Using Applitools with matchLevel: Strict', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await eyes.check('Landing Page', {
    fully: true,
    matchLevel: 'Strict',
  });
});
  • Purpose: Validates the entire page with Strict match level.
  • Strict Match Level: Identifies even small differences, such as minor pixel or layout changes, ensuring high precision in visual validation.

2. Visual Testing with matchLevel: Dynamic

Focuses on content consistency by ignoring minor layout differences, such as text movement, while ensuring key visual elements remain unchanged.

In the example below, even though the text color changes after clicking the ‘?diff2’ link, the test case still passes because we have set matchLevel: Dynamic. This ensures the visual test focuses on the content and structure of the text rather than superficial changes like color.

test('My first visual test Using Applitools with matchLevel: Dynamic', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await page.getByRole('link', { name: '?diff2' }).click();
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Dynamic',
  });
});
  • Purpose: Validates the entire page with Dynamic match level.
  • Dynamic Match Level: Ignores minor content/layout differences (e.g., text changes, animations) and focuses on high-level structural comparisons.
  • fully: true: Captures a screenshot of the full page, not just the visible viewport.

3. Visual Testing for a Specific Region/Component

Limits visual comparisons to a particular area or component of the application.

test('My first visual test Using Applitools with particular region/Component', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  const component = page.locator('.fancy.title.primary');
  await eyes.check('My Component', {
    region: component,
  });
});
  • Purpose: Tests only a specific component or region on the page.
  • region: Focuses the validation on the area defined by the component locator (.fancy.title.primary), instead of the entire page.

4. Visual Testing with Ignored Regions

Excludes specific areas from comparison, preventing false positives caused by expected dynamic changes in those regions.

test('Visual test Using Applitools with ignoring the region', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  const dynamicContent = page.locator('.fancy.title.primary');
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Strict',
    ignoreRegions: [dynamicContent],
  });
});
  • Purpose: Tests the page while ignoring specific dynamic regions.
  • ignoreRegions: Excludes certain areas from the visual validation (e.g., areas with dynamic content like dates, ads, or animations).
  • fully: true: Captures the entire page for validation.

About dynamic match level

Dynamic match level is a feature in Applitools Eyes that verifies text by matching it against predefined or custom patterns, ensuring the content adheres to a specific format. It focuses on format validation rather than content changes, making it ideal for dynamic text like dates, emails, or numbers.

Types of dynamic match levels

Dynamic match levels allow for flexible verification of text by matching it against predefined or custom patterns. The default types include:

  • TextField: Text inside input fields.
  • Number: Numeric values like ZIP codes or phone numbers.
  • Date: Validates text as a date in proper format.
  • Link: Hyperlinks or URLs.
  • Email: Checks for a valid email address format.
  • Currency: Recognizes monetary values.

These types ensure accurate validation by focusing on the format rather than the content changes, such as dynamic updates to dates or numbers.
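The idea behind these types can be illustrated with plain regular expressions (a conceptual sketch only, not the Applitools engine): the check passes as long as the new text still conforms to the expected pattern, even though the literal content has changed.

```javascript
// Conceptual sketch of format-based ("dynamic") matching:
// baseline and actual text "match" if both fit the same expected pattern.
const patterns = {
  date:  /^\d{4}-\d{2}-\d{2}$/,        // e.g. 2025-01-31
  email: /^[^\s@]+@[^\s@]+\.[^\s@]+$/, // simple email shape
  zip:   /^\d{5}(-\d{4})?$/,           // US ZIP / ZIP+4 (a custom-pattern example)
};

function dynamicMatch(type, baselineText, actualText) {
  const re = patterns[type];
  // The content may change freely; only the format has to stay valid.
  return re.test(baselineText) && re.test(actualText);
}

console.log(dynamicMatch('date', '2025-01-31', '2026-03-19')); // true: both are valid dates
console.log(dynamicMatch('date', '2025-01-31', 'yesterday'));  // false: the format broke
```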

Custom dynamic pattern

Here are the steps to create a custom dynamic pattern in the Applitools dashboard:

  1. Open the Settings window: In the Page Navigator, select Apps & Tests. In the list of applications on the left, hover over an application and click > Settings.
  2. Add a custom type: Click Add custom type.
  3. Define the custom pattern: Enter a name and a regex pattern. For example, you can set a custom pattern for a ZIP code; once the custom pattern is created, its name can be used in the code to reference it.
  4. Save the custom pattern: Click Add to save the entered pattern.
  5. Apply the settings: Once all required patterns have been selected, click Apply settings.

Applitools Dashboard: Execute visual testing using the Applitools Playwright SDK

Applitools’ Dashboard is a powerful interface designed to manage and analyze test results efficiently, especially for visual testing and monitoring user interfaces.

Below are the steps we normally follow when executing a visual test case with the Applitools Dashboard.

Precondition: Run the below command to export your API key.

export APPLITOOLS_API_KEY='your_api_key_here'

Once the key is exported, the next step is to execute the test cases using the below command.

npx playwright test --ui tests/visual.spec.js

First Run: Applitools Eyes captures a screenshot of the page or specific UI elements and saves it as the baseline image.

Subsequent Runs: On each following run, the new screenshot is compared against the baseline.

  • If there are no differences, the test passes.
  • If differences exist, the result is flagged as unresolved in the dashboard, where you can accept the change as a new baseline or reject it as a bug.

In the screenshot of the Applitools Dashboard below, you can see all four of the above test cases executed successfully.

Applitools Playwright SDK with Example in Detail

Let’s take the example of a particular component on the page. In the screenshot below, the check focuses on a specific component or region of the page: only the ‘Hello World’ part is tested, so any change in this component will cause the test case to fail.

In the example below, the component’s text has been updated to ‘Happy World’.

test('Visual test Using Applitools with Updating Component UI', async ({ page, eyes }) => {
  await page.goto('https://app14743.cloudwayssites.com/helloworld/');
  await page.getByRole('link', { name: '?diff2' }).click();
  await eyes.check('Homepage', {
    fully: true,
    matchLevel: 'Strict',
  });
});

When we execute the above code, it does not pass, because the UI of the component changes after clicking the ‘?diff2’ link.

To fix this, we mark the result as resolved in the Applitools Dashboard. Once we accept the change, the new screenshot becomes the base image.

Now when we execute the test case again, it gets executed successfully with the updated UI.

When Playwright’s Visual Testing Features Are Useful 

Playwright’s built-in visual testing features are well-suited for scenarios where:

Simple Visual Comparisons:

  • Comparing entire pages or specific UI components with baseline images for detecting pixel-level differences.
  • Example: Ensuring the homepage layout hasn’t shifted after a CSS update.

Static UIs:

  • Testing applications with minimal dynamic or animated content, as these are less prone to false positives caused by animations or transient elements.

Fast Feedback:

  • When quick pixel-by-pixel comparisons in CI pipelines are sufficient without advanced AI-powered analysis.
  • Example: Smoke testing in a fast-moving development cycle.

Budget-Friendly:

  • Playwright’s native screenshot and comparison capabilities don’t require additional tools or subscriptions.

When Playwright’s Visual Testing May Fall Short

Dynamic Content:

  • UIs with dynamic or frequently changing elements (e.g., time, randomized content, or animations) can cause false positives due to exact pixel mismatches.
  • Example: A dashboard with live-updating graphs.

Complex Visual Changes:

  • Playwright’s threshold-based comparison may miss subtle or semantic changes that don’t exceed the defined pixel difference threshold.
  • Example: Slightly altered typography or color changes.

Ignored Regions:

  • While Playwright allows element-specific testing, ignoring specific dynamic regions within a larger page requires custom logic.
  • Example: Excluding advertisements or live widgets from visual comparison.
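For reference, Playwright’s built-in `toHaveScreenshot` assertion does provide a `mask` option that blacks out the given locators before comparison, along with pixel-difference tolerances. It masks regions wholesale rather than understanding them, but it covers simple cases. A sketch (the URL and selectors are hypothetical):

```javascript
const { test, expect } = require('@playwright/test');

test('homepage ignoring dynamic regions', async ({ page }) => {
  await page.goto('https://example.com/');
  await expect(page).toHaveScreenshot('homepage.png', {
    // Black out ads and live widgets so their pixels are excluded.
    mask: [page.locator('.ad-banner'), page.locator('.live-widget')],
    // Tolerate up to 1% of pixels differing from the baseline.
    maxDiffPixelRatio: 0.01,
  });
});
```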

How Applitools Handles Situations Better

Applitools offers superior visual testing capabilities compared to Playwright, especially when handling dynamic content, ensuring pixel-perfect validation, focusing on specific components, and reducing false positives. 

Dynamic Content Handling:

  • Match Levels: Applitools’ AI-powered matchLevel options (e.g., Strict, Layout, Content) allow intelligent comparison by focusing on structure and ignoring irrelevant pixel differences.
  • Example: Testing a real-time dashboard with dynamic data but a consistent layout.

Ignored Regions:

  • Applitools allows you to define ignored regions to exclude specific parts of a page from comparison (e.g., ads or timestamps).
  • Example: Excluding a “current time” widget from visual testing.
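Continuing the fixture-style `eyes.check` call shown earlier, an ignored region and a relaxed match level might be configured as below. The option names here are assumptions modeled on the earlier snippet; consult the Eyes SDK documentation for your version:

```javascript
// `test` is the Eyes-enabled Playwright fixture used in the earlier example.
test('dashboard ignoring the clock widget', async ({ page, eyes }) => {
  await page.goto('https://example.com/dashboard');
  await eyes.check('Dashboard', {
    fully: true,
    matchLevel: 'Layout',             // compare structure, not exact pixels
    ignoreRegions: ['#current-time'], // exclude the live clock from comparison
  });
});
```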

Advanced Visual Validation:

  • Features like dynamic regions and AI-based semantic analysis help detect visual regressions that go beyond pixel-by-pixel differences.
  • Example: Identifying a button misalignment that might not be detected by Playwright’s threshold settings.

Visual Testing Across Components:

  • Allows you to target specific regions or components for testing without manually cropping or capturing screenshots.
  • Example: Testing the styling of a navigation bar or modal dialog.

Conclusion

Playwright’s built-in visual testing features are suitable for simple cases where you need pixel-precise image comparisons of static UIs. However, they may struggle with complicated designs, whether that means minor changes within the site or more intricate design requirements.

Applitools Eyes, which employs artificial intelligence, is better suited to overcoming these difficulties. It excels with dynamic content, readily identifies subtle visual regressions, and adds capabilities such as ignored regions and cross-browser testing. For teams with significant testing requirements, where Playwright’s built-in visual testing may fall short for complex, large-scale applications, Applitools serves as an invaluable complement. It ensures and elevates the visual quality of dynamic applications, making it an ideal choice for robust testing and quality assurance.

About the Author

Kailash Pathak (Applitools Ambassador | Cypress Ambassador)

Senior QA Lead Manager with over 15 years of experience in QA engineering and automation. Kailash holds certifications including PMI-ACP®, ITIL®, PRINCE2 Practitioner®, ISTQB, and AWS (CFL).

As an active speaker and workshop conductor, Kailash shares his expertise through blogs on platforms like Medium, Dzone, Talent500, The Test Tribe, and his personal site https://qaautomationlabs.com/


Quick Answers

How do I add Applitools Eyes to an existing Playwright project?

Follow the Playwright Quick Start to install the SDK, run CLI setup, and execute your first visual test (https://app14743.cloudwayssites.com/tutorials/playwright).

What’s the difference between Playwright functional checks and visual testing?

Functional asserts confirm behavior (clicks, responses, state), while visual testing validates the rendered UI—catching regressions in layout, fonts, colors, and dynamic content that functional checks overlook.

How do I run cross-browser visual tests quickly in CI?

Use the Ultrafast Grid to fan out visual checkpoints across real browsers/devices in parallel without maintaining an in-house grid (https://app14743.cloudwayssites.com/ultrafast-grid).

How do I reduce flakiness in Playwright tests?

Favor stable visual checkpoints over brittle locator assertions, use consistent viewports and network conditions, and rely on Visual AI to ignore non-material diffs (https://app14743.cloudwayssites.com/platform/validate/visual-ai/).

The post Leveraging Applitools for Seamless Visual Testing in Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
What is Visual Testing? https://app14743.cloudwayssites.com/blog/visual-testing/ https://app14743.cloudwayssites.com/blog/visual-testing/#respond Wed, 07 Aug 2024 16:41:12 +0000 https://app14743.cloudwayssites.com/blog/?p=5069 Visual testing evaluates the visible output of an application and compares that output against the results expected by design. You can run visual tests at any time on any application with a visual user interface.

The post What is Visual Testing? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Visual testing

Learn what visual testing is, why visual testing is important, how to catch visual bugs, the differences between visual and functional testing, and how you can get started with automated visual testing today.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch “visual bugs” in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can help speed this visual testing up and reduce errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today’s world of HTML, web developers create pages that appear on a mix of browsers and operating systems. Because HTML and CSS are standards, front-end developers want to feel comfortable with a ‘write once, run anywhere’ approach to their software, which often translates to “let QA sort out the implementation issues.” QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts—and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen—more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this was your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren’t cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. Rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional tests cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. There are 21 visual elements by my count: various icons and text. (This ignores iOS elements at the top like the WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code
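In a functional tool, those per-element checks look something like the sketch below: five assertions for just one of the 21 elements. The locator and expected values are hypothetical:

```javascript
const { test, expect } = require('@playwright/test');

test('heart icon renders correctly', async ({ page }) => {
  await page.goto('https://example.com/feed');
  const heart = page.locator('[aria-label="Like"]');

  await expect(heart).toBeVisible();      // 1. visible (true/false)
  const box = await heart.boundingBox();
  expect(box.x).toBe(16);                 // 2. upper-left x coordinate
  expect(box.y).toBe(640);                //    ...and y coordinate
  expect(box.height).toBe(24);            // 3. height
  expect(box.width).toBe(24);             // 4. width
  await expect(heart).toHaveCSS(          // 5. background color
    'background-color', 'rgba(0, 0, 0, 0)');
});
```

Now repeat that block for the other 20 elements, and you have the 105 lines computed above.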

Even with all this assertion code, you wouldn’t be able to detect all visual bugs, such as a visual element that can’t be accessed because it’s covered up (which blocked revenue in the examples above from Yelp, Southwest, United, and Virgin Atlantic). And you’d still miss subtleties like the brand logo or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions for EACH combination of OS, browser, font size, screen size, and screen orientation.

You could end up with thousands of lines of assertion code, any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, browser orientation, and browser dimensions. Then you continue on to the other combinations. And that’s where the huge amount of test effort lies: not in the functional testing, but in the inspection of visual elements across every combination of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, macOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions from XGA to 4K.

If you’re doing the math: the browsers running on each platform give a total of 21 combinations, multiplied by the two orientations of the ten mobile resolutions (2×10 = 20) plus the 18 desktop display resolutions.

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing—for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test
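The arithmetic above is easy to script, which also makes it easy to re-run with your own platform counts (the numbers here are the article’s assumptions, not a survey of any real user base):

```javascript
// Configuration-matrix math from the article.
const browserOsCombos = 21;    // supported browsers summed across the 5 OSes
const mobileConfigs = 2 * 10;  // 2 orientations x 10 mobile resolutions
const desktopConfigs = 18;     // desktop/laptop resolutions

const screenConfigs = browserOsCombos * (mobileConfigs + desktopConfigs);
console.log(screenConfigs);    // 798 configurations per page or screen

const screensInApp = 100;
console.log(screenConfigs * screensInApp); // 79800 for a 100-screen app
```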

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium. As your script drives your app, each action changes the visual elements on the page, so each step of a functional test creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools visually check the visual appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First-generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
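In sketch form, that algorithm amounts to the following (a simplified illustration, not any particular tool’s implementation):

```javascript
// Naive snapshot comparison: flag any pixel whose RGBA bytes differ.
// `baseline` and `latest` are flat RGBA byte arrays of equal length.
function countPixelDiffs(baseline, latest) {
  if (baseline.length !== latest.length) {
    throw new Error('Images must have identical dimensions');
  }
  let diffs = 0;
  for (let i = 0; i < baseline.length; i += 4) {
    // Compare the four bytes (R, G, B, A) of each pixel pair.
    if (
      baseline[i] !== latest[i] ||
      baseline[i + 1] !== latest[i + 1] ||
      baseline[i + 2] !== latest[i + 2] ||
      baseline[i + 3] !== latest[i + 3]
    ) {
      diffs++;
    }
  }
  return diffs;
}

// A single anti-aliasing shade shift (200 -> 201) is enough to "fail".
console.log(countPixelDiffs([200, 200, 200, 255], [201, 200, 200, 255])); // 1
```

This is exactly why the technique is brittle: the comparison has no notion of an element, only of byte equality.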

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel-perfect testing”.

Sounds like a good idea, right?

What are the Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences. And that’s just static content; the actual content can vary between any two renderings. As a result, a tool that expects exact pixel matches between two images can be flooded with spurious pixel differences.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while you might think snapshot testing makes intuitive sense, practitioners like you are finding that the conditions for running successful bitmap comparisons require a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way – but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, when testing on even a single browser and operating system, you must:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors.
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee the locations of text wrapping between two browsers using the same specifications. The text can break differently between two browsers, even with identical screen sizes.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

When you have dynamic content that shifts over time — news, ads, user-submitted content — where you want to check to ensure that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets, likes, retweets, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI—specifically computer vision—for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as a core engine for visual comparison. Typically these algorithms are used to identify objects with images, such as with facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing combines a learning algorithm to interpret the relationship between a rendered page and the intended display of visual elements with actual visual elements and locations. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use algorithms instead of pixels to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case      Snapshot Testing   Visual AI
Cross-browser testing        No                 Yes
Account balances             No                 Yes
Mobile device status bars    No                 Yes
News content                 No                 Yes
Ad content                   No                 Yes
User-submitted content       No                 Yes
Suggested content            No                 Yes
Notification icons           No                 Yes
Content shifts               No                 Yes
Mouse hovers                 No                 Yes
Cursors                      No                 Yes
Anti-aliasing settings       No                 Yes
Browser upgrades             No                 Yes
Some AI-powered test tools have been measured at a false positive rate of 0.001% (or 1 false positive in every 100,000 checks).

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Running the first baseline of rendering and functional tests on a single combination is sufficient to guide an AI-powered tool to test results across the range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results by awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool performs a layout comparison, the layout framework matters, not the content. Layout comparison ignores content differences; instead, it validates the existence of the content and its relative placement. Compare that with a bitmap comparison of the same two pages (also called “exact comparison”):

Literally every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of the accuracy of AI-powered automated visual testing tools, they can be deployed in more than just pre-production functional and visual testing. AI-powered automated visual testing can help developers understand how visual components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.

AI-powered visual testing tools are a key validation tool for any app or web presence that requires regular changes in content and format. For example, media companies change their content as frequently as twice per hour and use AI-powered automated testing to isolate real errors that affect paying customers without impacting them. AI-powered visual test tools are key tools in the test arsenal for any app or web presence going through brand revision or merger, as the low error rate and high accuracy let companies identify and fix problems associated with major DOM, CSS, and Javascript changes that are core to those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And, we can run your tests on an Ultrafast Grid that allows you to recreate your visual test in one environment across a number of others on various browser and OS configurations. Our goal is to help you realize the vision we share with our customers – you need to create functional tests for only one environment and let Applitools run the validation across all your customer environments after your first test has passed. We’d love to talk testing with you – feel free to reach out to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

Quick Answers

How does visual testing differ from functional testing?

Functional testing checks if features work as expected, while visual testing verifies that the UI displays correctly. Together, they ensure both the functionality and appearance of an application meet quality standards.

How does automated visual testing work?

Automated visual testing captures screenshots of the application’s UI and compares them against baseline images to detect visual differences. When changes are identified, the tool flags them for review, making it easy to spot unintended UI shifts. However, a tool like Applitools also incorporates AI to intelligently detect changes, distinguishing between acceptable design variations and real bugs.

Can visual testing help prevent regression issues?

Yes, visual testing helps prevent visual regressions by catching unintended UI changes after code updates. This ensures the UI remains consistent and functional across releases, reducing the risk of visual bugs reaching production.

What types of issues can visual testing detect?

Visual testing detects issues such as misaligned elements, missing images, font changes, and color discrepancies. It’s essential for maintaining design accuracy and preventing visual bugs that impact user experience.

Why should teams adopt visual testing as part of their quality assurance process?

Visual testing catches UI bugs that functional tests might miss, ensuring a high-quality, visually consistent user experience. Incorporating visual testing helps teams maintain design integrity and detect visual regressions early in development.

The post What is Visual Testing? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/visual-testing/feed/ 0
Recap: Building the Ideal CI/CD Pipeline https://app14743.cloudwayssites.com/blog/recap-building-the-ideal-ci-cd-pipeline/ Wed, 26 Jun 2024 12:56:00 +0000 https://app14743.cloudwayssites.com/?p=57117 Explore the limitations of traditional functional testing and learn how Visual AI testing can surpass these to achieve visual perfection in software development.

The post Recap: Building the Ideal CI/CD Pipeline appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In our recent webinar, Building the Ideal CI/CD Pipeline: Achieving Visual Perfection, we explored the transformative power of Visual AI testing for CI/CD pipelines. Aimed at software engineering managers and team leads, the session provided a deep dive into the limitations of traditional functional testing and how Visual AI testing can surpass these to achieve visual perfection in software development.

Technical Customer Success Manager Brandon Murray shared expert strategies and highlighted the benefits of integrating Visual AI testing, offering guidance on constructing the optimal CI/CD pipeline. He explored the intricacies of Visual AI testing, illuminating its critical role in enhancing software quality and performance.

Challenges in Traditional Functional Testing

Murray began by identifying the bottlenecks commonly encountered in traditional functional testing. These include:

  • High maintenance efforts
  • Slow feedback cycles
  • Limited UI coverage
  • Tedious manual testing

The Power of Visual AI Testing

Visual AI testing offers a revolutionary approach to overcome these challenges. By capturing screenshots and using AI to compare these snapshots to a baseline ‘golden image’, Visual AI testing ensures:

  • Reduced Test Development and Maintenance Time: Automating UI comparisons dramatically decreases the time spent on writing and maintaining tests.
  • Complete UI Coverage: Screenshots ensure that every aspect of the UI is tested, eliminating blind spots.
  • Enhanced Operational Efficiency: Faster feedback loops lead to quicker identification and resolution of issues, facilitating faster product releases.

Other Strategies to Supplement Visual AI Testing:

  • Self-Healing: Automatically corrects flaky tests by adjusting for locator changes, vastly improving test stability
  • Lazy Loading: Helps to ensure the entire page content is loaded
  • Parallel Test Execution: Enables the execution of multiple tests simultaneously, significantly speeding up the testing process
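Of these, lazy loading is the one teams most often hand-roll. A hypothetical Playwright helper that scrolls the page in steps, so lazy-loaded content renders before a full-page visual checkpoint, might look like this (step size and pause are arbitrary assumptions):

```javascript
// Scroll down in increments so lazy-loaded content has time to render,
// then return to the top so the screenshot starts from a known position.
async function scrollToBottom(page, step = 500, pauseMs = 250) {
  let height = await page.evaluate(() => document.body.scrollHeight);
  let offset = 0;
  while (offset < height) {
    offset += step;
    await page.evaluate((y) => window.scrollTo(0, y), offset);
    await page.waitForTimeout(pauseMs);
    // Re-read the height: lazy-loaded content may have grown the page.
    height = await page.evaluate(() => document.body.scrollHeight);
  }
  await page.evaluate(() => window.scrollTo(0, 0));
}
```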

Integration into the Development Workflow

Integrating Visual AI testing into existing development workflows, particularly with pull request checks, is pivotal for agile environments. The webinar emphasized the importance of instant feedback for swift issue resolution, leading to accelerated development cycles.

Tools and Technologies Highlighted:

  • Cypress: Innovative testing framework for both developers and QA engineers
  • GitHub Actions: Continuous integration and continuous delivery (CI/CD) platform enabling automation directly in GitHub repositories
  • Figma Designs: Useful for collaborative design reviews and direct comparison against implementations

The session underscored the cost-effectiveness of using browsers in cloud infrastructure containers, especially when dealing with cross-browser coverage. Notably, the Ultrafast Grid was mentioned as an effective solution for this purpose.

Comparing Visual AI Testing to Traditional Methods

Attendees were eager to learn how Visual AI testing compares to snapshot tests and other traditional methods. The webinar demonstrated how Visual AI testing offers:

  • Greater Accuracy: By leveraging AI for pixel-perfect comparisons
  • Higher Efficiency: Through automated and parallel testing routes

In particular, using commodity CI solutions like GitHub Actions or CircleCI was recommended for their affordability and versatility.

Building the Ideal CI/CD Pipeline: Achieving Visual Perfection highlighted the transformative potential of Visual AI testing in optimizing CI/CD pipelines. Software engineering managers and team leads are strongly encouraged to evaluate how AI-powered tools like Applitools can elevate their testing processes, enhance product quality, and expedite delivery timelines. For those interested, a free trial of Applitools is available to experience the benefits firsthand.

The post Recap: Building the Ideal CI/CD Pipeline appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Recap: Leveraging AI for Mobile App Testing on Real Devices https://app14743.cloudwayssites.com/blog/recap-leveraging-ai-for-mobile-app-testing-on-real-devices/ Mon, 22 Apr 2024 14:24:21 +0000 https://app14743.cloudwayssites.com/?p=56813 Creating a flawless UI/UX experience across a myriad of devices and platforms is absolutely critical to the success of a business in today’s digitally dominated world. This task has become...

The post Recap: Leveraging AI for Mobile App Testing on Real Devices appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

Creating a flawless UI/UX experience across a myriad of devices and platforms is absolutely critical to the success of a business in today’s digitally dominated world. This task has become increasingly complicated with the swift advancement of mobile technologies and the ever-evolving expectations of users. 

Our recent webinar with Martin Kowalewski, Global Sales Engineering Lead at Applitools, and Frank Moyer, CTO of Kobiton, delved into these complexities, offering attendees a comprehensive look at the latest trends in mobile development and testing, with a special focus on the revolutionary impact of AI. 

The session introduced participants to the partnership between Kobiton and Applitools, showcasing a robust solution designed to meet the expansive testing needs of customers. This powerful integration leverages AI to provide a full spectrum solution for continuous testing across all platforms. We’ve highlighted a few of the key takeaways from the webinar below.

Revolutionizing Mobile App Testing with Visual AI Technology

The webinar kicked off by diving into the world of Visual AI technology, with a strong emphasis on its pivotal role in boosting the accuracy of UI and UX testing. Participants were provided with in-depth insights into how Visual AI is revolutionizing the field of software testing, offering examples of its application and the significant improvements it brings to the quality engineering process. Attendees left with a clear understanding of the transformative impact Visual AI has on enhancing user experience and interface design.

The conversation then shifted towards a comprehensive examination of Applitools’ Visual AI platform, showcasing its revolutionary approach to automating visual testing. This platform stands out by providing advanced monitoring and management tools specifically designed for the visual aspects of applications. By doing so, it sets the benchmark in the industry, offering an unparalleled level of precision and efficiency in detecting and addressing visual discrepancies, thereby significantly enhancing the quality assurance process for software developers.

The Power of Integrating Kobiton and Applitools for Enhanced Testing

The presentation emphasized the advantages of utilizing Kobiton for real device testing, particularly how Kobiton’s comprehensive mobile device cloud and on-premises device laboratories facilitate testing conditions that closely mimic real-world usage. By offering access to an extensive selection of mobile devices, Kobiton significantly improves the testing process. This access allows developers and testers to ensure their applications perform well across a diverse spectrum of devices, thereby enhancing user experience and satisfaction. Kobiton’s platform not only streamlines the testing cycle but also aids in detecting and resolving potential issues before release, making it an invaluable tool in the development process.

Streamlining Development: Practical Tips for Implementing Continuous Visual Testing

Finally, the session provided detailed practical advice on effectively integrating Applitools and Kobiton. This integration facilitates seamless, continuous visual testing, which is crucial for ensuring that applications render as intended on a variety of devices. By addressing this integration, developers can tackle one of the most significant challenges in mobile app development head-on, enhancing user experience by guaranteeing that apps look and function correctly across different devices and use scenarios. This approach not only improves the quality of mobile apps but also streamlines the development and testing process, making it more efficient.

The event highlighted the remarkable teamwork between Kobiton and Applitools, providing developers and testers with the necessary tools to excel in this highly competitive technology world. This collaboration has paved the way for enhanced testing capabilities, ensuring that applications are not only functional but also visually appealing, setting a new standard for success. Watch the full webinar here.

The post Recap: Leveraging AI for Mobile App Testing on Real Devices appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Power Up Your Test Automation with Playwright https://app14743.cloudwayssites.com/blog/power-up-your-test-automation-with-playwright/ Thu, 31 Aug 2023 12:53:00 +0000 https://app14743.cloudwayssites.com/?p=52108 As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust...

The post Power Up Your Test Automation with Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.

Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

The post Power Up Your Test Automation with Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Modern Cross-Device Testing for Android & iOS Apps https://app14743.cloudwayssites.com/blog/cross-device-testing-mobile-apps/ Wed, 13 Jul 2022 20:47:15 +0000 https://app14743.cloudwayssites.com/?p=40383 Learn the cross device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

The post Modern Cross-Device Testing for Android & iOS Apps appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

Learn the cross-device testing practices you need to implement to get closer to Continuous Delivery for native mobile apps.

What is Cross-Device Testing?

Modern cross-device testing is the system by which you verify that an application delivers the desired results on a wide variety of devices and formats. Ideally, this testing will be done quickly and continuously.

There are many articles explaining how to do CI/CD for web applications, and many companies are already doing it successfully, but there is not much information available out there about how to achieve the same for native mobile apps.

This post will shed light on the cross-device testing practices you need to implement to get a step closer to Continuous Delivery for native mobile apps.

Why is Cross-Device Testing Important?

The number of mobile devices used globally is staggering. Based on data from bankmycell.com, there are 6.64 billion smartphones in use.

Source: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world#part-1

Even if the app we are building and testing reaches only a fraction of this number, that is still a very large number of users and devices.

The below chart shows the market share by some leading smartphone vendors over the years.

Source: https://www.statista.com/statistics/271496/global-market-share-held-by-smartphone-vendors-since-4th-quarter-2009/

Challenges of Cross-Device Testing

One of the biggest challenges of testing mobile apps is that, across all manufacturers combined, there are thousands of device types in use today. Depending on the popularity of your app, there could be a huge number of devices your users are using.

These devices will have variations based on:

  • OS types and versions
  • potentially customized OS
  • hardware resources (memory, processing power, etc.)
  • screen sizes
  • screen resolutions
  • storage with different available capacity for each
  • Wi-Fi vs. mobile data (from different carriers)
  • And many more

It is clear that you cannot run your tests on every type of device that your users may be using.

So how do you get quick feedback and confidence from your testing that (almost) no user will get impacted negatively when you release a new version of your app?

Mobile Test Automation Execution Strategy

Mobile Testing Strategy

Before we think about the strategy for running your automated tests for mobile apps, we need a good, holistic mobile testing strategy.

Along with testing the app functionality, mobile testing has additional dimensions, and hence complexities, compared with web-app testing.

You need to understand the impact of the aspects mentioned above and see what may, or may not be applicable to you.

Here are some high-level aspects to consider in your mobile testing strategy:

  • Know where and how to run the tests – real devices, emulators / simulators available locally versus in some cloud-based device farm
  • Increasing test coverage by writing less code – using Applitools Visual AI to validate functionality and user-experience
  • Scaling your test execution – using Applitools Native Mobile Library
  • Testing on different text fonts and display densities 
  • Testing for accessibility conformance and impact of dark mode on functionality and user experience
  • Chaos & Monkey Testing
  • Location-based testing
  • Testing the impact of Network bandwidth
  • Planning and setting up the release strategy for your mobile application, including beta testing, on-field testing, and staged rollouts. This differs between the Google Play Store and the Apple App Store
  • Building and testing for Observability & Analytics events

Once you have figured out your Mobile Testing Strategy, you now need to think about how and what type of automated tests can give you good, reliable, deterministic and fast feedback about the quality of your apps. This will result in you identifying the different layers of your test automation pyramid.

Remember: It is very important to execute all types of automated tests on every code change and every new app build. The functional / end-to-end / UI tests for your app should also be run at this time.

Additionally, you need to be able to run the tests on a local developer/QA machine, as well as in your Continuous Integration (CI) system. In the case of native/hybrid mobile apps, developers and QAs should be able to install the app on the (local) devices they have available and run the tests against it. For CI-based execution, you need some form of device farm, available locally in your network or cloud-based, to allow execution of the tests.

This continuous testing approach will provide you with quick feedback and allow you to fix issues almost as soon as they creep into the app.

How to Run Functional Tests against Your Mobile Apps

Testing and automating mobile apps comes with additional complexities. You need to install the app on a device before your automated tests can be run against it.

Let’s explore your options for devices.

Real Devices

Real devices are ideal for running the tests: your users and customers are going to use your app on a wide variety of real devices.

In order to allow proper development and testing, each team member needs access to the relevant types of devices (depending on your user base).

However, it is not easy to give each team member (developer/tester) a variety of devices for running the automated tests.

The challenges of having real devices relate to:

  • cost of procuring a good variety of devices for each team member, to allow seamless development and testing work.
  • maintenance of the devices (OS/software updates, battery issues, other problems the device may have at any point in time, etc.)
  • logistical issues like time to order and get devices, tracking of the devices assigned to the team, etc.
  • deprecating/disposing of older devices that are no longer needed.

Hence we need a different strategy for executing tests on mobile devices. Emulators and Simulators come to the rescue!

What is the Difference between Emulators & Simulators

Before we get into the specifics of the execution strategy, it is good to understand the differences between an emulator and a simulator.

Android-device emulators and iOS-device simulators make it easy for any team member to easily spin up a device.

An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.

An emulator can mimic the operating system, software, and hardware features of an Android device.

A Simulator runs on your Mac and behaves like a standard Mac app while simulating iPhone, iPad, Apple Watch, or Apple TV environments. Each combination of a simulated device and software version is considered its own simulation environment, independent of the others, with its own settings and files. These settings and files exist on every device you test within a simulation environment. 

An iOS simulator mimics the internal behavior of the device. It cannot mimic the OS / hardware features of the device.

Emulators/Simulators are a great and cost-effective way to overcome the challenges of real devices. Any team member can easily create them as needed and use them for testing as well as for running automated tests. You can also, relatively easily, set up and use emulators/simulators in your CI execution environment.

While emulators/simulators may seem like they will solve all the problems, that is not the case. As with anything, you need to do a proper evaluation and figure out when to use real devices versus emulators/simulators.

Below are some guidelines that I refer to.

When to use Emulators/Simulators

  • You are able to validate all application functionality
  • There is no performance impact on the application-under-test

Why use Emulators/Simulators

  • To reduce cost
  • Scale as per needs, resulting in faster feedback
  • Can use in CI environment as well

When to use Real Devices for Testing

  • If Emulators/Simulators are used, then run "Sanity" / focused testing on real devices before release
  • If Emulators/Simulators cannot validate all application functionality reliably, then invest in Real-Device testing
  • If Emulators/Simulators cause performance issues or slowness of interactions with the application-under-test

Cases when Emulators/Simulators May not Help

  • If the application-under-test has streaming content, or has high resource requirements
  • Applications relying on hardware capabilities
  • Applications dependent on customized OS version

Cross-Device Test Automation Strategy

The above approach of using real devices or emulators/simulators will help your team shift left and achieve continuous testing.

There is one challenge that still remains: scaling! How do you ensure your tests run correctly on all supported devices?

A classic, or rather traditional, way to solve this problem is to repeat the automated test execution on a carefully chosen variety of devices. This means that if you have 5 important types of devices and 100 automated tests, you are essentially running 500 tests.

This approach has multiple disadvantages:

  1. The feedback cycle is substantially delayed. If 100 tests take 1 hour to complete on 1 device, 500 tests would take 5 hours (for 5 devices).
  2. The time to analyze the test results increases by 5x.
  3. The added test runs can be flaky due to device setup, location, or network issues. This can result in re-runs or targeted manual re-testing for validation.
  4. You need 5x more test data.
  5. You are putting 5x more load on your backend systems by executing the same test 5 times.
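To make the trade-off concrete, here is a minimal, self-contained sketch (plain Java, using only the illustrative numbers from the example above) of the arithmetic behind the repeat-per-device approach:

```java
// Back-of-the-envelope model of traditional cross-device execution.
// The numbers (100 tests, 5 devices, 1 hour per device suite) come from
// the example above; plug in your own to size your pipeline.
public class CrossDeviceCost {

    // Total executions when every test is repeated on every device.
    static int totalRuns(int tests, int devices) {
        return tests * devices;
    }

    // Wall-clock feedback time if each device's suite takes the same time
    // and the suites run one after another.
    static double totalHours(int devices, double hoursPerDeviceSuite) {
        return devices * hoursPerDeviceSuite;
    }

    public static void main(String[] args) {
        System.out.println(totalRuns(100, 5));  // 500 test executions
        System.out.println(totalHours(5, 1.0)); // 5.0 hours until feedback
    }
}
```

Every device added to the matrix scales execution count, analysis time, test data, and backend load linearly, which is exactly the scaling problem described here.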

We all know these disadvantages; however, there is no better way to overcome them. Or is there?

Modern Cross-Device Test Automation Strategy

The Applitools Native Mobile Library for Android and iOS apps can easily help you to overcome the disadvantages of traditional cross-device testing.

It does this by running your test on 1 device, but getting the execution results from all the devices of your choice, automatically. Well, almost automatically. This is how the Applitools Native Mobile Library works:

  1. Integrate Applitools SDK in your functional automation.
  2. In the Applitools Eyes configuration, specify all the devices on which you want to do your functional testing. As an added bonus, you will be able to leverage Applitools Visual AI capabilities to get increased functional and visual test coverage.

Below is an example of how to specify Android devices for Applitools Native Mobile Library:

// Configure the 15 devices we want to validate asynchronously
Configuration config = eyes.getConfiguration();
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S9_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S8_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Pixel_4, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_8, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_9, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_Note_10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S10_Plus, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S20_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_PLUS, ScreenOrientation.PORTRAIT));
config.addMobileDevice(new AndroidDeviceInfo(AndroidDeviceName.Galaxy_S21_ULTRA, ScreenOrientation.PORTRAIT));
eyes.setConfiguration(config);

Below is an example of how to specify iOS devices for Applitools Native Mobile Library:

// Configure the 15 devices we want to validate asynchronously
Configuration config = eyes.getConfiguration();
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_11_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_12_mini));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13_Pro_Max));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XS));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_X));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_XR));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_13));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_8));
config.addMobileDevice(new IosDeviceInfo(IosDeviceName.iPhone_7));
eyes.setConfiguration(config);   

  3. Run the test on any 1 device – available locally or in CI. It could be a real device or a simulator/emulator.

Every call to Applitools to do a visual validation will automatically do the functional and visual validation for each device specified in the configuration above.

  4. See the results from all the devices in the Applitools dashboard.

Advantages of using the Applitools Native Mobile Library

The Applitools Native Mobile Library has many advantages.

  1. You do not need to repeat the same test execution on multiple devices. This saves team members a lot of time on execution, flaky-test triage, and result analysis.
  2. Very fast feedback on test execution across all specified devices (10x faster than the traditional cross-device testing approach).
  3. There are no additional test data requirements.
  4. You do not need to procure, build, and maintain the devices.
  5. There is less load on your application’s backend systems.
  6. A secure solution: your application does not need to be shared outside your corporate network.
  7. Using visual assertions instead of functional assertions gives you increased test coverage while writing less code.

Read this post on How to Scale Mobile Automation Testing Effectively for more specific details of this amazing solution!

Summary of Modern Cross-Device Testing of Mobile Apps

Using Applitools Visual AI allows you to extend coverage at the top of your test automation pyramid by including AI-based visual testing along with your UI/UX testing.

Using the Applitools Native Mobile Library for cross-device testing of Android and iOS apps makes your CI loop faster by providing seamless scaling across all supported devices as part of the same test execution cycle. 

You can watch my video on Mobile Testing 360deg (https://app14743.cloudwayssites.com/event/mobile-testing-360deg/), where I share many examples and details that you can include as part of your mobile testing strategy.

To start using the Native Mobile Library, simply sign up at the link below to request access. You can read more about the Applitools Native Mobile Library on our website.

Happy testing!

The post Modern Cross-Device Testing for Android & iOS Apps appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid https://app14743.cloudwayssites.com/blog/comparing-cross-browser-testing-tools-selenium-grid-vs-applitools-ultrafast-grid/ Wed, 29 Jun 2022 15:00:00 +0000 https://app14743.cloudwayssites.com/?p=39529 How can you choose the best cross-browser testing tool? We'll review the challenges of cross-browser testing and consider leading solutions.

The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

How can you choose the best cross-browser testing tool for your needs? We’ll review the challenges of cross-browser testing and consider some leading cross-browser testing solutions.

Nowadays, testing a website or app on one single browser or device can lead to disastrous consequences, and testing the same website or app on multiple browsers using ONLY the traditional functional testing approach may still lead to production issues and lots of visual bugs.

Combinations of browsers, devices, viewports, and screen orientations (portrait or landscape) can reach the thousands. Performing manual testing across this vast number of possibilities is no longer feasible, nor is simply running the usual functional test scripts and hoping to cover the most critical aspects, regions, or functionalities of our sites.

In this article, we are going to focus on the challenges and leading solutions for cross-browser testing. 

The Challenges of Cross Browser Testing 

What is Cross Browser Testing?

Cross-browser testing makes sure that your web apps work across different web browsers and devices. Usually, you want to cover the most popular browser configurations or the ones specified as supported browsers/devices based on your organization’s products and services.

Why Do We Need Cross Browser Testing?

Basically, because rendering differs between browsers and modern web apps use responsive design. You also have to consider that each browser handles JavaScript differently and may render things differently depending on the viewport or device screen size. These rendering differences can result in costly bugs and a negative user experience.

Challenges of Cross Browser Testing Today

Cross-browser testing has been around for quite some time now. Traditionally, testers run the same tests in parallel on different browsers, and this is fine from a functional point of view.

Today, we know for a fact that running only these kinds of traditional functional tests across a set of browsers does not guarantee your website or app’s integrity. But let’s define and understand the difference between Traditional Functional Testing and Visual Testing. Traditional functional testing is a type of software testing where the basic functionalities of an app are tested against a set of specifications. Visual Testing, on the other hand, allows you to test for visual bugs, which are extremely difficult to uncover with the traditional functional testing approach.

As mentioned, traditional functional testing on its own will not capture the visual aspect and can leave gaps in coverage. You have to take into consideration the possibility of visual bugs, regardless of the number of elements you actually test. Even if you tested all of them, you may still encounter visual bugs that lead to false negatives: your testing was done, your tests passed, and you still did not catch the bug.

Today we have mobile and IoT device proliferation, complex responsive design viewport requirements, and dynamic content. Since rendering the UI is subjective, the majority of cross-browser defects are visual.

To handle all these possibilities, you need a tool or framework that not only runs tests but provides reliable feedback – not just false positives or tests pending to be approved or rejected.

When it comes to cross-browser testing, you have several options, just as you do for visual testing. In this article, we will explore some of the most popular cross-browser testing tools.

Cross-Browser Testing with Your Own In-House Selenium Grid 

If you have the resources, time, and knowledge, you can spin up your own Selenium Grid and do some cross-browser testing. This may be useful based on your project size and approach.

As mentioned, if you understand the components and steps to accomplish this, go for it! 

Now, be aware: maintaining a home-grown Selenium Grid cluster is not an easy task. You may run into difficulties when running and maintaining hundreds of browsers/nodes. Because of this, most companies end up outsourcing this task to vendors like BrowserStack or LambdaTest, in order to save time and energy and bring more stability to their Selenium Grid infrastructure.

Most of these vendors are quite expensive, which means you will need a dedicated project budget just for running your UI tests on their cloud. Not to mention the packages or plans you’ll have to acquire to run a decent number of parallel tests.

Considerations when Choosing Selenium Grid Solutions

When it comes to cross-browser testing and visual testing, you could use any of the available tools or frameworks, for instance LambdaTest or BrowserStack. But how can we choose? Which one is better? Are they all offering the same thing? 

Before choosing any Selenium Grid solution, there are some key inherent issues that we must take into consideration:

  1. With a Selenium Grid solution, you need to run each test multiple times, on each and every browser/device you would like to cover, resulting in much higher maintenance (if your tests fail 5% of the time and you now need to run each test 10 times on 10 different environments, you are adding far more failure/maintenance overhead).
  2. Cloud-based Selenium Grid solutions require a constant connection between the machine inside your network that is running the test and the browser in the cloud, for the entire test execution time. Many grid solutions have reliability issues here, causing environment/connection failures on some tests; when executing tests at scale, this results in additional failures the team needs to analyze.
  3. If you use a cloud-based Selenium Grid solution to test an internal application, you need to set up a tunnel from the cloud grid into your company’s network, which creates a security risk and adds performance/reliability issues.
  4. Another critical factor for traditional “WebDriver-as-a-Service” platforms is speed. Tests can take 2-4x as long to complete on those platforms compared to running them on local machines.
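The maintenance overhead in item 1 is easy to underestimate, because flakiness compounds across environments. A small, self-contained Java sketch (illustrative rates only, assuming spurious failures are independent between runs) shows the effect:

```java
// If a single run has a 5% chance of a spurious failure, repeating the
// test on N independent environments raises the chance that at least one
// run fails for no real reason: 1 - (1 - p)^N.
public class FlakyOdds {

    static double atLeastOneSpuriousFailure(double perRunFailRate, int environments) {
        return 1.0 - Math.pow(1.0 - perRunFailRate, environments);
    }

    public static void main(String[] args) {
        // 5% flake rate across 10 environments: roughly 40% of tests
        // will show at least one false failure per cycle.
        System.out.printf("%.3f%n", atLeastOneSpuriousFailure(0.05, 10));
    }
}
```

In other words, at a 5% per-run flake rate, running each test across 10 grid environments means roughly 4 out of every 10 tests will need triage or a re-run in every cycle, even when nothing is actually broken.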

Cross-Browser Testing with Applitools Ultrafast Grid

Applitools Ultrafast Grid is the next generation of cross-browser testing. With the Ultrafast Grid, you can run functional and visual tests once, and it instantly renders all screens across all combinations of browsers, devices, and viewports. 

Visual AI is a technology that improves snapshot comparisons. It goes deeper than pixel-to-pixel comparisons to identify changes that would be meaningful to the human eye.

Visual snapshots provide a much more robust, comprehensive, and simpler mechanism for automating verifications. Instead of writing hundreds of lines of assertions with locators, you can write a single-line snapshot capture using Applitools Eyes.

When you compound that stability with the modern cross-platform testing technology of the Ultrafast Grid, the benefits multiply. This improved efficiency helps you deliver high-quality apps on time, without the need for multiple suites or test scripts.

Think about the time it currently takes to complete a full testing cycle using traditional cross-browser testing solutions: installing, writing, running, analyzing, reporting on, and maintaining your tests. With the Ultrafast Grid and Visual AI, engineers now have technology that can easily be set up in an existing framework and is capable of testing large, modern apps across multiple environments in just minutes.

Traditional cross-browser testing solutions that offer visual testing usually provide it as a separate feature or add-on that you have to pay for. What this feature does is essentially take screenshots for you to compare against previously captured screenshots. Imagine the amount of time it takes to accept or reject all of these tests, keeping in mind that most of them will not necessarily yield useful information, as the website or app may not change from one day to the next.

The Ultrafast Grid goes beyond simple screenshots. Applitools SDKs upload DOM snapshots, not screenshots, to the Ultrafast Grid. Snapshots include all the resources needed to render a page (HTML, CSS, etc.) and are much smaller than screenshots, so they upload faster.

To learn more about Ultrafast Grid functionality and configuration, take a look at this article: https://app14743.cloudwayssites.com/docs/topics/overview/using-the-ultrafast-grid.html

Benefits and Differences when using the Applitools Ultrafast Grid

Here are some of the benefits and differences you’ll find when using this framework:

  1. The Ultrafast Grid uses containers to render web pages on different browsers in a much faster and more reliable way, maximizing speed.
  1. The Ultrafast Grid does not always upload a snapshot for every page. If a page’s resources didn’t change, Ultrafast Grid doesn’t upload them again. Since most page resources don’t change from one test run to another, there’s less to transfer, and upload times are measured in milliseconds.
  1. As mentioned above, with the Applitools Ultrafast Grid you only need to run a test once to get results from all browsers and devices. Now that most browsers are W3C compliant, the chance of a purely functional difference between browsers (e.g., a button that clicks in one browser but not in another) is negligible. Running the functional tests once is therefore sufficient, and the visual checks will still find the common browser compatibility issues: rendering and visual differences between browsers.
  1. You can layer one comparison algorithm on top of another. Other solutions only let you set a global comparison level based on three modes, either Strict, Suggested (Normal), or Relax, and this is useful to some extent. But what happens if you need a certain region of the page to use a different comparison algorithm? That is possible with the Applitools Region Types feature.

  1. All of the above happens across multiple browser and device combinations at the same time, driven by the Ultrafast Grid configuration. For more information, check out this article > https://app14743.cloudwayssites.com/docs/topics/sdk/vg-configuration.html
  1. Applitools offers a free version that gives you access to most of the framework's features. This is genuinely helpful: you can explore and use high-level features like Visual AI, cross-browser testing, and visual testing without worrying about the minutes left on a free trial, as you would with other solutions.
  1. One of the unique features of Applitools is its automated maintenance capability, which removes the need to approve or reject the same change across different screens and devices. This reduces the overhead of managing baselines across browser and device configurations.
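The resource deduplication described in point 2 above boils down to a content-hash check: only upload what the server has not seen before. Here is an illustrative Python sketch of that idea; `resources_to_upload` and the choice of SHA-256 are assumptions for the example, not the actual Ultrafast Grid protocol.

```python
import hashlib

def resources_to_upload(resources: dict[str, bytes], server_hashes: set[str]) -> dict[str, bytes]:
    """Return only the resources whose content hash the server doesn't already have."""
    missing = {}
    for url, content in resources.items():
        digest = hashlib.sha256(content).hexdigest()
        if digest not in server_hashes:
            missing[url] = content
    return missing

# First run: everything is new, so everything gets uploaded.
run1 = {"/app.css": b"body{...}", "/logo.svg": b"<svg/>"}
known = set()
to_send = resources_to_upload(run1, known)
known |= {hashlib.sha256(c).hexdigest() for c in to_send.values()}

# Second run: the unchanged SVG is skipped; only the edited stylesheet goes up.
run2 = {"/app.css": b"body{color:red}", "/logo.svg": b"<svg/>"}
print(sorted(resources_to_upload(run2, known)))  # ['/app.css']
```

Because most page resources are identical between runs, the second-run upload set is typically a small fraction of the first, which is why upload times can stay in the millisecond range.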

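Point 4 above, layering one algorithm on top of another, amounts to a page-wide match level with per-region overrides. The Python sketch below illustrates only that idea; the selector names, level names, and the `match_level_for` helper are invented for the example, while the real SDK expresses this through its check settings and region types.

```python
# Page-wide comparison level, with overrides for specific regions.
GLOBAL_LEVEL = "strict"

REGION_OVERRIDES = {
    "#ad-banner": "layout",  # rotating ads: compare structure, not pixels
    "#timestamp": "ignore",  # always-changing text: skip entirely
}

def match_level_for(selector: str) -> str:
    """Fall back to the page-wide level unless a region override exists."""
    return REGION_OVERRIDES.get(selector, GLOBAL_LEVEL)

print(match_level_for("#ad-banner"))  # layout
print(match_level_for("#checkout"))   # strict
```

The design choice worth noting is the fallback: regions you never mention still get the strict page-wide comparison, so loosening one noisy region never weakens the rest of the page.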
Final Thoughts

Selenium Grid solutions are everywhere, and prices vary between vendors and feature sets. If you had infinite time, resources, and budget, it would be ideal to run every test on every browser and analyze the results on each code change or build. But for a company trying to optimize velocity and run tests on every pull request or build, the Applitools Ultrafast Grid provides a compelling balance between performance, stability, cost, and risk.

The post Comparing Cross Browser Testing Tools: Selenium Grid vs Applitools Ultrafast Grid appeared first on AI-Powered End-to-End Testing | Applitools.

]]>