Functional Testing Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/tag/functional-testing/
Applitools delivers full end-to-end test automation with AI infused at every step.

Visual, Functional, and Autonomous Testing—All in One
https://app14743.cloudwayssites.com/blog/visual-functional-autonomous-testing-all-in-one/
Fri, 23 May 2025 14:47:55 +0000

Applitools combines proven Visual AI, intelligent test automation, and a scalable platform to help teams ship with speed and confidence. Here’s how.

One Platform. Three Testing Superpowers.

TL;DR: Applitools brings visual, functional, and autonomous testing together in a single AI-powered platform. Backed by 11+ years of refinement and a dataset of 4 billion real-world images, our Visual AI delivers unmatched accuracy and reliability for enterprise-grade software testing.

Testing today isn’t just about coverage—it’s about confidence, speed, and scaling quality across teams. Whether you’re a developer chasing faster feedback, a QA lead reducing maintenance overhead, or a product owner focused on release velocity, Applitools helps modern teams deliver software that looks right, works right, and evolves with ease.

Here’s how Visual, Functional, and Autonomous Testing all come together in one powerful platform.

Trusted Visual AI with Proven Accuracy

Applitools sets the standard in Visual Testing. Our Visual AI engine delivers 99.9999% accuracy, eliminating false positives and catching bugs others miss.

  • 5.8x more efficient than pixel-based tools
  • Detect both functional and visual bugs in a single test
  • Works with all major frameworks: Selenium, Cypress, Playwright, and more

We didn’t just add AI—we’ve spent 11+ years perfecting it.

A Complete Platform for End-to-End Testing

Applitools goes far beyond screenshots. Our Intelligent Testing Platform includes Autonomous Test Creation, Visual Validation, Cross-Browser + Device Testing, and Accessibility Testing—all in one cloud-based solution.

  • Run tests across browsers, devices, and screen sizes in parallel
  • Built-in accessibility and compliance testing
  • Fully scalable with enterprise-grade performance

Less Test Maintenance with Self-Healing, Smart Grouping & Predictive Analytics

Spend less time fixing broken tests and more time delivering value. Applitools minimizes test upkeep so your team can focus on building.

Collaborative Testing: How Developers, PMs, Designers & Marketers All Work Smarter with Applitools

Testing shouldn’t be a bottleneck—or limited to just QA. Applitools empowers developers, designers, product managers, and even marketers to collaborate with ease.

  • Intuitive UI for reviewing results and managing baselines
  • Seamless sharing of results and issue tracking
  • Codeless and code-based authoring, no deep technical expertise needed

More than a Decade of AI Leadership

AI isn’t new to us—it’s the foundation of our platform. Unlike newer tools making AI promises, we’ve been building, training, and refining Visual AI to solve real testing challenges at scale for more than a decade.

Seamless Integrations & Dev Experience

Great testing fits into your workflow—not the other way around. Our AI-powered test automation works with your tools, languages, and CI/CD pipelines to scale quality without slowing you down. Applitools integrates with:

  • Every major framework: Selenium, Cypress, Playwright, Puppeteer, WebdriverIO
  • CI/CD tools: GitHub Actions, Jenkins, GitLab, Azure DevOps
  • SDKs for Java, JavaScript, Python, C#, and more

Whether you’re in code or no-code workflows, we plug into your stack and scale with you.

24/7 Support That Doesn’t Disappear

Whether you’re mid-sprint or troubleshooting a release, help is always within reach. Get expert guidance anytime—no hoops, no waiting.

  • Around-the-clock global technical support
  • Extensive documentation, how-tos, and real-time guidance
  • Active community forum and dedicated Customer Success Managers (not just for enterprise)

Compare that to competitors with limited support, slow response times, or no dedicated resources unless you’re a top-tier customer.

Smart Investment, Real Value

Our pricing is flexible, predictable, and scales with your needs. You’ll see ROI fast:

  • Save hours of test maintenance per sprint
  • Eliminate manual bug hunts and false positives
  • Deliver faster releases without compromising quality

Explore our current pricing structure, or speak with a testing specialist to build a package that’s right for your team.

“We reduced our testing time from days to hours. Applitools changed how we think about QA.”
— QA Lead, Global Retail Brand

Visual, Functional, and Autonomous Testing: The Applitools Advantage

We combine Visual AI, Autonomous Testing, and a developer-friendly platform into one powerful, scalable solution. With Applitools, your team gets:

  • Smarter test creation
  • Less maintenance
  • Better collaboration
  • Faster releases
  • And trusted results every time

See What’s New with Applitools Autonomous and What’s Coming with Applitools Eyes

Ready to Test Smarter?

In a crowded automation landscape, it’s not enough to have “AI-powered” features. You need real results. With over a billion visual tests run and trusted by leading enterprises across industries, Applitools isn’t experimenting with AI—it’s already delivering.

Whether you’re starting fresh or looking to scale smarter, Applitools gives your team the tools to automate with confidence and speed.

Ready to see it in action? Start your free trial, book a personalized demo, or explore the platform today.

Applitools helps you test like it’s 2025. Join the world’s top teams already doing it.

Quick Answers

What is the “Intelligent Testing Platform” offered by Applitools?

Applitools’ Intelligent Testing Platform merges Visual AI, Autonomous Test Creation, cross-browser/device testing, and accessibility/compliance validation—all in one cloud-based solution. It enables teams to test comprehensively while minimizing maintenance and scaling efficiently.

How does Applitools reduce maintenance overhead in test automation?

The platform includes self-healing locators, root cause analysis, smart grouping, and predictive analytics. These features automatically adapt tests to UI changes and make debugging smoother, meaning fewer flaky tests and less time spent on manual test upkeep.

Who can benefit from using Applitools beyond just QA engineers?

Applitools supports developers, designers, product managers, and marketers, not only QA. A user-friendly interface allows easy sharing of results and issue tracking. Additionally, you can author tests using both codeless and code-based methods—so even non-technical team members can participate effectively.

Who uses Applitools, and how has its AI been developed?

Applitools has been training and developing its AI models for over 11 years, using a dataset of more than 4 billion images from real applications. Today, the platform is trusted by 400+ enterprise customers across industries including finance, retail, media, B2B tech, and healthcare. This breadth of usage ensures highly accurate, production-grade AI for visual and functional testing at scale.

Introducing Autonomous 2.1: Speed Up Testing with Modular Test Design
https://app14743.cloudwayssites.com/blog/autonomous-2-1-release/
Tue, 07 Jan 2025 15:41:23 +0000

[Screenshot: Autonomous reusable test flow]

Testing workflows just got an upgrade! Autonomous 2.1 simplifies end-to-end testing by offering faster test creation and maintenance while expanding mobile support. This release focuses on improving the efficiency and flexibility of your testing processes with features designed to simplify workflows, enhance modularity, and provide greater control.

Let’s explore the highlights of this release and how they can benefit your end-to-end testing. 

Key Features in Autonomous 2.1

Modular Test Design

Create a test once and reuse it across multiple workflows, eliminating the need to duplicate steps. This not only saves time but also ensures consistency and reduces the risk of errors. By enhancing reusability, this feature significantly accelerates test creation and simplifies maintenance, particularly for workflows shared across teams.

Use case example: Faster test authoring in Banking

For financial institutions, reusing a shared “User Login” test across workflows like “Fund Transfer,” “Loan Application,” and “Account Overview” simplifies test maintenance and ensures consistency. Autonomous 2.1 empowers teams to avoid recreating login steps for each flow, saving time and enhancing reliability.

Simplify Test Maintenance

Easily modularize your testing process by extracting repeated steps into standalone tests. This simplifies maintenance and improves efficiency: your team maintains one test, and updates apply automatically wherever the extracted flow is reused. With fewer errors and a more streamlined test design, your team can focus on delivering quality faster.

Learn how to build no-code, autonomous end-to-end tests and explore some of these new features in our upcoming webinar, Building No-Code Autonomous End-to-End Tests.

Record Assertions

Record assertions directly within your tests to ensure critical content is displayed accurately, without requiring manual scripting. This capability is especially valuable for localization efforts, enabling teams to validate translated content on multilingual websites. By reducing setup complexity, testers can achieve faster and more accurate validations.

Use case example: Maintain compliance with localization

On multilingual websites, recording assertions ensures translated content (e.g., terms and conditions, error messages) is displayed correctly. This is crucial for maintaining legal compliance and providing a seamless user experience in different languages.

Make Responsive Testing Easier

Ensure cross-device compatibility by configuring display viewport sizes—such as mobile, tablet, or even a custom size—directly in your custom flow tests. This ensures accurate validations across devices, making it ideal for e-commerce businesses and other industries prioritizing multi-device compatibility. Streamlining these configurations reduces setup time and ensures your users have a seamless experience across platforms and devices.

Pro tip: While Applitools Ultrafast Grid lets you run the same test across multiple browsers and devices, we recommend creating separate tests for specific screen sizes when unique screen elements, like a hamburger menu, appear only on certain viewports.

Flexibility in Test Configuration

We’ve decoupled the application and plan from individual tests, offering greater flexibility and a more intuitive workflow for configuring tests independently. You can now run tests ad-hoc—even trigger test execution via the REST API—and configure run properties like parameters, browsers, devices, and concurrency limits. This change not only enhances the user experience but also lays the groundwork for advanced capabilities in future releases, ensuring a scalable and adaptable testing environment.

And There’s More…

Autonomous 2.1 also brings an updated interface that reflects your application hierarchy for more intuitive navigation, and lets you copy tests across applications while preserving reusable flows. For added security, sensitive test data such as passwords is now encrypted and masked, and TOTP support for 2FA enables secure workflows with Time-Based One-Time Passwords.
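For background on the TOTP support mentioned above: a Time-Based One-Time Password is simply an HMAC of the current 30-second interval, derived from a shared secret (RFC 6238). The sketch below is an illustration of how such codes are computed in general, using only the Python standard library; it is not Applitools' implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Compute an RFC 6238 Time-Based One-Time Password (SHA-1, 30 s steps)."""
    # Base32-decode the shared secret, restoring any stripped '=' padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of whole periods since the Unix epoch.
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This matches the RFC 6238 test vectors; for example, the reference secret `12345678901234567890` (base32-encoded) yields the 8-digit code 94287082 at T=59 seconds.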

Try Autonomous 2.1 for Free With Your Team

With Autonomous 2.1, you’ll experience faster test creation and simplified maintenance, particularly for workflows reused across multiple teams or applications. This release represents a significant step toward more efficient, scalable, and user-friendly testing processes.

Try out Autonomous 2.1 for free with your team and see how it can revolutionize your testing efforts.

What is Functional Testing? Types and Example (Full Guide)
https://app14743.cloudwayssites.com/blog/functional-testing-guide/
Thu, 19 Sep 2024 17:12:00 +0000

Learn what functional testing is in this complete guide, including an explanation of functional testing types and examples of techniques.


What is Functional Testing?

In functional testing, testers evaluate an application’s basic functionalities against a predetermined set of specifications. Using Black Box Testing techniques, functional tests measure whether a given input returns the desired output, regardless of any other details. Results are binary: tests pass or fail.

Why is Functional Testing Important?

Functional testing is important because, without it, you may not accurately understand whether your application functions as intended. An application may pass non-functional tests and otherwise perform well. Still, if the application doesn’t deliver the key expected outputs to the end-user, it cannot be considered working.

What is the Difference between Functional and Non-Functional Testing?

Functional tests check if the application meets specified functional requirements, while non-functional tests assess aspects like performance, security, scalability, and overall quality. To put it another way, functional testing is concerned with whether key functions are operating, and non-functional tests are more concerned with how the operations take place.

Examples of Functional Testing Types

There are many types of functional tests that you may want to complete as you test your application. 

A few of the most common include:

Unit Testing

Unit testing breaks down the desired outcome into individual units, allowing you to test if a small number of inputs (sometimes just one) produce the desired output. These tests tend to be small and quick to write and execute. Each one is designed to cover a single section of code (a function, method, object, etc.) and verify that code’s functionality.
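As a minimal sketch, a unit test exercises one function with a known input and asserts on the output. The `apply_discount` function and its tests below are hypothetical examples, not part of any particular framework:

```python
def apply_discount(price, percent):
    """Return the price after a percentage discount; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Each unit test covers a single behavior and simply passes or fails.
def test_half_off():
    assert apply_discount(80.0, 50) == 40.0

def test_rejects_invalid_percent():
    try:
        apply_discount(80.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # invalid input is correctly rejected
```

A runner such as pytest would discover and execute each `test_*` function automatically.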

Smoke Testing

Testers perform smoke testing to verify that the most critical parts of the application work as intended. It’s a first pass through the testing process and isn’t meant to be exhaustive. Smoke tests ensure that the application is operational on a basic level. If it’s not, there’s no need to progress to more detailed testing, and the application can go right back to the development team for review.
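The stop-early behavior described above can be sketched as a tiny runner: a handful of critical checks execute first, and any failure sends the build back before deeper testing begins. The app state and check names here are illustrative:

```python
def run_smoke_suite(checks):
    """Run critical checks in order; stop at the first failure."""
    for name, check in checks:
        if not check():
            return False, name  # no need to progress to detailed testing
    return True, None

# A toy stand-in for the application under test.
app = {"home_loads": True, "login_works": True, "search_works": False}

checks = [
    ("home page loads", lambda: app["home_loads"]),
    ("login works", lambda: app["login_works"]),
]

ok, failed = run_smoke_suite(checks)  # both critical checks pass here
```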

Sanity Testing

Sanity testing is a cousin of smoke testing: it verifies basic functionality so that clearly broken software can be rejected before detailed testing. Unlike smoke tests, sanity tests occur later in the process, confirming that a new code change achieves its intended effect. This ‘sanity check’ ensures the new code roughly performs as expected.

Integration Testing

Integration testing determines whether combinations of individual software modules function properly together. Individual modules may already have passed independent tests, but when they are dependent on other modules to operate successfully, this kind of testing is necessary to ensure that all parts work together as expected.

Regression Testing

Regression testing makes sure that the addition of new code does not break existing functionalities. In other words, did your new code cause the quality of your application to “regress” or go backward? Regression tests focus on recent changes and ensure that the whole application remains stable and functions as expected.

User Acceptance Testing (UAT)/Beta Testing

User acceptance testing (also called beta testing) involves exposing your application to a limited group of real users in a production environment. Teams use feedback from live users—who have no prior experience with the application and may uncover critical bugs unknown to internal teams—to make further changes before a full launch.

UI/UX Testing 

UI/UX testing evaluates the application’s graphical user interface. It verifies that UI components such as menus, buttons, and text fields behave correctly and deliver a good experience for the application’s users. UI/UX testing is also known as visual testing and can be manual or automated.

Other classifications of functional testing include black box testing, white box testing, component testing, API testing, system testing, and production testing.

How to Perform Functional Testing

The essence of a functional test involves three steps:

  • Determine the desired test input values
  • Execute the tests
  • Evaluate the resulting test output values

Essentially, when you execute a task with input (e.g., enter an email address into a text field and click submit), does your application generate the expected output (e.g., the user is subscribed and a thank-you page is displayed)?
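The three steps above can be sketched with the email-subscription scenario just described. The `subscribe` handler below is hypothetical; in a real suite the "execute" step would drive the actual application, for example through a browser-automation framework:

```python
import re

def subscribe(email):
    """Hypothetical subscribe handler: returns the page shown to the user."""
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return "thank-you"
    return "error"

# 1. Determine input values, 2. execute, 3. evaluate output.
# Results are binary: each assertion passes or fails.
assert subscribe("jane@example.com") == "thank-you"  # expected output shown
assert subscribe("not-an-email") == "error"          # invalid input rejected
```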

We can understand this further with a quick example.

Functional Testing Example

Let’s begin with a straightforward application: a calculator. 

To create a set of functional tests, you would need to:

  • Evaluate all the possible inputs – such as numbers and mathematical symbols – and design assertions to test their functionality
  • Execute the tests (either automated or manually)
  • Ensure that the application generates the desired outputs—for example, each mathematical function works as intended, the final result appears correctly in all cases, and the formula history displays accurately.
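The bullets above can be made concrete with a toy calculator. Everything here (the `calculate` and `press` helpers, the history format) is an illustrative sketch, not a real application:

```python
def calculate(a, op, b):
    """Minimal calculator core used to illustrate functional assertions."""
    if op == "/":
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b
    ops = {"+": a + b, "-": a - b, "*": a * b}
    if op not in ops:
        raise ValueError(f"unsupported operator: {op}")
    return ops[op]

history = []

def press(a, op, b):
    """Simulate a user entering a formula; record it in the history."""
    result = calculate(a, op, b)
    history.append(f"{a} {op} {b} = {result}")
    return result

# Assertions cover each function, the final result, and the history display.
assert press(2, "+", 3) == 5
assert press(10, "/", 4) == 2.5
assert history == ["2 + 3 = 5", "10 / 4 = 2.5"]
```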

For more on how to create a functional test, you can see a full guide on how to write an automated functional test for this example.

Functional Testing Techniques 

There are many functional testing techniques you might use to design a test suite for this:

  • Boundary value tests evaluate what happens if inputs are received outside of specified limits – such as a user entering a number that was too large (if there is a specified limit) or attempting to enter non-numeric input
  • Decision-based tests verify the results after a user decides to take an action, such as clearing the history
  • User-based tests evaluate how components work together within an application – if the calculator’s history was stored in the cloud, this kind of test would verify that it did so successfully
  • Ad-hoc tests can be done at the end to try and discover bugs other methods did not uncover by seeking to break the application and check its response

Other common functional testing techniques include equivalence testing, alternate flow testing, positive testing and negative testing.
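Boundary value testing, the first technique above, probes inputs just inside and just outside a specified limit. The display limit and `enter_number` validator below are assumptions made for illustration:

```python
MAX_DIGITS = 10  # assumed display limit for this hypothetical calculator

def enter_number(raw):
    """Validate calculator input at its boundaries."""
    if not raw.lstrip("-").isdigit():
        raise ValueError("non-numeric input")
    if len(raw.lstrip("-")) > MAX_DIGITS:
        raise ValueError("number too large for display")
    return int(raw)

# At the limit: accepted.
assert enter_number("9" * 10) == 9_999_999_999

# Just past the limit, and non-numeric input: both rejected.
for bad in ["9" * 11, "abc"]:
    try:
        enter_number(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```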

Automated Functional Testing vs Manual Functional Testing

Manual functional testing requires a developer or test engineer to design, create, and execute every test by hand. It is flexible and can be powerful with the right team. However, as software grows in complexity and release windows shrink, a purely manual strategy struggles to maintain broad test coverage.

Automated functional testing automates many parts of the testing process, allowing tests to run continuously without human interaction – and with less chance for human error. Recent improvements in AI mean that an increasing share of the design and analysis load can be handled autonomously with the right tool.

How to Use Automated Visual Testing for Functional Tests

One way to automate your functional tests is by using automated visual testing. Automated visual testing uses Visual AI to view software in the same way a human would, and can automatically highlight any unexpected differences with a high degree of accuracy.

Visual testing allows you to test for visual bugs, which are otherwise extremely challenging to uncover with traditional functional testing tools. For example, if an unrelated change shifts a ‘submit’ button to the far right of the page, making it unclickable for the user yet still technically present with the correct identifier, it would pass a traditional functional test. Visual testing, however, would catch this bug and ensure that functionality remains unaffected by visual regressions.
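The shifted-button scenario above can be sketched in a few lines. A locator-based check only asks "does the element exist?", while a visual check also asks "can the user actually see and reach it?". The DOM structure and coordinates below are illustrative, not any real framework's API:

```python
def element_exists(dom, element_id):
    """What a traditional locator-based functional check verifies."""
    return element_id in dom

def element_clickable(dom, element_id, viewport_width):
    """What a visual check additionally verifies: the element is in view."""
    el = dom[element_id]
    return el["x"] >= 0 and el["x"] + el["width"] <= viewport_width

# The submit button was pushed to x=1400 on a 1280 px viewport.
dom = {"submit": {"x": 1400, "width": 120}}

assert element_exists(dom, "submit")               # functional test: passes
assert not element_clickable(dom, "submit", 1280)  # visual test: catches the bug
```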

How to Choose an Automated Testing Tool?

Here are a few key considerations to keep in mind when choosing an automated testing tool:

  • Ease of Use: Is it something easy for your existing QA team to use, or easy to hire for? Does it require an extensive learning curve or can it be picked up quickly?
  • Flexibility: Can it be used across different platforms? Can it easily integrate with your current testing environment, and does it allow you the freedom to change your environment in the future?
  • Reusability/AI Assistance: How easy is it to reuse tests, particularly if the UI changes? Is there meaningful AI that can help you test more efficiently, particularly at the scale you need?
  • Support: What level of customer support do you require, and how easily can you receive it from the provider of your tool?

Automated testing tools can be paid or open source. Some popular open source tools include Selenium for web testing and Appium for mobile testing.

Why Choose Automated Visual Testing with Applitools

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy. Applitools includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Keep Learning

Looking to learn more about Functional Testing? Check out the resources below to find out more.

What tools are used for functional testing?

Popular tools for functional testing include Selenium, Playwright, and Applitools, which help automate testing processes and improve accuracy.

How do manual and automated functional testing differ?

Manual functional testing involves human testers executing tests, while automated functional testing uses scripts and tools to run tests quickly and repeatedly.

What are common types of functional testing?

Common types include unit testing, integration testing, system testing, and acceptance testing, each targeting different stages of the software development process.

What is functional testing in software development?

Functional testing evaluates an application’s core functionalities to ensure they work as expected, focusing on user requirements and interactions.

How does functional testing differ from non-functional testing?

Functional testing focuses on “what” the software does, verifying features and behaviors, while non-functional testing assesses “how” the software performs, covering aspects like speed and scalability.

How often should functional tests be run?

Teams should run functional tests continuously during development and before major releases to catch and fix issues early, ensuring quality at every stage.

Top 10 Visual Testing Tools
https://app14743.cloudwayssites.com/blog/top-10-visual-testing-tools/
Tue, 13 Aug 2024 14:06:00 +0000


Introduction

Visual regression testing, which validates user interfaces, plays a critical role in DevOps and CI/CD pipelines. The UI often impacts an application’s drop-off rate and directly affects customer experience. A malfunctioning front end harms a tech brand’s reputation and must be avoided at all costs.

What is Visual Testing?

Manual testing procedures alone cannot reliably catch intricate UI modifications, and automation scripts, while a possible solution, are often tedious to write and deploy. Visual testing, therefore, is a crucial element that detects changes to the UI and helps developers flag unwanted modifications.

Each visual regression testing cycle follows a similar structure: the tool captures and stores baseline images or screenshots of a UI. After every source code change, the visual testing tool takes snapshots of the interface and compares them with the baseline repository. If the images don’t match, the test fails, and the tool generates a report for the development team.
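The baseline-comparison cycle just described can be reduced to a minimal sketch: compare a new screenshot against the stored baseline and report where they differ. Real tools compare rendered screenshots (and Visual AI goes well beyond raw pixel equality); here images are simplified to 2-D grids of pixel values for illustration:

```python
def diff_images(baseline, candidate):
    """Return (x, y) coordinates of every pixel that differs."""
    mismatches = []
    for y, (row_b, row_c) in enumerate(zip(baseline, candidate)):
        for x, (pb, pc) in enumerate(zip(row_b, row_c)):
            if pb != pc:
                mismatches.append((x, y))
    return mismatches

baseline  = [[0, 0, 0], [0, 255, 0]]
candidate = [[0, 0, 0], [0, 254, 0]]  # one pixel changed after a code change

report = diff_images(baseline, candidate)
# A non-empty report means the test fails and the mismatch
# locations are sent to the development team for review.
```

Note that naive per-pixel equality is exactly why pixel-based tools produce false positives on anti-aliasing and dynamic content, which is the problem Visual AI addresses.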

Revolutionizing visual testing is Visual AI – a game-changing technology that automates the detection of visual issues in user interfaces. It also enables software testers to improve the accuracy and speed of testing. With machine learning algorithms, Visual AI can analyze visual elements and compare them to an established baseline to identify changes that may affect user experience. 

From font size and color to layout inconsistencies, Visual AI can detect issues that would otherwise go unnoticed. Automated visual testing tools powered by Visual AI, such as Applitools, improve testing efficiency and provide faster and more reliable feedback. The future of visual testing lies in Visual AI, and it has the potential to significantly enhance the quality of software applications.

Benefits of Visual Testing for Functional Testing

Visual testing plays a critical role in software testing by analyzing an application’s user interface and user experience. It ensures that the software looks and behaves as expected, displaying all elements correctly across different devices and platforms. Visual testing detects issues like layout inconsistencies, broken images, and text overlaps that could harm the user experience.

Automated visual testing tools like Applitools can scan web and mobile applications and identify any changes to visual elements. Effective visual testing can help improve application usability, increase user satisfaction, and ultimately enhance brand loyalty.

Visual testing and functional testing complement each other as two essential components of software testing. Functional testing ensures that the application’s features work as expected, while visual testing verifies that visual elements like layout, fonts, and images display correctly. Visual testing enhances functional testing by expanding test coverage, reducing testing time and resources, and improving testing accuracy.

Some more benefits of visual testing for functional testing are as follows:

  1. Quicker test script creation: Automated visual tests for a page or region can replace tedious functional tests built on unreliable assertion code. Applitools Eyes captures your screen and sends it to the Visual AI system for in-depth analysis.
  2. Slash debugging time to minutes: Applitools’ Root Cause Analysis highlights the exact CSS and DOM differences behind a visual change in a web app, cutting the time to debug functional tests to minutes.
  3. Maintain functional tests more effectively: Applitools Eyes, powered by Visual AI, groups similar modifications found across various screens of the application. Each group can then be classified as expected or unexpected with one click, which is much simpler than evaluating assertion code.

Further reading: https://app14743.cloudwayssites.com/solutions/functional-testing/

Top 10 Visual Testing Tools

The following section consists of 10 visual testing tools that you can integrate with your current testing suite.

1. Applitools

Applitools is one of the most popular tools in the market and is best known for using AI in visual regression testing. It offers feature-rich products like Eyes, Ultrafast Test Cloud, and Ultrafast Grid for efficient, intelligent, and automated testing.

Applitools is 20x faster than conventional test clouds, is highly scalable for your growing enterprise, and is super simple to integrate with all popular frameworks, including Selenium, WebDriver IO, and Cypress. The tool is state of the art for all your visual testing requirements, with the ‘smarts’ to know what minor changes to ignore, without any prior settings.

Applitools’ Auto-Maintenance and Auto-Grouping features are handy. According to the World Quality Report 2022-23, maintainability is the most important factor in determining test automation approaches, but it often requires a sea of testers and DevOps professionals on their toes, ready to resolve a wave of bugs. 

Cumbersome and expensive, this can undermine your strategy and harm your reputation. This is where Applitools comes in: Auto-Grouping categorizes bugs while Auto-Maintenance resolves them, leaving you the flexibility to jump in wherever needed.

Applitools Eyes is a Visual AI product that dramatically minimizes coding while maximizing bug detection and simplifying test updates. Eyes mimics the human eye to catch visual regressions with every app release. It can identify dynamic elements like ads or other customizations and ignore or compare them as desired.

Features:

  • Applitools invented Visual AI – a concept combining artificial intelligence with visual testing, making the tool indispensable in a competitive market. 
  • Applitools Eyes is intelligent enough to ignore dynamic content and minor modifications, without your intervention.
  • Applitools acts as an extension to your available test suite. It integrates seamlessly with all popular leading test automation frameworks like Selenium, Cypress, Playwright and others, as well as low-code tools like Tosca, Testim.io, and Selenium IDE.
  • Applitools provides Smart Assist and suggests improvements to your tests. You can analyze the generated report containing high-fidelity snapshots with regressions highlighted and execute the recommended tests with one click. 
  • Applitools simplifies bug fixes by automating maintenance – a feature that can minimize your testing hassles to almost zero.

Advantages:

  • Applitools makes cross-browser testing a breeze. With its Ultrafast Test Cloud, you can test your app across varying devices, browsers, and viewports with much faster and more efficient throughput. 
  • Not only does Eyes allow mobile and web access, but it also facilitates testing on PDFs and Components. 
  • Applitools is all for cyber security and eliminates the requirement for tunnel configuration. You can choose where to deploy the tool – a private cloud or a public one, without any security woes. 
  • Applitools uses Root Cause Analysis to tell you exactly where the regressions are without any unnecessary information or jargon.

Read more: Applitools makes your cross-browser testing 20x faster. Sign up for a free account to try this feature.

2. Aye Spy

Aye Spy is an often-underrated, open-source visual regression tool heavily inspired by BackstopJS and Wraith. Its creators set out to solve one problem above all: performance. Where other visual regression tools in the market fall short, Aye Spy delivers 40 UI comparisons in under 60 seconds (with an optimal setup, of course)!

Features:

  • Aye Spy requires Selenium Grid to work. Selenium Grid aids parallel testing on several computers, helping devs breeze through cross-browser testing. The creators of Aye Spy recommend using Docker images of Selenium for consistent results.
  • Amazon’s S3 is a data storage service used by firms across the globe. Aye Spy supports AWS S3 bucket for storing snapshots in the cloud.
  • The tool aims to maximize the testing performance by comparing up to 40 images in less than a minute with a robust setup. 

Advantages:

  • Aye Spy comes with clean documentation that helps you navigate the tool efficiently.
  • It is easy to set up and use. Aye Spy comes in a Docker package that is simple and straightforward to execute on multiple machines.

3. Hermione.js

Hermione, an open-source tool, streamlines integration and visual regression testing, though it is best suited to simpler websites. Prior knowledge of Mocha and WebdriverIO makes it easier to get started, and the tool supports parallel testing across multiple browsers. Hermione uses subprocesses to tackle the computational load that parallel testing brings. It also lets you run a subset of tests from a suite simply by passing the path to a test folder.

Features:

  • Hermione reruns failed tests but uses new browser sessions to eliminate issues related to dynamic environments. 
  • Hermione can be configured to use either the DevTools protocol or the WebDriver protocol; the latter requires Selenium Grid (for example, via the selenium-standalone package).

Advantages:

  • Hermione is user-friendly, supports custom commands, and exposes plugins as hooks, which developers use to build out their own test ecosystems.
  • Flaky, incidental failures are considerably reduced because Hermione re-executes failed tests.

4. Needle

Needle, supported by Selenium and Nose, is an open-source tool that is free to use. It follows the conventional visual testing structure and uses a standard suite of previously collected images to compare the layout of an app.

Features:

  • Needle executes the ‘baseline saving’ settings first to capture the initial screenshots of the interface. Running the same test again activates testing mode, taking new snapshots and comparing them against the test suite.
  • Needle allows you to play with viewport sizes to optimize testing interactive websites.
  • Needle can use ImageMagick, PerceptualDiff, or PIL as its comparison engine, with PIL being the default. ImageMagick and PerceptualDiff are faster than PIL and generate separate PNG files for failed test cases, highlighting the differences between the baseline and current layouts.
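This baseline-then-compare flow can be sketched generically; the snippet below is illustrative JavaScript, not Needle's actual Python API:

```javascript
// First run (baseline mode) stores the screenshot; later runs compare against it.
// Plain arrays stand in for decoded screenshot pixels.
function checkScreenshot(store, name, pixels, saveBaseline) {
  if (saveBaseline || !(name in store)) {
    store[name] = pixels.slice();
    return 'baseline saved';
  }
  const baseline = store[name];
  const matches =
    baseline.length === pixels.length && baseline.every((p, i) => p === pixels[i]);
  return matches ? 'pass' : 'fail';
}

const store = {};
checkScreenshot(store, 'header', [0, 1, 2], true);                 // baseline-saving run
const verdict = checkScreenshot(store, 'header', [0, 9, 2], false); // test run
// verdict === 'fail' because one pixel changed
```

Running the same test twice, first in baseline mode and then in test mode, is exactly the two-phase workflow Needle's features describe.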

Advantages:

  • Needle saves images to your local machine, allowing you to archive or delete them. File cleanup can be easily activated from the CLI.
  • Needle has straightforward documentation that is beginner-friendly and easy to follow.

5. Vizregress

Colin Williamson created Vizregress, a popular open-source tool, as a research project based on AForge.NET. He aimed to resolve a crucial gap: Selenium WebDriver, which Vizregress uses in the background, couldn't tell layouts apart when the CSS elements stayed the same but their visual rendering changed, a problem that could let a broken page slip through.

Vizregress uses AForge.NET to compare every pixel of the new and baseline images to determine whether they are equal. This is a complex approach, and an admittedly fragile one.
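To illustrate the underlying idea (a conceptual sketch, not Vizregress's actual AForge-based code), a naive per-pixel comparison reduces to walking two equally sized pixel arrays and counting mismatches:

```javascript
// Conceptual per-pixel comparison: returns the fraction of pixels that differ.
// Real tools compare decoded image bitmaps; plain arrays stand in for them here.
function pixelMismatchRatio(baseline, candidate) {
  if (baseline.length !== candidate.length) {
    throw new Error('images must have identical dimensions');
  }
  let mismatches = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== candidate[i]) mismatches++;
  }
  return mismatches / baseline.length;
}

// Identical images produce a ratio of 0; any deviation raises it.
const ratio = pixelMismatchRatio([255, 255, 0, 0], [255, 254, 0, 0]);
// ratio === 0.25, so a strict equality check would fail this comparison
```

This strictness is exactly why Vizregress requires consistent browser attributes: a single differing pixel is enough to fail a strict comparison.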

Features:

  • Vizregress automates visual regression testing using Selenium WebDriver. It uses Jenkins for continuous delivery. 
  • Vizregress allows you to mark zones on your webpage that you would like the tool to ignore during testing.
  • Vizregress requires consistent browser attributes like version and size.

Advantages:

  • Vizregress combines the features of Selenium WebDriver and AForge to provide a robust solution to a complex problem. 
  • Based on pixel analysis, the tool does an excellent job of identifying differences between baseline and new screenshots.

6. iOSSnapshotTestCase

Jonathan Dann and Todd Krabach created iOSSnapshotTestCase, previously known as FBSnapshotTestCase, and originally developed it within Facebook—though Uber now maintains it. This tool follows a visual testing structure, comparing test screenshots with baseline images of the UI.

iOSSnapshotTestCase uses tools like Core Animation and UIKit to generate screenshots of an iOS interface. These are then compared to specimen images in a repository. The test inevitably fails if the snapshots do not match. 

Features:

  • iOSSnapshotTestCase renames screenshots on the disk automatically. The names are generated based on the image’s selector and test class. Additionally, the tool generates a description of all failed tests.
  • The tool must be executed inside an app bundle or the Simulator to access UIKit. However, screenshot tests can still be written inside a framework but have to be saved as a test library bundle devoid of a Test Host.
  • A single test on iOSSnapshotTestCase can accommodate several screenshots. The tool also offers an identifier for this purpose.

Advantages:

  • iOSSnapshotTestCase allows a single test to capture screenshots across multiple devices and operating system versions.
  • The tool automates manual tasks like naming screenshots and generating failure messages.

7. VisualCeption

VisualCeption uses a straightforward, 5-step process to perform visual regression testing. It uses WebDriver to capture a snapshot, JavaScript for calculating element sizes and positions, and Imagick for cropping and comparing visual components. An exception, if raised, is handled by Codeception.

It is essential to note here that VisualCeption is a function created for Codeception. Hence, you cannot use it as a standalone tool – you must have access to Codeception, Imagick, and WebDriver to make the most out of it.

Features:

  • VisualCeption generates HTML reports for failed tests.
  • The visual testing process spans five steps; however, the long list of prerequisites can become a limitation for some teams.

Advantages:

  • VisualCeption is user-friendly once the setup is complete.
  • The report generation is automated on VisualCeption and can help you visualize the cause of test failure.

8. BackstopJS

BackstopJS is a testing tool that can be seamlessly integrated with CI/CD pipelines for catching visual regressions. Like other tools mentioned above, BackstopJS compares webpage screenshots with a standard test suite to flag any modifications exceeding a minimum threshold.

A popular visual testing tool, BackstopJS has formed the basis of similar tools like Aye Spy. 

Features:

  • BackstopJS can be easily automated using CI/CD pipelines to catch and fix regressions as and when they appear.
  • Report generation is hassle-free and elaborates why a test failed – with appropriately marked components highlighting the regressions.
  • BackstopJS can be configured for multiple devices and operating systems, and take into account varying resolutions and viewport sizes.
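To give a flavor of the configuration (the field names follow backstop.json conventions; the labels, URL, and threshold are placeholder values):

```javascript
// A minimal BackstopJS-style configuration (the contents of backstop.json,
// shown as a JS object); the URL, labels, and threshold are placeholders.
const backstopConfig = {
  id: 'demo_suite',
  viewports: [
    { label: 'phone', width: 375, height: 667 },
    { label: 'desktop', width: 1920, height: 1080 },
  ],
  scenarios: [
    {
      label: 'Homepage',
      url: 'http://localhost:3000/',
      misMatchThreshold: 0.1, // flag diffs above this percentage of pixels
    },
  ],
  report: ['browser'],
};
```

Baselines are captured with `backstop reference` and compared with `backstop test`; `backstop approve` promotes the latest run to the new baseline.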

Advantages:

  • BackstopJS is open-source and hence free to use. You can customize the tool to your needs (although heavy customization can become expensive in terms of engineering resources).
  • The tool is easy to operate with an intuitive, beginner-friendly interface.

9. Visual Regression Tracker

Visual Regression Tracker is an exciting tool that goes the extra mile to protect your data. It is self-hosted, meaning your information is unavailable outside your intranet network. 

In addition to the usual visual testing procedure, the tool helps you track your baseline images to understand how they change over time. Moreover, Visual Regression Tracker supports multiple languages including Python, Java, and JavaScript. 

Features:

  • Visual Regression Tracker is simple to use and straightforward to automate. It is agnostic about automation frameworks and integrates easily with whichever one you prefer.
  • The tool can ignore areas of an image you don’t want it to consider during testing.
  • Visual Regression Tracker can work with any device that can produce screenshots, including smartphones.
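Conceptually, ignoring an area amounts to masking those pixels in both images before the diff; the sketch below is illustrative, not the tool's actual implementation:

```javascript
// Zero out an ignored rectangle in a flat, row-major grayscale pixel array.
function maskRegion(pixels, width, { x, y, w, h }) {
  const out = pixels.slice();
  for (let row = y; row < y + h; row++) {
    for (let col = x; col < x + w; col++) {
      out[row * width + col] = 0;
    }
  }
  return out;
}

// A 2x2 image where only the top-left pixel (a dynamic ad, say) differs:
const baseline = maskRegion([9, 1, 2, 3], 2, { x: 0, y: 0, w: 1, h: 1 });
const current  = maskRegion([7, 1, 2, 3], 2, { x: 0, y: 0, w: 1, h: 1 });
// After masking, both arrays are [0, 1, 2, 3], so the comparison passes.
```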

Advantages:

  • The tool is open-source and user-friendly. It is available in a Docker container, making it easy to set up and kickstart testing.
  • Your data is kept safe within your network with the self-hosting capabilities of Visual Regression Tracker.

10. Galen Framework

Galen Framework is an open-source tool for testing web UI. It is primarily used for interactive websites. Although developed in Java, the tool offers multi-language support, including CSS and JavaScript. Galen Framework runs on Selenium Grid and can be integrated with any cloud testing platform. 

Features:

  • Galen is great for testing responsive website designs. It allows you to specify a screen size, resize the browser window accordingly, and capture screenshots as required.
  • Galen has built-in functions that facilitate more straightforward testing methods. These modules support complex operations like color scheme verification.

Advantages:

  • Galen Framework simplifies testing with enhanced syntax readability. 
  • The tool also offers HTML reports generated automatically for easy visualization of test failures.

Takeaway

Here is a quick recap of all 10 tools mentioned above:

  1. Applitools: It has numerous offerings, from Eyes to the Ultrafast Test Cloud, that automate the visual testing process and make it smart. Customers have noted a 50% reduction in maintenance efforts and a 75% reduction in testing time. With Applitools, AI validation takes the front-row seat and helps you create robust test cases effortlessly while saving you the most critical resource in the world – time.
  2. Aye Spy: It runs 40 UI comparisons in less than a minute. Aye Spy could be your solution if you are looking for a high-performance tool.
  3. Hermione: Hermione.js eliminates environment issues by re-running failed tests in a new browser session, which minimizes unexpected failures.
  4. Needle: Besides the usual visual regression testing functionality, the tool makes file cleanup easy. You choose to either archive or delete your test images.
  5. Vizregress: Vizregress analyzes and compares every pixel to mark regressions. If your browser attributes (like size and version) remain constant throughout your testing process, Vizregress can be a good fit.
  6. iOSSnapshotTestCase: The tool caters to your iOS apps and automates screenshot naming and report generation.
  7. VisualCeption: Built for Codeception, VisualCeption combines several frameworks to achieve the desired results. The downside is that the prerequisites are plenty; they can be avoided with either of the top two tools on this list (note: Aye Spy requires Selenium Grid to function).
  8. BackstopJS: Multiple viewport sizes and screen resolutions can be seamlessly handled by BackstopJS. Want a tool for multi-device testing? BackstopJS could be a good choice.
  9. Visual Regression Tracker: A holistic tool overall, Visual Regression Tracker allows you to mark sections of your image that you would like the tool to ignore, making your testing process more flexible and efficient.
  10. Galen Framework: Galen has built-in methods that make repetitive functionality easier.

The following comparison chart gives you an overview of all crucial features at a glance. Note how most tools have attributes that are ambiguous or undocumented. Applitools stands out in this list, giving you a clear view of its properties.

This summary gives you a good idea of the critical features of all the tools mentioned in this article. However, if you are looking for one tool that does it all with minimal resources and effort, select Applitools. Not only did they spearhead Visual AI testing, but they also fully automate cross-browser testing, requiring little to no intervention from you.

Customers have reported excellent results: 75% less time spent on testing and a 50% reduction in maintenance effort. To learn how Applitools can integrate seamlessly with your DevOps pipeline, request a demo today, or register for a free Applitools account.

Quick Answers

What is visual regression testing, and why is it important for UI?

Visual regression testing validates a user interface by checking for unintended visual changes after updates. It’s crucial because a malfunctioning UI can negatively impact user experience, causing higher drop-off rates and potentially damaging a brand’s reputation.

How does visual testing work in automated pipelines?

Visual testing tools capture baseline images of the UI and compare them to images taken after each code change. When differences are detected, the tool flags them and generates a report, helping development teams quickly identify and address unwanted modifications.

What role does Visual AI play in visual testing?

Visual AI uses machine learning to analyze visual elements and identify meaningful changes that could affect user experience. It improves testing accuracy by recognizing subtle issues like layout shifts or color changes, while ignoring minor, non-impactful differences.

How does visual testing benefit functional testing?

Visual testing enhances functional testing by covering visual elements like layout, fonts, and images, ensuring they display correctly across devices. This combination broadens test coverage, reduces time and resources, and improves test accuracy by catching UI issues early.

Why is Applitools recommended for visual testing?

Applitools leverages Visual AI to automate and optimize visual regression testing, providing features like Auto-Maintenance, Ultrafast Test Cloud, and Smart Assist. It’s widely compatible with popular frameworks, making it easy to integrate and effective in reducing testing time and maintenance efforts.

How do visual testing tools like Applitools impact DevOps and CI/CD pipelines?

Visual testing tools integrate with DevOps and CI/CD pipelines to provide continuous feedback on UI changes. Tools like Applitools ensure that every release undergoes thorough visual checks, helping teams maintain high-quality user experiences even with rapid code changes.

The post Top 10 Visual Testing Tools appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Functional Testing’s New Friend: Applitools Execution Cloud https://app14743.cloudwayssites.com/blog/functional-testings-new-friend-applitools-execution-cloud/ Mon, 11 Sep 2023 19:59:03 +0000 https://app14743.cloudwayssites.com/?p=51735 Dmitry Vinnik explores how the Execution Cloud and its self-healing capabilities can be used to run functional test coverage.

The post Functional Testing’s New Friend: Applitools Execution Cloud appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In the fast-paced and competitive landscape of software development, ensuring the quality of applications is of utmost importance. Functional testing plays a vital role in verifying the robustness and reliability of software products. As applications grow more complex, with long lists of use cases and ever-faster release cycles, organizations are challenged to conduct thorough functional testing across different platforms, devices, and screen resolutions.

This is where Applitools, a leading provider of functional testing solutions, becomes a must-have with its innovative offering, the Execution Cloud.

Applitools’ Execution Cloud is a game-changing platform that revolutionizes functional testing practices. By harnessing the power of cloud computing, the Execution Cloud eliminates the need for resource-heavy local infrastructure, providing organizations with enhanced efficiency, scalability, and reliability in their testing efforts. The cloud-based architecture integrates with existing testing frameworks and tools, empowering development teams to execute tests across various environments effortlessly.

This article explores how the Execution Cloud and its self-healing capabilities can be used to run our functional test coverage. We demonstrate this cloud platform’s features, like auto-fixing selectors caused by a change in the production code. 

Why Execution Cloud

As discussed, the Applitools Execution Cloud is a great tool to enhance any team’s quality pipeline.

One of the main features of this cloud platform is that it can “self-heal” our tests using AI. For example, if, during refactoring or debugging, one of the web elements had its selectors changed and we forgot to update related test coverage, the Execution Cloud would automatically fix our tests. This cloud platform would use one of the previous runs to deduce another relevant selector and let our tests continue running. 

This self-healing capability of the Execution Cloud allows us to focus on actual production issues without getting distracted by outdated tests. 
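The gist of the idea can be sketched in a few lines; this is purely illustrative, as the Execution Cloud's actual healing logic is AI-driven and far more sophisticated:

```javascript
// Try the original selector first, then fall back to alternatives recorded
// from previous passing runs until one still resolves to an element.
function healSelector(findElement, candidateSelectors) {
  for (const selector of candidateSelectors) {
    const element = findElement(selector);
    if (element) return { element, selector };
  }
  throw new Error('no candidate selector matched');
}

// Simulated page after a refactor renamed .button-counter to .button-count:
const page = { '.button-count': { text: 'Clicked 0 times' } };
const result = healSelector(
  (sel) => page[sel] || null,
  ['.button-counter', '.button-count'] // original first, recorded fallback second
);
// result.selector === '.button-count', so the test keeps running
```

Instead of failing on the stale `.button-counter` selector, the run continues against an alternative that still resolves, which is the behavior the Execution Cloud provides automatically.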

Functional Testing and Execution Cloud

It’s fair to say that Applitools has been one of the leading innovators and pioneers in visual testing with its Eyes platform. However, with the Execution Cloud in place, Applitools offers its users broader, more scalable test capabilities. This cloud platform lets us focus on all types of functional testing, including non-Visual testing.

One of the best features of the Execution Cloud is that it’s effortless to integrate into any test case with just one line. There is also no requirement to use the Applitools Eyes framework. In other words, we can run any functional test without creating screenshots for visual validation while utilizing the self-healing capability of the Execution Cloud.

Adam Carmi, Applitools CTO, demos the Applitools Execution Cloud and explores how self-healing works under the hood in this on-demand session.

Writing Test Suite

As we mentioned earlier, the Execution Cloud can be integrated with most test cases we already have in place! The only consideration is that, at the time of writing, the Execution Cloud only supports Selenium WebDriver across all languages (Java, JavaScript, Python, C#, and Ruby), WebdriverIO, and any other WebDriver-based framework. However, more test frameworks will be supported in the near future.

Fortunately, Selenium is a highly used testing framework, giving us plenty of room to demonstrate the power of the Execution Cloud and functional testing.

Setting Up Demo App

Our demo application will be a documentation site built using the Vercel Documentation template. It’s a simple app that uses Next.js, a React framework created by Vercel, a cloud platform that lets us deploy web apps quickly and easily.

To note, all the code for our version of the application is available here.

First, we need to clone the demo app’s repository: 

git clone git@github.com:dmitryvinn/docs-demo-app.git

We will need Node.js version 10.13 to work with this demo app, which can be installed by following the steps here.

After we set up Node.js, we should open a terminal, navigate into the project's directory, and install the necessary dependencies:

cd docs-demo-app

npm install

With the dependencies in place, we can start the app locally:

npm run dev

Now our demo app is accessible at ‘http://localhost:3000/’ and ready to be tested.

Docs Demo App 

Deploying Demo App

While the Execution Cloud allows us to run the tests against a local deployment, we will simulate the production use case by running our demo app on Vercel. The steps for deploying a basic app are very well outlined here, so we won’t spend time reviewing them. 

After we deploy our demo app, it will appear as running on the Vercel Dashboard:

Demo App Deployed on Vercel

Now, we can write our tests for a production URL of our demo application available at `https://docs-demo-app.vercel.app/`.

Setting Up Test Automation

Execution Cloud offers great flexibility when it comes to working with our tests. Rather than re-writing our test suites to run against this self-healing cloud platform, we simply need to update a few lines of code in the setup part of our tests, and we can use the Execution Cloud. 

For our article, our test case will validate navigating to a specific page and pressing a counter button. 

To make our work even more effortless, Applitools offers a great set of quickstart examples that were recently updated to support the Execution Cloud. We will start with one of these samples using JavaScript with Selenium WebDriver and Jest as our baseline.

We can use any Integrated Development Environment (IDE) to write tests like IntelliJ IDEA or Visual Studio Code. Since we use JavaScript as our programming language, we will rely on NPM for the build system and our test runner.

Our tests will use Jest as its primary testing framework, so we must add a particular configuration file called `jest.config.js`. We can copy-paste a basic setup from here, but in its shortest form, the required configurations are the following.

module.exports = {
  clearMocks: true,
  coverageProvider: "v8",
};

Our tests will require a `package.json` file which should include Jest, Selenium WebDriver, and Applitools packages. Our dependencies’ part of the `package.json` file should eventually look like the one below:

"dependencies": {

      "@applitools/eyes-selenium": "^4.66.0",

      "jest": "^29.5.0",

      "selenium-webdriver": "^4.9.2"

    },

After we install the above dependencies, we are ready to write and execute our tests.

Writing the Tests

Since we are running a purely functional Applitools test with its Eyes disabled (meaning we do not have a visual component), we will need to initialize the test and have a proper wrap-up for it.

In `beforeAll()`, we can set our test batching and naming along with configuring an Applitools API key.

To enable Execution Cloud for our tests, we need to ensure that we activate this cloud platform on the account level. After that’s done, in our tests’ setup, we will need to initialize the WebDriver using the following code:

let url = await Eyes.getExecutionCloudUrl();
driver = new Builder().usingServer(url).withCapabilities(capabilities).build();

For our test case, we will open a demo app, navigate to another page, press a counter button, and validate that the click incremented the value of clicks by one.

describe('Documentation Demo App', () => {
…
    test('should navigate to another page and increment its counter', async () => {
        // Arrange - go to the home page
        await driver.get('https://docs-demo-app.vercel.app/');

        // Act - go to another page and click a counter button
        await driver.findElement(By.xpath("//*[text() = 'Another Page']")).click();
        await driver.findElement(By.className('button-counter')).click();

        // Assert - validate that the counter was clicked
        const finalClickCount = await driver.findElement(By.className('button-counter')).getText();
        expect(finalClickCount).toContain('Clicked 1 times');
    });
…

Another critical aspect of running our test is that it’s a non-Eyes test. Since we are not taking screenshots, we need to tell the Execution Cloud when a test begins and ends. 

To start the test, we should add the following snippet inside the `beforeEach()` that will name the test and assign it to a proper test batch:

await driver.executeScript(
    'applitools:startTest',
    {
        'testName': expect.getState().currentTestName,
        'appName': APP_NAME,
        'batch': { "id": batch.getId() }
    }
)

Lastly, we need to tell our automation when the test is done and what were its results. We will add the following code that sets the status of our test in the `afterEach()` hook:

await driver.executeScript('applitools:endTest', { 'status': testStatus })

Now, our test is ready to be run on the Execution Cloud.

Running the Test

To run our test, we need to set the Applitools API key. We can do it in a terminal or have it set as a global variable:

export APPLITOOLS_API_KEY=[API_KEY]

In the above command, we need to replace [API_KEY] with the API key for our account. The key can be found in the Applitools Dashboard, as shown in this FAQ article.

Now, we need to navigate to the location where our tests are located and run the following npm test command in the terminal:

npm test

It will trigger the test suite that can be seen on the Applitools Dashboard:

Applitools Dashboard with Execution Cloud enabled

Execution Cloud in Action

It’s a well-known fact that apps go through a lifecycle. They get created, get bugs, change, and are ultimately shut down. This ever-changing lifecycle is what causes our tests to break. Whether it’s due to a bug or an accidental regression, it’s common for a test to fail after a change in an app.

Let’s say a developer working on a counter button component changes its class name to `button-count` from the original `button-counter`. There could be many reasons this change could happen, but nevertheless, these modifications to the production code are extremely common. 

What’s even more common is that the developer who made the change might forget or not find all the tests using the original class name, `button-counter`, to validate this component. As a result, these outdated tests would start failing, distracting us from investigating real production issues, which could significantly impact our users.

Execution Cloud and its self-healing capabilities were built specifically to address this problem. This cloud platform would be able to “self-heal” our tests that were previously running against a class name `button-counter`, and rather than failing these tests, the Execution Cloud would find another selector that hasn’t changed. With this highly scalable solution, our test coverage would remain the same and let us focus on correcting issues that are actually causing a regression in production.

Although we are running non-Eyes tests, the Applitools Dashboard still allows us to see several valuable materials, like a video recording of our test or to export WebDriver commands! 

Want to see more? Request a free trial of Applitools Execution Cloud.

Conclusion

Whether you are a small startup that prioritizes quick iterations, or a large organization that focuses on scale, Applitools Execution Cloud is a perfect choice for any scenario. It offers a reliable way for tests to become what they should be – the first line of defense in ensuring the best customer experience for our users.

With the self-healing capabilities of the Execution Cloud, we get to focus on real production issues that actively affect our customers. With this cloud platform, we are moving towards a space where tests don’t become something we accept as constantly failing or a detriment to our developer velocity. Instead, we treat our test coverage as a trusted companion that raises problems before our users do. 

With these functionalities, Applitools and its Execution Cloud quickly become a must-have for any developer workflow that can supercharge the productivity and efficiency of every engineering team.

The post Functional Testing’s New Friend: Applitools Execution Cloud appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Selenium 4: Chrome DevTools Protocol [What’s New] https://app14743.cloudwayssites.com/blog/selenium-4-chrome-devtools/ https://app14743.cloudwayssites.com/blog/selenium-4-chrome-devtools/#respond Sun, 02 Jul 2023 21:38:00 +0000 https://app14743.cloudwayssites.com/?p=24506 In this post, we will discuss one of the most anticipated features of Selenium 4 which is the new APIs for CDP (Chrome DevTools Protocol)! This addition to the framework provides a much greater control over the browser used for testing.

The post Selenium 4: Chrome DevTools Protocol [What’s New] appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In the previous post of the Selenium 4 blog series, we discussed some of the new features in Selenium 4.

In this post, we will discuss one of the most anticipated features of Selenium 4 which is the new APIs for CDP (Chrome DevTools Protocol)! This addition to the framework provides a much greater control over the browser used for testing.

I’ll share some of the capabilities of the Selenium 4 CDP APIs as well as practical use cases that can take our tests to the next level!

“The getDevTools() method returns the new DevTools object which allows you to send() the built-in Selenium commands for CDP. These commands are wrapper methods that make it cleaner and easier to invoke CDP functions.”

Shama Ugale

But first, what is Chrome DevTools?

Introduction to Chrome DevTools

Chrome DevTools is a set of tools built directly into Chromium-based browsers like Chrome, Opera, and Microsoft Edge to help developers debug and investigate websites.

With Chrome DevTools, developers have deeper access to the website and are able to:

  • Inspect Elements in the DOM
  • Edit elements and CSS on the fly
  • Check and monitor the site’s performance
  • Mock geolocations of the user
  • Mock faster/slower network speeds
  • Execute and debug JavaScript
  • View console logs
  • and so much more

Selenium 4 Chrome DevTools APIs

Selenium 4 has added native support for Chrome DevTools APIs. With these new APIs, our tests can now:

  • Capture and monitor the network traffic and performance
  • Mock geolocations for location-aware testing, localization, and internationalization
  • Change the device mode and exercise the responsiveness of the application

And that’s just the tip of the iceberg!

Selenium 4 introduces the new ChromiumDriver class, which includes two methods to access Chrome DevTools: getDevTools() and executeCdpCommand().

The getDevTools() method returns the new DevTools object which allows you to send() the built-in Selenium commands for CDP. These commands are wrapper methods that make it cleaner and easier to invoke CDP functions.

The executeCdpCommand() method also allows you to execute CDP methods but in a more raw sense. It does not use the wrapper APIs but instead allows you to directly pass in a Chrome DevTools command and the parameters for that command. The executeCdpCommand() can be used if there isn’t a Selenium wrapper API for the CDP command, or if you’d like to make the call in a different way than the Selenium APIs provide.

The Chromium-based drivers such as ChromeDriver and EdgeDriver now inherit from ChromiumDriver, so you also have access to the Selenium CDP APIs from these drivers as well.

Let’s explore how we can utilize these new Selenium 4 APIs to solve various use cases.

Simulating Device Mode

Most of the applications we build today are responsive, catering to end users on a variety of platforms and orientations, and on devices like phones, tablets, wearables, and desktops.

As testers, we might want to render our application at various dimensions to exercise its responsiveness.

How can we use Selenium’s new CDP functionality to accomplish this?

The CDP command to modify the device’s metrics is Emulation.setDeviceMetricsOverride, and this command requires input of width, height, mobile, and deviceScaleFactor. These four keys are mandatory for this scenario, but there are several optional ones as well.

In our Selenium tests, we could use the DevTools::send() method with the built-in setDeviceMetricsOverride() command; however, this Selenium API accepts 12 arguments – the 4 that are required as well as 8 optional ones. For any of the 8 optional arguments that we don’t need to send, we can pass Optional.empty().

However, to streamline this a bit by only passing the required parameters, I’m going to use the raw executeCdpCommand() instead as shown in the code below.

View the code on Gist.
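In case the embedded gist doesn’t render, here’s a minimal sketch of the idea (the line numbers discussed below refer to the original gist, not this sketch). It assumes Selenium 4 with ChromeDriver on the path; the viewport values are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import org.openqa.selenium.chrome.ChromeDriver;

public class DeviceModeDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();

        // Only the four required keys for Emulation.setDeviceMetricsOverride
        Map<String, Object> deviceMetrics = new HashMap<>();
        deviceMetrics.put("width", 375);
        deviceMetrics.put("height", 812);
        deviceMetrics.put("mobile", true);
        deviceMetrics.put("deviceScaleFactor", 3);

        // Raw CDP call: the command name plus a parameter map
        driver.executeCdpCommand("Emulation.setDeviceMetricsOverride", deviceMetrics);

        driver.get("https://www.google.com");
        driver.quit();
    }
}
```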

On line 19, I create a Map with the required keys for this command.

Then on line 26, I call the executeCdpCommand() method and pass two parameters: the command name as “Emulation.setDeviceMetricsOverride” and the device metrics Map with the parameters.

On line 27, I open the “Google” homepage which is rendered with the specifications I provided as shown in the figure below.


With a solution like Applitools Eyes, we can not only rapidly test across these different viewports with these new Selenium commands, but also catch any visual inconsistencies at scale. Eyes is intelligent enough not to report false positives for small, imperceivable changes in the UI resulting from different browsers and viewports.

Simulate Network Speed

Many users access web applications via handheld devices connected to Wi-Fi or cellular networks. It’s not uncommon to encounter a weak network signal, and therefore a slower internet connection.

It may be important to test how your application behaves under such conditions where the internet connection is slow (2G) or goes offline intermittently.

The CDP command to fake a network connection is Network.emulateNetworkConditions. Information on the required and optional parameters for this command can be found in the documentation.

With access to Chrome DevTools, it becomes possible to simulate these scenarios. Let’s see how.

View the code on Gist.
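If the gist doesn’t load, here’s a hedged sketch of the approach (the line numbers below refer to the original gist). Note that the CDP bindings are version-pinned, so the v85 package and the exact emulateNetworkConditions() signature may differ in your Selenium version; the throughput and latency values are illustrative.

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 to match your browser/Selenium version
import org.openqa.selenium.devtools.v85.network.Network;
import org.openqa.selenium.devtools.v85.network.model.ConnectionType;

public class NetworkConditionsDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Enable the Network domain before emulating conditions
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Simulate a slow 2G cellular connection (throughput is in bytes/second)
        devTools.send(Network.emulateNetworkConditions(
                false,                                   // offline
                100,                                     // latency in ms
                50_000,                                  // download throughput
                20_000,                                  // upload throughput
                Optional.of(ConnectionType.CELLULAR2G)));

        driver.get("https://www.google.com");
        driver.quit();
    }
}
```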

On line 21, we get the DevTools object by calling getDevTools(). Then we invoke the send() method to enable the Network, and then call send() again to pass in the built-in command Network.emulateNetworkConditions() and the parameters we’d like to send with this command.

Then finally, we open the Google homepage with the network conditions simulated.

Mocking Geolocation

Testing the location-based functionality of applications, such as different offers, currencies, taxation rules, freight charges, and date/time formats for various geolocations, is difficult because setting up infrastructure in all of these physical locations is rarely a cost-effective solution.

By mocking the geolocation, we can cover all the aforementioned scenarios and more.

The CDP command to fake a geolocation is Emulation.setGeolocationOverride. Information on the required and optional parameters for this command can be found in the documentation.

How can we achieve this with Selenium? Let’s walk through the sample code.

View the code on Gist.
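In case the gist doesn’t render, here’s a minimal sketch. The coordinates are illustrative (roughly North Carolina), and the v85 CDP package should be adjusted to match your Selenium and browser versions.

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 as needed
import org.openqa.selenium.devtools.v85.emulation.Emulation;

public class GeolocationDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Override the position the browser reports to the page
        devTools.send(Emulation.setGeolocationOverride(
                Optional.of(35.8235),    // latitude
                Optional.of(-78.8256),   // longitude
                Optional.of(100)));      // accuracy in meters

        driver.get("https://mycurrentlocation.net");
        driver.quit();
    }
}
```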

After obtaining a DevTools object, we can use its send() method to invoke the Emulation.setGeolocationOverride command, sending the latitude, longitude, and accuracy.

After overriding the geolocation, we open mycurrentlocation.net and see that North Carolina is the detected geolocation.


Capture HTTP Requests

With DevTools we can capture the HTTP requests the application is invoking and access the method, data, headers, and a lot more.

Let’s see how to capture HTTP requests, including the URI and the request method, with the sample code.

View the code on Gist.
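If the gist doesn’t render, here’s a hedged sketch of the same flow (the line numbers below refer to the original gist; the v85 package is version-pinned and may differ in your setup):

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 as needed
import org.openqa.selenium.devtools.v85.network.Network;

public class CaptureRequestsDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Start capturing network traffic
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Print the URL and HTTP method of every outgoing request
        devTools.addListener(Network.requestWillBeSent(), request -> {
            System.out.println("URL: " + request.getRequest().getUrl());
            System.out.println("Method: " + request.getRequest().getMethod());
        });

        driver.get("https://www.google.com");

        // Stop capturing once we're done
        devTools.send(Network.disable());
        driver.quit();
    }
}
```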

The CDP command to start capturing the network traffic is Network.enable. Information on the required and optional parameters for this command can be found in the documentation.

Within our code on line 22, we use the DevTools::send() method to send the Network.enable CDP command to enable capturing network traffic.

On line 23, a listener is added to listen to all the requests made by the application. For each request captured by the application we then extract the URL with getRequest().getUrl() and the HTTP Method with getRequest().getMethod().

On line 29, we open Google’s homepage and on the console the URI and HTTP methods are printed for all the requests made by this page.

Once we are done capturing the requests, we can send the CDP command Network.disable to stop capturing the network traffic as shown on line 30.

Access Console logs

We all rely on logs for debugging and analyzing failures. When testing an application with specific data or under specific conditions, logs help us debug and capture the error messages published in the Console tab of Chrome DevTools, giving us more insight.

We can capture the console logs through our Selenium scripts by calling the CDP Log commands as demonstrated below.

View the code on Gist.
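In case the gist doesn’t load, here’s a minimal sketch (the line numbers below refer to the original gist; the target URL is a placeholder for your application under test, and the v85 package is version-pinned):

```java
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 as needed
import org.openqa.selenium.devtools.v85.log.Log;

public class ConsoleLogsDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Enable console log capturing
        devTools.send(Log.enable());

        // Print the text and severity of every console entry as it arrives
        devTools.addListener(Log.entryAdded(), entry -> {
            System.out.println("Text: " + entry.getText());
            System.out.println("Level: " + entry.getLevel());
        });

        driver.get("https://example.com"); // replace with your application under test
        driver.quit();
    }
}
```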

Within our code on line 19, we use DevTools::send() to enable the console log capturing.

Then, we add a listener to capture all the console logs logged by the application. For each log captured by the application we then extract the log text with getText() and log level with getLevel() methods.

Finally, the application is opened and the console error logs published by the application are captured.

Capturing Performance Metrics

We build software iteratively and at a fast pace, so we should aim to detect performance bottlenecks just as iteratively. Poorly performing websites and slow-loading pages make for unhappy customers.

Can we validate these metrics along with our functional regression on every build? Yes, we can!

The CDP command to capture performance metrics is Performance.enable. Information for this command can be found in the documentation.

Let’s see how it’s done with Selenium 4 and the Chrome DevTools APIs.

View the code on Gist.
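If the gist doesn’t render, here’s a hedged sketch of the flow (the line numbers below refer to the original gist; the exact Performance.enable() signature and the metric names checked vary by Selenium and browser version):

```java
import java.util.List;
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 as needed
import org.openqa.selenium.devtools.v85.performance.Performance;
import org.openqa.selenium.devtools.v85.performance.model.Metric;

public class PerformanceMetricsDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Start collecting performance metrics
        devTools.send(Performance.enable(Optional.empty()));

        driver.get("https://www.google.com");

        // Retrieve everything captured so far, then stop collecting
        List<Metric> metrics = devTools.send(Performance.getMetrics());
        devTools.send(Performance.disable());

        // Print only the metrics we care about
        List<String> metricsToCheck =
                List.of("Timestamp", "Documents", "JSHeapUsedSize", "Nodes");
        metrics.stream()
                .filter(m -> metricsToCheck.contains(m.getName()))
                .forEach(m -> System.out.println(m.getName() + " = " + m.getValue()));

        driver.quit();
    }
}
```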

First, we create a session by calling the createSession() method from DevTools as on line 19.

Next, we enable DevTools to capture the performance metrics by sending the Performance.enable() command to send() as shown on line 20.

Once Performance capturing is enabled, we can open the application then send the Performance.getMetrics() command to send(). This will return a list of Metric objects which we then stream to get all of the names of the metrics captured as on line 25.

We then disable capturing the Performance on line 29 by sending the Performance.disable() command to send().

To see the metrics that we are interested in, we define a List called metricsToCheck then loop through this to print the metrics’ values.

Basic Authentication

Interacting with browser popups is not supported in Selenium, as it is only able to engage with DOM elements. This poses a challenge for pop-ups such as authentication dialogs.

We can bypass this by using the CDP APIs to handle the authentication directly with DevTools. The CDP command to set additional headers for the requests is Network.setExtraHTTPHeaders.

Here’s how to invoke this command in Selenium 4.

View the code on Gist.
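In case the gist doesn’t render, here’s a minimal sketch (the line numbers below refer to the original gist; the credentials and test site URL are illustrative assumptions, and the v85 package is version-pinned):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
// Version-pinned CDP package; adjust v85 as needed
import org.openqa.selenium.devtools.v85.network.Network;
import org.openqa.selenium.devtools.v85.network.model.Headers;

public class BasicAuthDemo {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Build a Basic auth header (credentials are illustrative)
        String encoded = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        Map<String, Object> headers = new HashMap<>();
        headers.put("Authorization", "Basic " + encoded);

        // Attach the header to every request, bypassing the browser's login popup
        devTools.send(Network.setExtraHTTPHeaders(new Headers(headers)));

        driver.get("https://the-internet.herokuapp.com/basic_auth"); // example test page
        driver.quit();
    }
}
```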

We begin by using the DevTools object to create a session and enable Network. This is demonstrated on lines 25-26.

Next, we open our website and then create the authentication header to send.

On line 35, we send the setExtraHTTPHeaders command to send() along with the data for the header. This is the part that will authenticate us and allow us to bypass the browser popup.

To test this out, we then click the Basic Authentication test link. If you try this manually, you’ll see the browser popup asking you to login. But since we sent the authentication header, we will not get this popup in our script.

Instead, we get the message “Your browser made it!”.

Summary

As you can see, Selenium has become a lot more powerful with the addition of the CDP APIs. We can now enhance our tests to capture HTTP network traffic, collect performance metrics, handle authentication, and emulate geolocations, time zones, and device modes, as well as anything else that is possible within Chrome DevTools!

In the next post of this series, we will explore more new Selenium 4 features such as Observability and enhanced exceptions along with the brand new Selenium Grid 4.

Quick Answers

How can I use the CDP API to simulate different device modes in Selenium 4?

With the Emulation.setDeviceMetricsOverride command, you can set specific screen dimensions, mobile views, and other parameters to test responsive designs. This enables you to see how your application performs on different devices directly within your tests.

What’s the benefit of simulating network speeds in testing?

Simulating network speeds, such as slow 2G or intermittent connectivity, helps identify how an application behaves under various real-world conditions. This is especially valuable for mobile users who might experience inconsistent network connections.

How do I capture HTTP requests in Selenium 4 using CDP?

You can capture HTTP requests by enabling the network with Network.enable and adding a listener to log requests. This allows you to monitor request URLs, methods, and other details, which can help troubleshoot issues related to API calls or network performance.

Why would I use geolocation mocking in Selenium testing?

Geolocation mocking is useful for testing location-based functionality, like currency, offers, or other regional settings, without physically being in each location. This is achievable with the Emulation.setGeolocationOverride command in the CDP API.

What are some practical use cases of the CDP APIs in Selenium 4?

The CDP APIs can be used to simulate device modes, capture network traffic, monitor console logs, manage basic authentication popups, and measure performance metrics. These capabilities help create realistic testing scenarios and improve the reliability of your automated tests.

The post Selenium 4: Chrome DevTools Protocol [What’s New] appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Future-Proofing Your Test Automation Pipeline https://app14743.cloudwayssites.com/blog/future-proofing-your-test-automation-pipeline/ Fri, 27 Jan 2023 20:02:48 +0000 https://app14743.cloudwayssites.com/?p=46171 Learn how to future-proof your test automation pipeline with Cypress and Applitools by adding tests that run from GitHub Actions. In this article, we’ll share how to ensure your test...

The post Future-Proofing Your Test Automation Pipeline appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Cypress Heroes app homepage

Learn how to future-proof your test automation pipeline with Cypress and Applitools by adding tests that run from GitHub Actions. In this article, we’ll share how to ensure your test automation pipeline can scale while staying reliable and easy to maintain.

Automating different types of tests

To illustrate our different types of test automation, we’ll be using the example project Cypress Heroes. In this full-stack TypeScript app, users can take the following actions:

  • Log in with an email and password
  • Like heroes, which increments the hero’s number of fans
  • Hire heroes, which increments the hero’s number of saves
  • Manage hero profile information like name, superpowers, and price

ICYMI: Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see Ely Lucas from Cypress demo the example project.

End-to-end testing

Cypress is traditionally known for end-to-end testing. You automate user interactions for specific scenarios from start to finish in the browser, and then run functional assertions to check the state of elements at each step. End-to-end tests run against an actual web server and hit the site just like a user would.

Measurable code coverage stats for your end-to-end testing can act as a health metric for your website or app. Adding coverage reports to your automation pipeline on each commit can help ensure you’re testing all parts of your code.

Component testing

If you’re using a component-based framework like React or Angular or a design system like Storybook, you can also do component testing to test UI components. In this example, we have a button component with a few tests that pass, the hero card test, and a test for the login form. These components are being mounted in isolation outside of your typical web server.

Think of component tests as “UI unit” tests. While they don’t give end-to-end coverage, they’re quick and easy to run.

API testing

For your back end, you’ll need to automate API tests. The example project is using a community-built plugin called cypress-plugin-api. This plugin provides an interface inside the Cypress app to test APIs. It’s really cool and super fun, and it allows you to automate tests that you would otherwise have to run manually in a tool like Postman.

Fun fact: Cypress Ambassador Filip Hric developed the cypress-plugin-api. Check out Filip’s Test Automation University courses.

The API tests in our example are in the separate server project. We can use the command npx cypress open, and then run those tests in Chrome. We can see all of our results, including the response status codes. We can view a POST request, the headers that were sent, the headers that were returned, and everything else you’d normally get from a tool like Postman.

And it’s just baked into the app, which is really nice. Cypress is basically a web app that tests a web app. And so, you could extend Cypress as an app with things like this to help you do your testing and to have it all seamlessly integrated.

Running a pipeline with GitHub Actions

The example project uses GitHub Actions to set up the test automation pipeline. When working on smaller projects, it’s easy to have CI interactions baked into your repository, all in one place.

Configuring your GitHub Action

With GitHub Actions, you declare everything you need in a YAML file in the .github/workflows folder. Your actions become part of your repository and are covered by version control. If you make any changes, you can review them easily with a simple line-by-line diff. GitHub Actions make it easy to automate processes alongside other interactions you make with your repository. For example, if you open a pull request, you can have it automatically kick off your tests and do linting. You can even perform static code analysis before merging changes.

Some environment variables are set at the top of the YAML file. The API URL is what the client app uses to communicate with the API. The example app is hooked up to send test results to Cypress Cloud. Those results can then be used for analytics, diagnostics, reporting, and optimizing our test workflows. Cypress Cloud also requires a GitHub token so it can do things like correctly identify which pull request is being merged.

For those new to GitHub Actions: You can define environment variables per step in a job, but declaring them at the top helps you update them painlessly.

Running each of our tests

To keep things simple, there is only one job right now in this GitHub Action. First, it checks out the code straight from GitHub. Next, it builds the project using the Cypress GitHub Action. The Cypress GitHub Action does a few things for you, like building your application and installing dependencies with npm or yarn.

Building first means that subsequent jobs don’t have to build the app again. We’ve set run test to false, which is a parameter to the Cypress GitHub Action, because we don’t want to run the tests here. We’ll be running the tests separately below.

We run our component tests in the same GitHub Action. We set install to false, since we installed everything above. Then we run our custom test command, which opens Cypress in run mode and initiates component testing. The record flag tells the GitHub Action to send the results to Cypress Cloud.

And then we have to start the client and server. For both end-to-end tests and API tests, the application must be up and running, unlike component tests. For the end-to-end tests and API tests, the example app is hitting live servers.

This run command will start both the React app and the Node server, and then it will run the end-to-end tests. We’re telling it again not to install the dependencies, since they were already installed. Then, we’re running the command to start the end-to-end testing. The wait command will wait to make sure that both the client URL and the API URL are up and running before it starts the tests. If the tests start before both URLs are up and running, some tests will fail.

Another thing that the Cypress GitHub Action does is that you have the option to wait for these services to be live before the testing starts. By default, the npm run test commands are going to use the Chromium browser built into Electron. If you want to test on other browsers, you must make sure those browsers are installed on the runner. Cypress provides Docker images that you can add to your configuration to download the different browsers. However, downloading additional browsers increases the file size and makes the runs take longer.

Make sure that the Cypress binary itself is downloaded and installed. It’s going to run headlessly. This is because the command set up in these scripts is run mode, which is headless, whereas open mode is with the UI.

And then it will run the API test, which is very similar to end-to-end tests, except that since we’re not hitting the actual client app – only hitting the API app – we’re only waiting to make sure that the API URL is up and running.
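Pulling those steps together, a trimmed-down workflow might look something like this. It is a sketch, not the project’s actual file: the action versions, script names, ports, and environment variable names are all assumptions.

```yaml
# .github/workflows/tests.yml – simplified sketch; names and ports are illustrative
name: tests
on: [push]

env:
  API_URL: http://localhost:3000/api                      # illustrative
  CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}   # for Cypress Cloud
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}               # lets Cloud identify the PR

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Install dependencies and build once; skip running tests in this step
      - uses: cypress-io/github-action@v5
        with:
          runTests: false
          build: npm run build

      # Component tests: no server needed
      - uses: cypress-io/github-action@v5
        with:
          install: false
          component: true
          record: true

      # End-to-end tests: start client + API and wait for both URLs
      - uses: cypress-io/github-action@v5
        with:
          install: false
          start: npm run start
          wait-on: 'http://localhost:4200, http://localhost:3000/api'
          record: true
```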

If your end-to-end test cases are written against your local database before pushing to GitHub Actions, those test cases could fail when someone else runs them on their system. In whatever kind of test automation you develop, you’ll need to handle test data properly to avoid collisions. There are many different strategies you can follow. For more information on this and solving sample data dependency, watch my talk Managing the Test Data Nightmare.

How long do the test suites take?

When running your tests with Cypress and GitHub Actions, the results are uploaded to Cypress Cloud. You can go into Cypress Cloud and actually watch replays of all these tests that happened. The entire pipeline run in the example was 3 minutes and 50 seconds for all three test suites.

The individual test suites we ran took the following times:

  • Component tests: 49 seconds
  • End-to-end tests: 1 minute and 18 seconds
  • API tests: 13 seconds

Improving test coverage with visual assertions

Since all the Cypress tests run inside of the browser window, you can visually inspect them to make sure they look correct. But this type of review is a manual step. If someone accidentally makes a change to the stylesheet, the site could no longer render properly, yet if we run the tests, they’ll still pass.

We can use Applitools Eyes to fix this issue.

Visual testing is meant to automate the things that traditional automation is not so good at. For example, as long as particular IDs on your page are in the DOM somewhere, your traditional automation scripts with something like Cypress are still going to find and interact with the elements. Applitools Eyes uses visual AI to look at an app and be able to detect these kinds of visual differences that traditional assertions struggle to capture. Let’s add some visual snapshots to these end-to-end tests.

Adding Applitools to your project

First, you’ll need an Applitools account. You can register a free Applitools account with your GitHub username or your email, and you’ll be good to go. You’ll need your Applitools API key for when we run tests with Applitools.

Next, we’ll need to install the Applitools SDK using npm install @applitools/eyes-cypress.

It can be a dev dependency or a regular dependency, whichever you prefer. In the example project, we use a dev dependency. We’re using the Applitools Eyes SDK for Cypress here, but Applitools provides SDKs for basically every major tool and framework.

Next, we’ll need to create an Applitools configuration file. Just as Cypress projects have a cypress.config.js file, we want one called applitools.config.js.

Configuring your Applitools runner

In the Applitools config file, we will specify the configuration for running visual tests. There’s a separation between declaring configuration and actually adding test steps.

One of the settings we want is called batchName, and we’re going to set that to “cy heroes visual tests” to reflect the name of our demo app. The batch name will appear in the Eyes Test Manager (or the Applitools “dashboard”) after we run our visual tests.

Next, we’ll set the browsers. This will be a list, with each item being an entry that specifies a browser configuration, including name, width, and height.

Typically, since Cypress runs inside of an Electron app, it can be challenging to test mobile browsers. However, the Applitools Ultrafast Grid enables us to render our visual snapshots on mobile devices. The settings for mobile devices are going to be a bit different than those for browsers. Instead of having a name, we’re going to have a device name.

Our applitools.config.js file is complete. When we run our tests – either locally or in the GitHub Action – Applitools will render the snapshots it captures on these four browser configurations in the Ultrafast Grid and report results using the batch name. Furthermore, the local platform doesn’t matter. Even if you run this test on Windows, the Ultrafast Grid can still render snapshots on Safari and mobile emulators. A snapshot is just going to be a capture of that full page. Applitools will do the re-rendering with the appropriate size, the appropriate browser configuration, and all that will happen in the cloud. Essentially you can do multi-browser and multi-platform testing with simple declarations.

Now that we have completed the configuration, let’s update the tests to capture visual snapshots, starting with the homepage.

Setting up our test suites

You need to make sure that your tests aren’t interfering with other tests. In these tests, we’re going through and modifying some of the heroes that are in the application. The state of the application changes per test, so to get around that, we’re creating a new hero just for working with our tests and deleting the hero after the tests.

In the example, we’re using Cypress tasks, which is code that actually runs on the Node process part of Cypress. It’s directly communicating with our database to add the hero, delete the hero, and all the other types of setup tasks that we want to do before we actually run our test.

So it’s going to happen for each of the tests, and then we’re visiting the homepage and getting access to the hero.

After the tests, we call cy.deleteHero, which calls the database to delete the hero. In the describe block, at the start of every test, we get our new hero. Then we find the hero card by its name, locate the button with the right selector, and click it.

This test makes sure that you must be logged in before you can like a hero. We’re making sure that the modal popped up, clicking OK on the modal, and then making sure the modal disappears and no longer exists.

Down below we have another suite for when a normal user is logged in. And so we’re using a custom Cypress command to log in with this username and password. You can define these custom commands that are like making your own function, encapsulating a little bit of logic so that it could be reusable.

So what we’re doing to test the login is going to the homepage, running the login process, and verifying the login was actually successful. The cy.session command caches a session for us so it can be restored later from cookies. This helps speed up your tests, since you don’t have to go through the whole login flow again each time.

We have another suite here for when an admin user is logged in, because an admin user can edit users and delete heroes.

In the example, negative login tests – where you use the wrong username and/or password – are under the component tests.

In the login form component test, when an email and password are invalid, an error should show. The example uses cy.intercept to mock the API request that goes to the server’s auth endpoint and returns a status code 401, which represents an invalid login.

You can either write a component test or an end-to-end test. In this case, a component test makes it easier to set up the mock data.

Adding a call to Applitools Eyes

With the test suites set up, we’re ready to add some visual snapshots here. We need to call an Eyes session using the Applitools Eyes SDK. The idea is that we open our eyes, and we can take visual snapshots. And then at the end of the test, we will close our eyes to say that we’ve captured all the snapshots for that session or for that test. And at that point, Applitools Eyes will upload the snapshots that are captured to the Applitools Eyes server, do all of the re-rendering of the things of those four browsers in the Ultrafast Grid. Then we can log into the Applitools dashboard and we can see exactly what happened with our testing.

To get autocomplete for the Eyes commands, we need to set up Applitools Eyes within the Cypress project. We already ran npm install on the package, so we’ll need to run npx eyes-setup.

We’ll call cy.eyesOpen in the homepage describe block, under the beforeEach method, passing an app name and test name for logging and reporting purposes. Putting the cy.eyesOpen call in beforeEach means it doesn’t need to be duplicated in every test case.

Then, in the afterEach block, you’ll call cy.eyesClose.

In this test, you must log in, make sure that the modal pops up, then click OK in the modal and make sure the modal disappears. So we’ll need one snapshot when the modal is up and one when the modal goes away. In this case, we’ll capture the whole window.

If we didn’t want to capture everything, we could actually capture a region, like a div or even an individual element. On a small scale, using the region option does not make a measurable difference in execution speed, but it gives you a way to tune the type of snapshot we want.

For capturing the next step, we can basically copy the whole call there and paste it, changing the tag to homepage with the modal dismissed.
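Assembled, the visual checks might look roughly like this. The selectors, app name, and test flow are illustrative sketches rather than the project’s actual code; the Eyes commands (cy.eyesOpen, cy.eyesCheckWindow, cy.eyesClose) come from the @applitools/eyes-cypress SDK.

```javascript
// homepage.cy.js – a hedged sketch; selectors and names are illustrative
describe('Homepage', () => {
  beforeEach(() => {
    // Open an Eyes session once per test
    cy.eyesOpen({
      appName: 'Cypress Heroes',
      testName: Cypress.currentTest.title,
    });
  });

  afterEach(() => {
    // Close the session so snapshots upload and render in the Ultrafast Grid
    cy.eyesClose();
  });

  it('asks the user to log in before liking a hero', () => {
    cy.visit('/');
    cy.get('[data-cy=like-button]').first().click(); // selector is an assumption

    // Full-window snapshot while the modal is visible
    cy.eyesCheckWindow('homepage with modal');

    cy.get('[data-cy=modal-ok]').click();            // selector is an assumption

    // Second snapshot after the modal is dismissed
    cy.eyesCheckWindow('homepage with the modal dismissed');
  });
});
```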

These snapshots are very straightforward to write, and something to consider is that you might arguably be able to remove some of those other assertions. The visual snapshot is going to capture everything in that window, so if it’s there and visible, we’re going to capture it and track it over time.

You would still need to keep all of your interactions, but you can remove most of your assertions checking visible elements. However, there are certainly things where if you want to check a very specific numeric value, you still want to keep those assertions.

Running the updated tests

All we need to do to run this test is make sure that we have our Applitools API key from our account saved as an environment variable of the Cypress application.

Note: If you happen to steal someone’s API key, it doesn’t really help you. It just means they’ll see your results, and you won’t. API keys should be kept secret and safe.

Using the Applitools Eyes dashboard

So to see the visual testing results, we will need to view them in the Applitools Eyes dashboard.

You can view your test results in a grid to see the UI quickly, or you can view your results in a list to see your configurations quickly.

On the left, you’ve got the batch name that was set. Then on the main part of the body, you’ll see there are actually four tests. We only wrote one test, but each test is run once per browser configuration we specified, providing cross-browser and cross-platform testing without additional steps.

If we open up the snapshots, you can see the two snapshots that we captured. These results are new, because this is the first time we’ve run the test.

We’ve established the snapshot as a baseline image, meaning anything in the future will be checked against that.

That’s where the visual aspect of the testing comes in. Your Cypress results will essentially tell you whether the app was functional at a bare-bones level, and then Eyes will tell you what it actually looked like. Together, you get richer results.

Resolving test results in the dashboard

Let’s see what this looks like if we make that visual change.

In the main file, we’ve updated the stylesheet and run the test again. There is no need to do anything in the Applitools Eyes dashboard before re-running the test.

The new test batch is in an unresolved state because Eyes detected a visual difference. In theory, a visual difference could be good or bad. You could be making an intentional change. Visual AI is basically a change detector that makes it obvious to you, the human, to decide what is good or bad. Then anytime Applitools Eyes sees the same kind of passing or failing behavior in the future, it’ll remember.

It’s important to note that the unresolved test results won’t stop your test automation job or your automation flow. Test automation would complete normally. You as the human tester would review visual test results in the Eyes Test Manager (the “dashboard”) afterwards. The pipeline would not wait for you to manually mark visual test results.

Let’s open up one of those snapshots so we can see it full screen.

In the upper left, below the View menu in the ribbon, there’s a dropdown to show both so that you can see the baseline and test side by side.

In the example, we had removed the stylesheet, so we can see very clearly that it’s very different. It’s not always this obvious. In this case, pretty much the whole screen is different. But if a single button were missing or something shifted a little bit, it would show that that specific area was different. That’s the power of the visual AI check.

Whenever Applitools detects a visual change, you can mark it as “passing” with a thumbs-up. That snapshot then automatically becomes the new baseline against which future checkpoints are compared. Applitools will also go through similar images in the background and automatically update those appropriately.

Note: If you ever want to “reset” snapshots, you can also delete the baselines and run your tests “fresh” as if for the first time. The snapshots they capture will automatically become new baseline images.

Once we’ve resolved all test results, we’ll need to save. Now, if we were to rerun our tests, Applitools Eyes would see the new snapshots and pass the tests as appropriate. If you have dynamic content or test data, you can add region annotations, which tell Eyes to ignore anything inside the region box.

It is possible to compare your production and staging environments. You can use our GitHub Integration to manage different branches or versions of your application. We also support different baselines for A/B testing.

Closing thoughts

That’s basically how you would do visual testing with Applitools and Cypress. There are two big points to remember if you want to add visual testing to your own test suites:

  • To get these tests running in your pipeline, the only change you’d have to make is to inject the Applitools API key in those environment variables.
  • We didn’t really add a fourth suite of tests. Visual testing is more of a technique or an aspect of testing, not necessarily its own category of tests. All you have to do is work in the SDK, capture some snapshots, and you’re good to roll.

We hope this guide has helped you to build out your test automation pipeline to be more reliable and scalable. If you liked the guide, check out our Applitools tutorials for other guides on building your test automation pipeline. Watch the on-demand recording of Future-Proofing Your Automation Pipeline to see the full walkthrough. To keep up-to-date with test automation, you can peruse our latest courses taught by industry-leading testing experts on Test Automation University. Happy testing!

The post Future-Proofing Your Test Automation Pipeline appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
What’s New in Cypress 12 https://app14743.cloudwayssites.com/blog/whats-new-in-cypress-12/ Tue, 10 Jan 2023 17:56:27 +0000 https://app14743.cloudwayssites.com/?p=45657 Right before the end of 2022, Cypress surprised us with their new major release: version 12. There wasn’t too much talk around it, but in terms of developer experience (DX),...

The post What’s New in Cypress 12 appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Cypress 12 is here

Right before the end of 2022, Cypress surprised us with their new major release: version 12. There wasn’t too much talk around it, but in terms of developer experience (DX), it’s arguably one of their best releases of the year. It removes some of the biggest friction points, adds new features, and provides better stability for your tests. Let’s break down the most significant ones and talk about why they matter.

No more “detached from DOM” errors

If you are a daily Cypress user, chances are you have seen an error saying something like “the element was detached from DOM”. This is usually caused by the element you tried to select being re-rendered, removed, or otherwise detached. With modern web applications, this happens quite often. Cypress could deal with this reasonably well, but the API was not intuitive enough. In fact, I listed this as one of the most common mistakes in my talk earlier this year.

Let’s consider the example from my talk. In a test, we want to do the following:

  1. Open the search box.
  2. Type “abc” into the search box.
  3. Verify that the first result is an item with the text “abc”.

As we type into the search box, an HTTP request is sent with every keystroke. Every response from that HTTP request then triggers re-rendering of the results.

The test will look like this:

it('Searching for item with the text "abc"', () => {
 
 cy.visit('/')
 
 cy.realPress(['Meta', 'k'])
 
 cy.get('[data-cy=search-input]')
   .type('abc')
 
 cy.get('[data-cy=result-item]')
   .first()
   .should('contain.text', 'abc')
 
})

The main problem here is that we ignore the HTTP requests that re-render our results. Depending on the moment when we call the cy.get() and cy.first() commands, we get different results. As the server responds with search results (different with each keystroke), our DOM gets re-rendered, making our “abc” item shift from second position to first. This means that our cy.should() command might make an assertion on a different element than we expect.

Typically, we rely on Cypress’ built-in retry-ability to do the trick. The only problem is that the cy.should() command will retry itself and the previous command, but it will not climb up the command chain to the cy.get() command.

This problem was fairly easy to solve in v11 and earlier, but the newest Cypress update has brought much more clarity to the whole flow. Instead of the cy.should() command retrying only itself and the previous command, it will now retry the whole chain, including the cy.get() command from our example.

In order to keep retry-ability sensible, the Cypress team has split commands into three categories:

  • assertions
  • actions
  • queries

These categories are reflected in Cypress documentation. The fundamental principle brought by version 12 is that a chain of queries is retried as a whole, instead of just the last and penultimate command. This is best demonstrated by an example comparing versions:

// Cypress v11:
cy.get('[data-cy=result-item]') // ❌ not retried
 .first() // retried
 .should('contain.text', 'abc') // retried
 
// Cypress v12:
cy.get('[data-cy=result-item]') // ✅ retried
 .first() // retried
 .should('contain.text', 'abc') // retried

cy.get() and cy.first() both fall into the queries category, which means they will be retried when cy.should() does not pass immediately. As always, Cypress will keep retrying until the assertion passes or the time limit runs out.
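The new retry semantics can be modeled in a few lines of plain JavaScript. This is only an illustration of the principle (not Cypress internals): every “query” in the chain is re-run on each attempt, so the assertion never sees a stale subject. The fake DOM below simulates the re-rendering search results from the earlier example:

```javascript
// Simplified model of Cypress v12 retry-ability (illustration only):
// the whole chain of queries is re-run until the assertion passes
// or the number of attempts is exhausted.
function retryChain(queries, assertion, { attempts = 10 } = {}) {
  let lastError
  for (let i = 0; i < attempts; i++) {
    try {
      // Re-run EVERY query from the start, like Cypress 12 does,
      // so we never assert against a stale ("detached") subject.
      const subject = queries.reduce((acc, q) => q(acc), undefined)
      assertion(subject)
      return subject
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}

// A fake DOM whose results re-render on each access, like the search example.
let renders = 0
const fakeDom = {
  results: () => (++renders < 3 ? ['xyz', 'abc'] : ['abc', 'xyz']),
}

const firstResult = retryChain(
  [() => fakeDom.results(), (list) => list[0]], // like cy.get(...).first()
  (text) => {
    if (text !== 'abc') throw new Error(`expected "abc", got "${text}"`) // like .should()
  }
)
console.log(firstResult) // 'abc' — found once the third render settles
```

In Cypress v11 and earlier, only the last two steps of the chain would be retried; re-running the whole chain is what makes the v12 behavior immune to re-renders.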

cy.session() and cy.origin() are out of beta

One of the biggest criticisms of Cypress.io has been the limited ability to visit multiple domains during a test. This is a huge blocker for many test automation engineers, especially if you need to use a third-party domain to authenticate into your application.

Cypress has advised using programmatic login and generally avoiding testing applications you do not control. While this is good advice, it is much harder to follow in real life, especially when you are in a hurry to reach good test coverage. It is much easier (and more intuitive) to navigate your app like a real user and automate a flow similar to their behavior.

This is why it seems so odd that it took Cypress so long to implement the ability to navigate through multiple domains. The reason is actually rooted in how Cypress is designed. Instead of calling browser actions from the outside the way tools like Playwright and Selenium do, Cypress injects the test script right inside the browser and automates actions from within. There are two iframes, one for the script and one for the application under test. Because of this design, browser security rules limit how these iframes interact and navigate. The groundwork for solving these limitations was actually laid in earlier Cypress releases and has finally landed in full with the version 12 release. If you want to read more about this, you should check out Cypress’ official blog post on the topic – it’s an excellent read.

There are still some specifics to navigating to a third-party domain in Cypress, best shown by an example:

it('Google SSO login', () => {
 
 cy.visit('/login') // primary app login page
 
 cy.getDataCy('google-button')
   .click() // clicking the button will redirect to another domain
 
 cy.origin('https://accounts.google.com', () => {
   cy.get('[type="email"]')
     .type(Cypress.env('email')) // google email
   cy.get('[type="button"]')
     .click()
   cy.get('[type="password"]')
     .type(Cypress.env('password')) // google password
   cy.get('[type="button"]')
     .click()
 })
 
 cy.location('pathname')
   .should('eq', '/success') // check that we have successfully logged in
 
})

As you can see, all the actions that belong to another domain are wrapped in the callback of the cy.origin() command. This separates out the actions that happen on the third-party domain.

The Cypress team actually developed this feature alongside another one that came out of beta: cy.session(). This command makes authenticating in your end-to-end tests much more effective. Instead of logging in before every test, you can log in just once, cache that login, and reuse it across all your specs. I recently wrote a walkthrough of this command on my blog and showed how you can use it instead of a classic page object.

This command is especially useful for the use case from the previous code example. Third-party login services usually have security measures in place that prevent bots or automated scripts from logging in too often. If you attempt to log in too many times, you might get hit with a CAPTCHA or some other rate-limiting measure. This is definitely a risk when running tens or hundreds of tests.

it('Google SSO login', () => {
 
 cy.visit('/login') // primary app login page
 cy.getDataCy('google-button')
   .click() // clicking the button will redirect to another domain
 
 cy.session('google login', () => {
   cy.origin('https://accounts.google.com', () => {
     cy.get('[type="email"]')
       .type(Cypress.env('email')) // google email
     cy.get('[type="button"]')
       .click()
     cy.get('[type="password"]')
       .type(Cypress.env('password')) // google password
     cy.get('[type="button"]')
       .click()
   })
 })
 
 cy.location('pathname')
   .should('eq', '/success') // check that we have successfully logged in
 
})

When running a test, Cypress will make a decision when it reaches the cy.session() command:

  • Is there a session called google login anywhere in the test suite?
    • If not, run the commands inside the callback and cache the cookies, local storage, and other browser data.
    • If yes, restore the cache assigned to a session called “google login.”

You can create multiple such sessions and test your application using different accounts. This is useful if you want to test different account privileges or simply see how the application behaves for different users. Instead of going through the login sequence through the UI or trying to log in programmatically, you can quickly restore a session and reuse it across all your tests.
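The caching decision above can be modeled with a tiny plain-JavaScript sketch (an illustration only, not how Cypress implements cy.session()): the setup callback runs once per session name, and subsequent calls with the same name restore the cached data instead:

```javascript
// Toy model of cy.session() caching: one setup run per session name.
const sessionCache = new Map()
let setupRuns = 0

function session(name, setup) {
  if (!sessionCache.has(name)) {
    sessionCache.set(name, setup()) // run the login flow once…
  }
  return sessionCache.get(name)     // …then restore it from cache
}

// A stand-in for a real login flow that produces browser data to cache.
const googleLogin = () => {
  setupRuns++
  return { cookies: { authToken: 'top_secret' } }
}

session('google login', googleLogin)
session('google login', googleLogin) // cache hit: setup is not re-run
session('admin login', googleLogin)  // different name: setup runs again

console.log(setupRuns) // 2
```

The real command also caches local storage and other browser state, and can validate that a restored session is still usable, but the one-setup-per-name principle is the same.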

This also means that you will reduce your login attempts to a minimum and avoid getting rate-limited by your third-party login service.

Run all specs in GUI

The Cypress GUI is a great companion for writing and debugging your tests. The version 10 release dropped support for the “Run all specs” button in the GUI. The community was not very happy about this change, so Cypress decided to bring it back.

The reason why it was removed in the first place is that it could produce some unexpected results. Simply put, this functionality merges all your tests into one single file. This can get tricky, especially if you use before(), beforeEach(), after() and afterEach() hooks in your tests. These would often get stacked in an unexpected order. Take the following example:

// file #1
describe('group 1', () => {
 it('test A', () => {
   // ...
 })
})
 
it('test B', () => {
 // ...
})
 
// file #2
before( () => {
 // ...
})
 
it('test C', () => {
 // ...
})

If this runs as a single file, the order of execution would be:

  • before() hook
  • test B
  • test C
  • test A

This is mainly caused by how the Mocha framework executes blocks of code. If you properly wrap every test in a describe() block, you will get far fewer surprises, but that’s not always what people do.

On the other hand, running all specs can be really useful when developing an application. I use this feature to get immediate feedback on changes I make in my code when I work on my Cypress plugin for API testing. Whenever I make a change, all my tests re-run and I can see all the bugs that I’ve introduced.

Running all specs is now behind an experimental flag, so you need to set experimentalRunAllSpecs to true in your cypress.config.js configuration file.
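As a sketch, the flag goes into the e2e block of your configuration (assuming a JavaScript cypress.config.js):

```javascript
// cypress.config.js — opting in to the experimental "Run all specs" button
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    experimentalRunAllSpecs: true,
  },
})
```

With the flag set, the “Run all specs” entry reappears in the spec list of the GUI.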

Test isolation

It is always a good idea to keep your tests isolated. If your tests depend on one another, they can create a domino effect: the first test’s failure makes all the subsequent tests fail as well. Things get even hairier when you bring parallelization into the equation.

You could say that Cypress is an opinionated testing framework, but my personal take is that this is a good opinion to have. The way Cypress enforces test isolation with this update is simple: between every test, Cypress navigates from your application to a blank page. So in addition to all the cleanup Cypress did before (clearing cookies and local storage), it will now make sure to “restart” the tested application as well.

In practice, the test execution would look something like this:

it('test A', () => {
 cy.visit('https://staging.myapp.com')
 // ...
 // your test doing stuff
})
 
// navigates to about:blank
 
it('test B', () => {
 cy.get('#myElement') // nope, will fail, we are at about:blank
})

This behavior is configurable, so if you need some time to adjust to this change, you can set testIsolation to false in your configuration.
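For example, a cypress.config.js opting out might look like this (a sketch; testIsolation defaults to true in Cypress 12):

```javascript
// cypress.config.js — temporarily disabling the new test isolation behavior
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    testIsolation: false,
  },
})
```

Treat this as a transition aid rather than a permanent setting: keeping isolation on is what protects you from the domino effect described above.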

Removal of deprecated commands and APIs

Some of the APIs and commands reached end of life with the latest Cypress release. For example, cy.route() and cy.server() have been replaced by the much more powerful cy.intercept() command, which was introduced back in version 6.

The more impactful change was the removal of the Cypress.Cookies.defaults() and Cypress.Cookies.preserveOnce() APIs, which were used to control how cookies are cleared and preserved. With the introduction of cy.session(), these APIs no longer fit well into the system. The migration from these APIs to cy.session() might not seem straightforward, but it is quite simple when you look at it.

For example, instead of using the Cypress.Cookies.preserveOnce() function to prevent deletion of certain cookies, you can use cy.session() like this:

beforeEach(() => {
 cy.session('importantCookies', () => {
   cy.setCookie('authentication', 'top_secret');
 })
});
 
it('test A', () => {
 cy.visit('/');
});
 
it('test B', () => {
 cy.visit('/');
});

Also, instead of using Cypress.Cookies.defaults() to set up default cookies for your tests, you can add a global beforeEach() hook to your cypress/support/e2e.js support file that does the same thing, as shown in the previous example.

Besides these, there were a couple of bug fixes and smaller tweaks, all of which can be viewed in the Cypress changelog. Overall, I think the v12 release of Cypress is a bit of an unsung hero. The rewriting of query commands and the availability of the cy.session() and cy.origin() commands may not seem like a big deal on paper, but they make the experience much smoother than it was before.

The new command queries might require some rewriting of your tests, but I would advise you to upgrade as soon as possible, as this update will bring much more stability to your tests. I’d also advise rethinking your test suite and integrating cy.session() into your tests, as it might not only handle your login actions more elegantly but also shave minutes off your test run.

If you want to learn more about Cypress, you can come visit my blog, subscribe to my YouTube channel, or connect with me on Twitter or LinkedIn.

The post What’s New in Cypress 12 appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
UI Testing: A Getting Started Guide and Checklist https://app14743.cloudwayssites.com/blog/ui-testing-guide/ Thu, 01 Sep 2022 20:32:38 +0000 https://app14743.cloudwayssites.com/?p=42155 Learn everything you need to know about how to perform UI testing, why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

The post UI Testing: A Getting Started Guide and Checklist appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

Learn everything you need to know about how to perform UI testing, including why it’s important, a demo of a UI test, and tips and tricks to make UI testing easier.

When users explore web, mobile or desktop applications, the first thing they see is the User Interface (UI). As digital applications become more and more central to the way we all live and work, the way we interact with our digital apps is an increasingly critical part of the user experience.

There are many ways to test an application: Functional testing, regression testing, visual testing, cross-browser testing, cross-device testing and more. Where does UI testing fit into this mix?

UI testing is essential to ensure that the usability and functionality of an application performs as expected. This is critical for delivering the kinds of user experiences that ensure an application’s success. After all, nobody wants to use an app where text is unreadable, or where buttons don’t work. This article will explain the fundamentals of UI testing, why it’s important, and supply a UI testing checklist and examples to help you get started.

What is UI Testing?

UI testing is the process of validating that the visual elements of an application perform as expected. In UI Testing, graphical components such as text, radio buttons, checkboxes, buttons, colors, images and menus are evaluated against a set of specifications to determine if the UI is displaying and functioning correctly.

Why is UI Testing Important?

UI testing is an important way to ensure an application has a reliable UI that always performs as expected. It’s critical for catching visual and even functional bugs that are almost impossible to detect using other kinds of testing.

Modern UI testing, which typically utilizes visual testing, works by validating the visual appearance of an application, but it does much more than make sure things simply look correct. Your application’s functionality can be drastically affected by a visual bug. UI testing is critical for verifying the usability of your UI.

Note: What’s the difference between UI testing and GUI testing? Modern applications are heavily dependent on graphical user interfaces (GUIs). Traditional UI testing can include other forms of user interfaces, including CLIs, or can use DOM-based coded locators to try and verify the UI rather than images. Modern UI testing today frequently involves visual testing.

Let’s take an example of a visual bug that slipped into production from the Southwest Airlines website:

Visual Bug on Southwest Airlines App

Under a traditional functional testing approach, this page would pass the test suite: all the elements are present and have loaded successfully. But for the user, the visual bug is easy to see.

This does more than deliver a negative user experience that may harm your brand. In this example, the Terms and Conditions are directly overlapping the ‘continue’ button. It’s literally impossible for the user to check out and complete the transaction. That’s a direct hit to conversions and revenue.

With good UI testing in place, bugs like these will be caught before they become visible to the user.

UI Testing Approaches

Manual Testing

Manual UI testing is performed by a human tester, who evaluates the application’s UI against a set of requirements. This means the manual tester must perform a set of tasks to validate that the appearance and functionality of every UI element under test meets expectations. The downsides of manual testing are that it is a time-consuming process and that test coverage is typically low, particularly when it comes to cross-browser or cross-device testing or in CI/CD environments (using Jenkins, etc.). Effectiveness can also vary based on the knowledge of the tester.

Record and Playback Testing

Record and Playback UI testing uses automation software and typically requires limited or no coding skill to implement. The software first records a set of operations executed by a tester, and then saves them as a test that can be replayed as needed and compared to the expected results. Selenium IDE is an example of a record and playback tool, and there is even one built directly into Google Chrome.

Model-Based Testing

Model-based UI testing uses a graphical representation of the states and transitions that an application may undergo in use. This model allows the tester to better understand the system under test. That means tests can be generated and potentially automated more efficiently. In its simplest form, the approach requires the steps below:

  1. Build a model representing the system
  2. Determine the inputs
  3. Understand the expected outputs
  4. Execute the tests and compare the results against expectations
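Steps 1 and 2 can be sketched in plain JavaScript. The model below is a hypothetical login flow (the state names and actions are invented for illustration); enumerating its transitions yields test cases automatically:

```javascript
// A toy model of a login flow: each state maps user actions to next states.
const model = {
  start:     { open: 'loggedOut' },
  loggedOut: { submitBad: 'error', submitGood: 'loggedIn' },
  error:     { retry: 'loggedOut' },
  loggedIn:  {},
}

// Generate one test case per transition: (from state, action, expected state).
function generateCases(model) {
  const cases = []
  for (const [from, transitions] of Object.entries(model)) {
    for (const [action, to] of Object.entries(transitions)) {
      cases.push({ from, action, to })
    }
  }
  return cases
}

for (const c of generateCases(model)) {
  console.log(`From "${c.from}", doing "${c.action}" should lead to "${c.to}"`)
}
```

Each generated case can then be executed against the real UI and its outcome compared with the model’s expectation (step 4).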

Automated UI Testing vs Manual UI Testing

Benefits of Manual UI Testing

Manual testing, as we have seen above, has a few severe limitations. Because the process relies purely on humans performing tasks one at a time, it is a slow process that is difficult to scale effectively. Manual testing does, however, have advantages:

  • Manual testing can potentially be done with little to no tooling, and may be sufficient for early application prototypes or very small apps. 
  • An experienced manual tester may be able to discover bugs in edge-cases through ad-hoc or exploratory testing, as well as intuitively “feel” the user experience in a way that is difficult to understand with a scripted test.

Benefits of Automated UI Testing

In most cases automation will help testing teams save time by executing pre-determined tests repeatedly. Automation testing frameworks aren’t prone to human errors and can run continuously. They can be parallelized and executed easily at scale. With automated testing, as long as tests are designed correctly they can be run much more frequently with no loss of effectiveness. 

Automation testing frameworks may be able to increase efficiency even further with specialized capabilities for things like cross-browser testing, mobile testing, visual AI and more.

UI Testing Checklist of Test Cases

On the surface, UI testing is simple – just make sure everything “looks” good. Once you poke beneath that surface, testers can quickly find themselves encountering dozens of different types of UI elements that require verification. Here is a quick checklist you can use to make sure you’ve considered all the most common items.

UI Testing Checklist – Common Tests

  • Text: Can all text be read? Is the contrast legible? Is anything covered by another element?
  • Forms, Fields and Pickers: Are all text fields visible, and can text be entered and submitted? Do all dropdowns display correctly? Are validation requirements (such as a date in a datepicker) upheld?
  • Navigation and Sorting: Whether it’s a site menu, a sortable table or a multi-page form, can the user navigate via the UI? Do all dropdowns display? Can all options be clicked/tapped, and do they have the desired effect? 
  • Buttons and Links: Are all buttons and links visible? Are they formatted consistently? Can they be selected, and do they take the user to the intended pages?
  • Responsiveness: When you adjust the resolution, do all of the above UI elements continue to behave as intended?

Each of the above must be tested across every page, table, form and menu that your application contains. 

It’s also a good practice to test the UI for specific critical end-to-end user journeys. For example, making sure that it’s possible to journey smoothly from: User clicks Free Trial Signup (Button) > User submits Email Address (Form) > User Logs into Free Trial (Form) > User has trial access (Product)

Challenges of UI Testing

UI testing can be a challenge for many reasons. With the proper tooling and preparation these challenges can be overcome, but it’s important to understand them as you plan your UI testing strategy.

  • User Interfaces are complex: As we’ve discussed above, there are numerous distinct elements on each page that must be tested. Embedded forms, iFrames, dropdowns, tables, images, videos and more must all be tested to be sure the UI is working as intended.
  • User Interfaces change fast: For many applications the UI is in a near-constant state of flux, as frequent changes to the text, layout or links are implemented. Maintaining full coverage is challenging when this occurs.
  • User Interfaces can be slow: Testing the UI of an application can take time, especially compared to smaller and faster tests like unit tests. Depending on the tool you are using, this can make them feel difficult to run as regularly.
  • Testing script bottlenecks: Because the UI changes so quickly, not only do testers have to design new test cases, but depending on your tooling, you may have to constantly create new coded test scripts. Testing tools with advanced capabilities, like the Visual AI in Applitools, can mitigate this by requiring far less code to deliver the same coverage.

UI Testing Example

Let’s take an example of an app with a basic use case, such as a login screen.

Even a relatively simple page like this one will have numerous important test cases (TC):

  • TC 1: Is the logo at the top appropriate for the screen, and aligned with brand guidelines?
  • TC 2: Is the title of the page displaying correctly (font, label, position)?
  • TC 3: Is the dividing line displaying correctly? 
  • TC 4: Is the Username field properly labeled (font, label, position)?
  • TC 5: Is the icon by the Username field displaying correctly?
  • TC 6: Is the Username text field accepting text correctly (validation, error messages)?
  • TC 7: Is the Password field properly labeled (font, label, position)?
  • TC 8: Is the icon by the Password field displaying correctly?
  • TC 9: Is the Password text field accepting text correctly (validation, error messages)?
  • TC 10: Is the Log In button text displaying correctly (font, label, position)?
  • TC 11: Is the Log In button functioning correctly on click (clickable, verify next page)?
  • TC 12: Is the Remember Me checkbox title displaying correctly (font, label, position)?
  • TC 13: Is the Remember Me checkbox functioning correctly on click (clickable, checkbox displays, cookie is set)?

Simply testing each scenario on a single page can be a lengthy process. Then, of course, we encounter one of the challenges listed above – the UI changes quickly, requiring frequent regression testing.

How to Simplify UI Testing with Automation

Performing this regression testing manually while maintaining the level of test coverage necessary for a strong user experience is possible, but would be a laborious and time-consuming process. One effective strategy to simplify this process is to use automated tools for visual regression testing to verify changes to the UI.

Benefits of Automated Visual Regression Testing for UI Testing

Visual regression testing is a method of ensuring that the visual appearance of the application’s UI is not negatively affected by any changes that are made. While this process can be done manually, modern tools can help you automate your visual testing to verify far more tests far more quickly.
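At its core, visual regression comparison diffs a new capture against a stored baseline. The toy sketch below uses invented pixel data and naive per-pixel equality; real engines (including Applitools’ Visual AI) are far more sophisticated, but the principle is the same:

```javascript
// Toy visual regression check: compare a baseline "screenshot" to a new
// capture and report the fraction of pixels that changed.
function diffRatio(baseline, candidate) {
  if (baseline.length !== candidate.length) {
    throw new Error('Screenshots must have the same dimensions')
  }
  let changed = 0
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== candidate[i]) changed++
  }
  return changed / baseline.length
}

// 1x8 "images" encoded as arrays of pixel values
const baseline  = [0, 0, 1, 1, 0, 0, 1, 1]
const candidate = [0, 0, 1, 1, 0, 1, 1, 1] // one pixel differs

const ratio = diffRatio(baseline, candidate)
console.log(ratio) // 0.125

const threshold = 0.01
console.log(ratio > threshold ? 'FAIL: visual change detected' : 'PASS')
```

The weakness of raw pixel comparison is exactly why AI-based approaches exist: anti-aliasing or a one-pixel shift trips a naive diff, while a human (or Visual AI) would not consider it a meaningful change.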

Automated Visual UI Testing Example

Let’s return to our login screen example from earlier. We’ve verified that it works as intended, and now we want to make sure any new changes don’t negatively impact our carefully tested screen. We’ll use automated visual regression testing to make this as easy as possible.

  1. We start with our verified login screen from earlier as the baseline.
  2. Next, we make a change by adding a row of social buttons. Unfortunately, this inadvertently renders our login button unusable by pushing it up into the password field.
  3. We use our automated visual testing tool to evaluate the change against the baseline. In our example, we’ll use a tool that utilizes Visual AI to highlight only the relevant areas of change that a user would notice. The tool brings our attention to the new social buttons, along with the section around the now-unusable button, as areas of concern.
  4. A test engineer then reviews the comparison. Any intentional changes that were flagged are marked as accepted changes. On some screens we might expect changes in certain dynamic areas, and these can be flagged for Visual AI to ignore going forward.

    We need to address only the remaining flagged areas. In our example, every area flagged in red is problematic – we need to shift the social buttons down and move the login button out of the password field. Once we’ve done this, we run the test again, and a new baseline is created only when everything passes. The final result is free of visual defects.

Why Choose Automated Visual Regression Testing with Applitools for UI Testing

Applitools has pioneered the best Visual AI in the industry, and it’s able to automatically detect visual and functional bugs just as a human would. Our Visual AI has been trained on billions of images with 99.9999% accuracy and includes advanced features to reduce test flakiness and save time, even across the most complicated test suites.

The Applitools Ultrafast Test Cloud includes unique features like the Ultrafast Grid, which can run your functional & visual tests once locally and instantly render them across any combination of browsers, devices, and viewports. Our automated maintenance capabilities make use of Visual AI to identify and group similar differences found across your test suite, allowing you to verify multiple checkpoint images at once and to replicate maintenance actions you perform for one step in other relevant steps within a batch.

You can find out more about the power of Visual AI through our free report on the Impact of Visual AI on Test Automation. Check out the entire Applitools platform and sign up for your own free account today.

Happy Testing!

Read More

The post UI Testing: A Getting Started Guide and Checklist appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
What is Regression Testing? Definition, Tutorial & Examples https://app14743.cloudwayssites.com/blog/regression-testing-guide/ Fri, 01 Jul 2022 16:08:00 +0000 https://app14743.cloudwayssites.com/?p=33704 Learn everything you need to know about what regression testing is, best practices, how you can apply it in your own organization and much more.

The post What is Regression Testing? Definition, Tutorial & Examples appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In this detailed guide, learn everything you need to know about what regression testing is, along with best practices and examples. Learn how you can apply regression testing in your own organization and much more.

While regression testing is practiced in almost every organization, each team may have its own procedures and approaches. This article is a starter kit for organizations seeking a solid start to their regression testing strategy. It also assists teams in delving deeper into the missing links in their current regression testing technique in order to evolve their test strategy.

What is Regression Testing?

Regression testing is a type of software testing that verifies an application continues to work as intended after any code revisions, updates, or optimizations. As the application continues to evolve by adding new features, the team must perform regression testing to evaluate that the existing features work as expected and that there are no bugs introduced with the new feature(s). 

In this post, we will discuss various techniques for Regression Testing, and which to use depending on your team’s way of working. 

However, before we jump onto the how part, let us understand why having a regression test suite is essential.

Why Do We Need Regression Testing?

A software application gets directly modified due to new enhancements (functional, performance or even improved security), tweaks or changes to existing features, bug fixes, and updates. It is also indirectly affected by the third-party services it consumes to provide features through its interface. 

Changes in the application’s source code, both planned and unintended, demand verification. Additionally, the impact of modifications to external services used by the application should be verified.

Teams must ensure that the modified component of the application functions as expected and that the change had no adverse effect on the other sections of the application. 

A comprehensive regression testing technique aids the team in identifying regression issues, which are subsequently corrected and retested to ensure that the original faults are not present. 

Regression Testing Example

Let us quickly understand this with the help of an example: login functionality.

  • A user can log into an app using either their username and password or their Gmail account via Google integration.
  • A new feature, LinkedIn integration, is added to enable users to log into the app using their LinkedIn account.
  • While it is vital to verify that LinkedIn login functions as expected, it is equally necessary to verify that other login methods continue to function (Form login and Google integration).

Smoke vs Sanity vs Regression Testing

People commonly use the terms smoke, sanity, and regression interchangeably in testing, which is misleading. These terms differ not only in scope but also in when the testing is carried out.

What is Smoke Testing?

Smoke testing is done at the outset of a fresh build. The main goal is to see whether the build is good enough to start testing. Examples include being able to launch the site by simply entering the URL, or being able to run the app after installing a new executable.

What is Sanity Testing?

Sanity testing is surface-level testing on newly deployed environments. For instance, features are broadly tested on staging environments before being passed on to User Acceptance Testing. Another example is verifying that fonts have loaded correctly on the web page, that the expected components are interactive, and that overall things appear to be in order without a detailed investigation.

How is Regression Testing Different from Smoke and Sanity Testing?

Regression testing goes deeper: the potentially impacted areas are thoroughly tested in the environment where the new changes have been introduced.

Existing stable features are rigorously tested on a regular basis to ensure their accuracy in the face of purposeful and unintended changes. 

Regression Testing Approaches

The techniques can be grouped into the following categories:

Partial Regression Testing

As the name suggests, partial regression testing is an approach where a subset of the entire regression suite is selected and executed as part of regression testing.

This subset selection results from a combination of several logical criteria as follows:

  • The cases derived from identifying the potentially affected feature(s) due to the change(s)
  • Business-critical cases
  • Most commonly used paths

Partial regression testing works excellently when the team successfully identifies the impacted areas and the corresponding test cases through proven ways like the Requirement Traceability Matrix (RTM henceforth) or any other form of metadata approved by the team.
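The subset-selection criteria above can be sketched as a simple tag-based filter. The test names and tags below are hypothetical; real suites would derive them from an RTM or the team's test metadata.

```python
# Hypothetical regression inventory: each test carries feature tags,
# with "critical" marking business-critical cases that always run.
TESTS = {
    "test_form_login":    {"login", "critical"},
    "test_google_login":  {"login", "oauth"},
    "test_checkout_flow": {"checkout", "critical"},
    "test_profile_edit":  {"profile"},
}

def select_subset(changed_features, always_run=frozenset({"critical"})):
    """Pick tests touching any changed feature, plus the always-run set."""
    wanted = set(changed_features) | set(always_run)
    return sorted(name for name, tags in TESTS.items() if tags & wanted)
```

A change tagged `login` would therefore pull in both login tests plus the critical checkout flow, while leaving the untouched profile test out of the run.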

The following situations are more conducive to partial regression testing:

  • The project has a solid test automation framework in place, with unit, API, integration, and acceptance tests in proportions that follow the test pyramid.
  • Changes to the source code are always being tracked and communicated within the cross-functional team.
  • Short-term projects tight on budget and time.
  • The same team members have been working on the project for a long period.

While this method is effective, it is possible to overlook issues if:

  • The impacted regions aren’t identified appropriately.
  • The changes aren’t conveyed to the entire team.
  • The team doesn’t religiously follow the process of documenting test scenarios or test cases.

Complete Regression Testing

In many cases, reasons like significant software updates or changes to the tech stack demand that the team perform comprehensive regression testing to uncover new problems or problems introduced by the changes.

In this approach, the whole test suite is run every time new code is committed, or at agreed intervals, to uncover issues.

This is a significantly time-consuming approach compared to the other techniques and should ideally be adopted only when the situation demands.

To keep the feedback cycle faster, one must embrace automated testing to enable productive complete regression testing in their teams.

Which Regression Technique to Use?

Irrespective of the technique adopted, I always suggest that teams prioritize the most business-critical cases and the common use cases performed by end-users when it comes to execution. 

Remember, the main goal of regression testing is to ensure that the end-user is not impacted due to an unavailable/incorrect feature, which could affect business outcomes in many ways.

Best Practices for Regression Testing

To achieve better testing coverage of your application, plan your regression testing with a combination of technology and business scenarios. Apply the practices across the Test Pyramid. 

Leverage the Power of Visual Representation

Arranging the information in the form of a matrix enables the team to quickly identify the potentially impacted areas. 

  • In the RTM shown in the diagram below, any change made to REQ1 UC 1.3 tells us that we have to run test cases 1.1.2, 1.1.4 and 1.1.7.
  • Also, since test case 1.1.2 is also related to UC 1.2, we would immediately test that for any regression issues. 
  • Of course, the RTM should be up-to-date at all times for the technique to work correctly for the team.

    (Image Source)

Alternatively, many test case management tools now have started providing inbuilt support to build a regression test suite with the help of appropriate tags and modules. These tools also let you systematically track and identify patterns in the regression test execution to dig into more related areas.

I have seen teams be most effective when they have automated most of their regression suite, with the non-automatable tests organised and represented in a way that allows quick filtering and surfaces relevant information.

Test Data

We should leverage the power of automation to create test data instantly across different test environments. We need to ascertain that the updated feature is evaluated against both old and new data. 

For example, a new field added to a user profile should work consistently for both existing and newly created accounts.
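A sketch of that old-versus-new data check, with a hypothetical `pronouns` field standing in for whatever attribute was added:

```python
def read_pronouns(profile):
    """Accessor that tolerates legacy rows created before the field existed."""
    return profile.get("pronouns", "unspecified")

# Old data: account created before the schema change, field absent.
legacy_account = {"name": "alice"}
# New data: account created after the change, field populated.
new_account = {"name": "bob", "pronouns": "he/him"}
```

Regression tests should exercise the feature against both shapes of data, not just freshly generated records.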

Production Data

Production test data plays a vital role in identifying issues that might have been missed during the initial delivery.

In cases where possible, replicate the production environment to identify edge cases and add those scenarios to the regression test suite.

Using production data isn't always viable, and it can lead to compliance issues. Teams frequently mask sensitive information in production data and use the result for realistic scenario analysis.

Test Environments

If you have multiple environments, verify that the application works as intended in each of them.

Obtaining a Fresh Outlook

Every time a new person joins a team mid-development, they ask meaningful questions about long-forgotten stable features. I also like to have junior testers on my regression team to get a raw, holistic testing perspective.

Automate

Automate the regression test suite! If you have the budget, great; if not, create supporting mechanisms that use the team's idle time to implement automated tests.

Simply automating the business-critical scenarios or the most used workflows is a good enough start. Initiate this activity and work incrementally.

Either tag/annotate your automated scenarios as per the feature or segregate them into appropriate folders so that you’d be able to run particular automated regression scenarios.

Although automated test execution is faster, sequential execution won't scale as the number of test environments and permutations rises. Concurrent test execution across environments is therefore required to meet scalability requirements. Selenium Grid and cloud solutions like Applitools Ultrafast Test Cloud enable you to execute automated tests in parallel across different configurations.
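The shape of that parallelism can be sketched in a few lines. The configuration strings and the `run_suite` stub below are illustrative only; a real grid would map each entry to an actual browser session.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative browser/viewport matrix, not a real grid API.
CONFIGS = ["chrome/1920x1080", "firefox/1366x768", "safari/414x896"]

def run_suite(config):
    """Stand-in for dispatching the regression suite to one environment
    and collecting its verdict."""
    return config, "passed"

def run_all(configs=CONFIGS):
    """Execute every configuration concurrently instead of one after another."""
    with ThreadPoolExecutor(max_workers=len(configs)) as pool:
        return dict(pool.map(run_suite, configs))
```

Total wall-clock time now approaches the slowest single configuration rather than the sum of all of them, which is exactly the property a scalable regression suite needs.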

In addition to following best practices when creating the test automation framework, run these tests at high speed and in parallel to provide faster feedback.

Choose What Works for You

One cannot ignore business constraints and client demands when planning delivery. Based on your context, adopt the most suitable regression testing techniques.

Plan for Regression Testing in Sprints

I have seen regression backlogs take a long time to automate. To keep making progress, always account for regression testing effort explicitly when estimating Sprint tasks, or you may be growing your technical debt in the form of undiscovered bugs.

Communicate within the Cross-Functional Team

Changes are not always directly related to client needs, nor are they always conveyed. Internally, the development team continually optimises the code for reusability, performance, and other factors. Ensure that these source-code modifications are documented/tracked in a ticket so that the team can perform regression testing accordingly.

Regression Testing at Scale

An enterprise product results from multiple teams’ contributions across geographies. While the teams will independently conduct regression testing for their part, it mustn’t be done only in silos. The teams must also set up cadence structures/processes to test all integrational regression scenarios.

Crowdsourced Testing

Crowdsourced testing can help find brand new flaws in the programme, such as functionality, usability, and localization concerns, thereby improving the product’s quality.

Plan for Non-Functional Regression Testing

Non-functional elements like performance, security, accessibility, and usability must all be examined as part of your regression testing plan, in addition to functionality.

Benchmarking test execution results from past sessions and comparing them to test execution results after the most recent modifications is a simple but effective technique for detecting performance, accessibility, and other degradations.
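That benchmarking technique reduces to comparing current metrics against a stored baseline with a tolerance. The metric names and numbers below are hypothetical:

```python
# Baseline metrics recorded from a previous, accepted test session.
BASELINE = {"p95_latency_ms": 420, "contrast_violations": 0}

def detect_regressions(current, tolerance=0.10):
    """Flag any metric that worsened beyond the tolerance.
    A zero baseline (e.g. zero accessibility violations) allows no slack."""
    issues = []
    for metric, old in BASELINE.items():
        new = current[metric]
        limit = old * (1 + tolerance) if old else 0
        if new > limit:
            issues.append(metric)
    return issues
```

Run after each regression cycle, this turns "the app feels slower" into a concrete, reviewable list of degraded metrics.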

Due to substantial faults in non-functional areas, applications with the best functionality have either never made it to production or have been shelved despite launching successfully.

In a similar vein, application security and accessibility issues have cost businesses millions of dollars in addition to a tarnished reputation.

The Need for an Automated Regression Test Suite

Regardless of your application architecture or development methodology, the importance of automating the regression tests can never fade away. Be it a small-scale application or an enterprise product, having automated tests will save you time, people’s energy and money in the longer run.

Let’s understand some reasons to automate the regression test suite:

Fast Feedback

Automated software verification is dramatically faster than manual checking. Continuous automated testing in the CI/CD pipeline is a powerful approach to identifying regression bugs as close as possible to their introduction, thanks to the increased speed and frequency at which it operates.

Equally important is to look at the test results from each automated suite execution and take meaningful steps to get the product and the test suite progressively better.

Timely identification of issues will avoid defect leakage in the most significant parts of the application and later stages of testing.

Consequently, this shift left benefits the organisation in many ways beyond cost.

Automated Test Data Creation

Before getting to the actual testing, the testing teams spend a significant amount of time generating test data. Automation aids not only in the execution of tests but also in the rapid generation of large amounts of test data. The functional testing team may leverage data generated by scripts (SQL, APIs), allowing them to focus on testing rather than worrying about the data.

Testing features like pagination, infinite scroll, tabular representation, and app performance are a few examples where rapid test data generation gives the team instant test data.

Banking and insurance are regulated sectors with several complex operations and subtleties. A variety of test data is required to exercise their data models and flows. The ability to automate test data management has proven to be a critical component of successful testing.

Address Scalability

Parallel execution of the automated test suite answers the need for rapid feedback. With the right infrastructure, and provided the automated test suite was built to scale, teams can generate test results across a variety of environments, browsers, devices, and operating systems.

The Applitools Ultrafast Test Cloud is the next step forward in cross-browser testing. You run your functional and visual tests once locally using Ultrafast Grid (part of the Ultrafast Test Cloud), and the grid instantaneously generates all screens across whatever combination of browsers, devices, and viewports you choose. 

Use the Human Brain and Technology to Their Full Potential

Repetitive tasks are handled efficiently and consistently through automation. It does not make errors in the same way that people do.

It also allows humans to concentrate their ingenuity on exploratory testing, which machines cannot accomplish. You can deploy new features with a reduced time-to-market thanks to automation.

Maintenance of the Regression Test Suite

Now, let’s complete the cycle by ensuring that the corresponding test cases (manual and automated) are also modified immediately with every modification and change request to any existing part of the application. These modified test cases should now be part of the regression suite.

Failing to adjust the test cases would create chaos in the teams involved: incorrect testing of the underlying application, unintended features, and rollbacks.

Maintaining the regression test suite consists of adding new tests, modifying existing tests, and deleting irrelevant tests. These changes should be reflected in the manual and automated test suites.

Regression Testing Tools

There is no separate category of tools labelled "regression testing tools." Teams use the same testing tools; in particular, test automation tools are used to automate the regression test suite.

Depending on the project type, the following tools may be used in combination with the techniques mentioned in the previous section:

API Heavy Applications

APIs are the foundation of modern software development, especially as more and more teams abandon monolithic programmes in favour of a microservices-based strategy.

  • Contract-driven testing is gaining popularity, and rightly so, because it prevents regression issues from being committed to the repository in the first place, as opposed to identifying them later during the testing phase. Understand more about pacts/contracts here.
  • Specmatic is an excellent open-source solution that uses the contracts available in OpenAPI spec format, and turns them into executable specifications which can be used by the provider and consumer in parallel. It also allows you to check the contract for backward compatibility via CI.
  • Testing APIs is comparatively faster than verifying functionality through the user interface. Hence, for faster, more accurate feedback across groups, adopt automated API and API workflow testing using open-source solutions like REST-Assured, Postman, etc.
Logos for Postman, Specmatic, Pact and Rest-Assured
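To give a minimal flavour of the contract idea (independent of any particular tool such as Pact or Specmatic), a consumer-side check can assert that a provider response still matches the expected field names and types, so a breaking API change fails fast instead of surfacing later in UI testing. The contract below is hypothetical:

```python
# Hypothetical consumer expectation for a /users/{id} response.
EXPECTED_CONTRACT = {"id": int, "email": str, "active": bool}

def satisfies_contract(response_json, contract=EXPECTED_CONTRACT):
    """True if every contracted field is present with the right type.
    Extra fields are tolerated: adding data is not a breaking change."""
    return all(
        field in response_json and isinstance(response_json[field], ftype)
        for field, ftype in contract.items()
    )
```

Wired into CI against a stubbed or real provider, this catches renamed or retyped fields the moment they are introduced.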

UI Heavy Applications

UI accuracy is unquestionably vital for a successful business because it directly impacts end users.

Even when utilizing the most extraordinary development processes and frontend technology, testing the UI is one of the most significant bottlenecks in a release.

Applitools is a pioneer in AI-powered automated visual regression testing. Their solution allows you to integrate Visual Testing with functional and regression UI automation and in turn get increased test coverage, quick feedback, and seamless scaling by using the Applitools Ultrafast Grid – all while writing less code. You can try out their solutions by signing up for a free account and going through the tutorials available here.

Applitools is the leader in Visual Testing

Support & Maintenance Portfolio

Teams responsible for testing legacy applications often need to explore the application before diving into the regression test suite.

Utilizing the results from your exploratory testing sessions to populate and validate your impact analysis documents and RTMs proves beneficial in making necessary modifications to the regression test suite.

Exploratory testing tools are incredibly valuable and can assist you in achieving your goal for the session, whether it’s to explore a component of the app, detect flaws, or determine the relationship between features.

Non-Functional Testing

Each of the following topics is a specialised field in and of itself, and it is impossible to cover them all in one blog post. This list, on the other hand, will undoubtedly get you thinking in that direction.

Performance Testing

  • Performance issues can occur at any tier of the software stack, including the operating system, network, disc, web, application, and database layer.
  • Open source regression testing tools such as Apache JMeter, Gatling, Locust, Taurus, and others help detect performance issues such as concurrency, throughput, peak load, performance bottlenecks, and so on throughout the software stack.
  • Application performance monitoring (APM) tools are also used by development teams to link coding practices to app performance throughout development.

Security Testing

  • Zed Attack Proxy (ZAP), Wfuzz, Wapiti, W3af, SQLMap, SonarQube, Nogotofail, Iron Wasp, Grabber, and Arachni are open source security testing tools that help with assessing security conditions such as Authentication, Authorization, Availability, Confidentiality, Integrity, and Non-repudiation. 
  • To reap the benefits of both methodologies, organisations combine static application security testing (SAST) with dynamic application security testing (DAST).

Accessibility Testing

  1. Use Applitools Contrast Advisor to identify contrast violations as part of your test automation execution. This solution works for native Android apps, native iOS apps, all Web Browsers including mobile-web, PDF documents and images.
  2. Screen readers – VoiceOver, NVDA, JAWS, Talkback, etc.
  3. WAT (Web accessibility toolbar) – WAVE, Colour Contrast Analyser, etc.

Summary

A well-thought-out regression testing plan will aid your team in achieving your QA and software development goals, whether the architecture is monolithic or microservices-based, and whether the application is new or old. You can learn about how Applitools can help with functional and visual regression testing here.

Editor’s Note: This post was originally published in January 2022, and has since been updated for completeness and accuracy.


Quick Answers

What is regression testing and why does it matter?

Regression testing verifies that recent code changes didn’t break existing functionality, protecting core flows and user experience release after release.

Which types of regression tests are most effective?

Mix smoke tests for critical path validation, targeted tests for changed areas, and deeper end-to-end flows for cross-feature risk; tune scope by risk and release cadence (https://app14743.cloudwayssites.com/tutorials/).

How does visual testing improve regression coverage?

Visual testing catches UI defects (layout shifts, styling issues, missing assets) that code-level assertions miss by validating pixels with Visual AI (https://app14743.cloudwayssites.com/platform/validate/visual-ai/).

How do we keep multi-browser regression fast?

Run visual checkpoints in parallel across browsers/devices with the Ultrafast Grid (https://app14743.cloudwayssites.com/ultrafast-grid) and trigger only the suites relevant to the changed code paths.

The post What is Regression Testing? Definition, Tutorial & Examples appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io https://app14743.cloudwayssites.com/blog/applitools-testim-io-codeless-end-to-end-ai-powered-cross-browser-ui-testing/ Fri, 18 Feb 2022 17:27:57 +0000 https://app14743.cloudwayssites.com/?p=34425 The newly enhanced integration makes it easier for all testers to use Applitools and our AI-powered visual testing platform with Testim.io.

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

As a product manager at Applitools, I am excited to announce an enriched and updated integration with Testim.io! This enhanced integration makes it easier for testers of any technical ability to use Applitools and our AI-powered visual testing platform, using Testim.io to easily create your test scripts.

What Is Testim.io Used For?

Testim.io is a cloud platform that allows users to create, execute, and maintain automated tests without using code.

It is a perfect tool for getting started with your first automated tests if you do not have an existing automated testing framework or have not started running tests yet. Testim.io also allows you to integrate your own custom code into its steps so you can implement custom validations if you need to.

How Do Applitools and Testim.io Integrate?

Applitools Eyes adds visual validation that compares expected results (the baseline) with actual results after you create your tests in Testim.io. By using Visual AI to compare snapshots, Applitools Eyes can spot any unexpected changes and highlight them visually. This lets you expand your test coverage to include everything on a given page and visually verify your results quickly.

As part of the integration, you can modify test parameters to customize Eyes while working with the Testim UI.

This AI-based visual validation functionality is provided by Applitools and requires simple integration setup in the Eyes application. Learn more.

So, What’s New With Applitools and Testim.io?

This up-to-date integration provides access to Applitools’ latest and greatest capabilities, including Ultrafast Test Cloud, enabling ultrafast cross-browser and cross-platform functional and visual testing. Testim users also now have access to Root Cause Analysis and many more powerful Applitools features!

The new integration also greatly improves on the user experience of test creators adding Applitools Eyes checkpoints to their Testim.io tests. Visual validations can be added right inside Testim and the maintenance and analysis of test results is much simpler.

What Kind of Visual Validations Can You Do?

You can perform the following visual validations:

  • Element Visualization – The Validate Element visualization step allows you to compare visual differences of a specific element between your baseline and your current test run.
  • Viewport Visualization – The Validate Viewport step allows you to compare the visual differences of your viewport between your baseline and the current test run.
  • Full-page Visualization – Full-page validation allows you to compare the visual differences between your baseline and your current test run of your entire page.

What Are the New Visual Validation Settings?

Whether you select the element, viewport, or full-page visualization option you can always override the visual setting for that test or step.

The following Applitools Eyes settings can be accessed via the Testim.io UI:

  • Add Environment (New) – allows you to select Ultrafast Test Cloud environments. You can select the same test to run on multiple environments: different browser types and viewports for web, Chrome emulation, or iOS simulation for mobile devices. Using Applitools Ultrafast Test Cloud you can now increase your coverage and accelerate your release cycles.
  • Match Level – When writing a visual test, sometimes we will want to change the comparison method between our test and its baseline, especially when dealing with applications that consist of dynamic content. Here you can update the Applitools Eyes match level directly from Testim UI.
  • Enable RCA [Root Cause Analysis] (New) – when this flag is on, it provides insights into the causes of visual mismatches, so that in the Eyes dashboard you can see the DOM and CSS associated with the image.
  • Ignore displacement (New) – when this flag is on it will hide differences caused by element displacements. This feature is useful, for example, where content is added or deleted, causing other elements on the page to be displaced and generating additional differences. 

User Experience Improvements

In addition to exposing new features in the Testim UI, we have provided better visibility to Testim tests in Applitools Eyes:

  • Testim test properties are passed to the Eyes Dashboard to allow better filtering and grouping with all Testim tests properties.
  • Testim multi-step and test suites are now also grouped on the Applitools Eyes dashboard and are displayed as one batch to create a better user experience when moving between the two products.
  • Testim Selenium and extension modes are supported.

Complete and Scalable AI-Powered UI Testing

Testim.io allows users to quickly create and maintain tests through record and playback. Adding Applitools visual testing with Ultrafast Test Cloud capabilities will make sure your release cycles are short and test analysis and maintenance are easier than ever!

Learn More about Testim.io-Applitools Integration

If you want to learn more about how you can integrate your codeless Testim tests with Applitools and benefit from the latest Applitools capabilities, head over to Testim.io documentation.

Contact us if you have any queries about Applitools!

Happy testing!

The post Codeless End-to-End AI-Powered Cross Browser UI Testing with Applitools and Testim.io appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
What is Visual AI? https://app14743.cloudwayssites.com/blog/visual-ai/ https://app14743.cloudwayssites.com/blog/visual-ai/#respond Wed, 29 Dec 2021 14:27:00 +0000 https://app14743.cloudwayssites.com/?p=33518 Learn what Visual AI is, how it’s applied today, and why it’s critical across many industries - in particular software development and testing.

The post What is Visual AI? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In this guide, we’ll explore Visual Artificial Intelligence (AI) and what it means. Read on to learn what Visual AI is, how it’s being applied today, and why it’s critical across a range of industries – and in particular for software development and testing.

From the moment we open our eyes, humans are highly visual creatures. The visual data we process today increasingly comes in digital form. Whether via a desktop, a laptop, or a smartphone, most people and businesses rely on an incredible amount of computing power and on millions of easy-to-use applications.

The modern digital world we live in, with so much visual data to process, would not be possible without Artificial Intelligence to help us. Visual AI is the ability for computer vision to see images in the same way a human would. As digital media becomes more and more visual, the power of AI to help us understand and process images at a massive scale has become increasingly critical.

What is AI? Background on Artificial Intelligence and Machine Learning

Artificial Intelligence refers to a computer or machine that can understand its environment and make choices to maximize its chance of achieving a goal. As a concept, AI has been with us for a long time, with our modern understanding informed by stories such as Mary Shelley’s Frankenstein and the science fiction writers of the early 20th century. Many of the modern mathematical underpinnings of AI were advanced by English mathematician Alan Turing over 70 years ago.

Image of Frankenstein

Since Turing’s day, our understanding of AI has improved. However, even more crucially, the computational power available to the world has skyrocketed. AI is able to easily handle tasks today that were once only theoretical, including natural language processing (NLP), optical character recognition (OCR), and computer vision.

What is Visual Artificial Intelligence (Visual AI)?

Visual AI is the application of Artificial Intelligence to what humans see, meaning that it enables a computer to understand what is visible and make choices based on this visual understanding.

In other words, Visual AI lets computers see the world just as a human does, and make decisions and recommendations accordingly. It essentially gives software a pair of eyes and the ability to perceive the world with them.

As an example, seeing “just as a human does” means going beyond simply comparing the digital pixels in two images. This “pixel comparison” kind of analysis frequently uncovers slight “differences” that are in fact invisible – and often of no interest – to a genuine human observer. Visual AI is smart enough to understand how and when what it perceives is relevant for humans, and to make decisions accordingly.
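The pixel-comparison approach that Visual AI improves on can be illustrated in a few lines. Here small integer grids stand in for screenshots; a pixel-exact diff flags every changed cell, including differences no human would ever notice, which is exactly why a perceptual comparison matters.

```python
def pixel_diff(baseline, current):
    """Return coordinates where two equal-sized 'screenshots' differ.
    A naive comparator like this treats every changed value as a defect."""
    return [
        (r, c)
        for r, row in enumerate(baseline)
        for c, px in enumerate(row)
        if current[r][c] != px
    ]
```

A one-value shift in anti-aliasing or rendering would show up here as a "difference", whereas a perceptual comparison would correctly ignore it.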

Representation of Visual AI

How is Visual AI Used Today?

Visual AI is already in widespread use today, and has the potential to dramatically impact a number of markets and industries. If you’ve ever logged into your phone with Apple’s Face ID, let Google Photos automatically label your pictures, or bought a candy bar at a cashierless store like Amazon Go, you’ve engaged with Visual AI. 

Technologies like self-driving cars, medical image analysis, advanced image editing capabilities (from Photoshop tools to TikTok filters) and visual testing of software to prevent bugs are all enabled by advances in Visual AI.

How Does Visual AI Help?

One of the most powerful use cases for AI today is to complete tasks that would be repetitive or mundane for humans to do. Humans are prone to miss small details when working on repetitive tasks, whereas AI can repeatedly spot even minute changes or issues without loss of accuracy. Any issues found can then either be handled by the AI, or flagged and sent to a human for evaluation if necessary. This has the dual benefit of improving the efficiency of simple tasks and freeing up humans for more complex or creative goals.

Visual AI, then, can help humans with visual inspection of images. While there are many potential applications of Visual AI, the ability to automatically spot changes or issues without human intervention is significant. 

Cameras at Amazon Go can watch a vegetable shelf and understand both the type and the quantity of items taken by a customer. When monitoring a production line for defects, Visual AI can not only spot potential defects but understand whether they are dangerous or trivial. Similarly, Visual AI can observe the user interface of software applications to not only notice when changes are made in a frequently updated application, but also to understand when they will negatively impact the customer experience.

How Does Visual AI Help in Software Development and Testing Today?

Traditional testing methods for software testing often require a lot of manual testing. Even at organizations with sophisticated automated testing practices, validating the complete digital experience – requiring functional testing, visual testing and cross browser testing – has long been difficult to achieve with automation. 

Without an effective way to validate the whole page, Automation Engineers are stuck writing cumbersome locators and complicated assertions for every element under test. Even after that’s done, Quality Engineers and other software testers must spend a lot of time squinting at their screens, trying to ensure that no bugs were introduced in the latest release. This has to be done for every platform, every browser, and sometimes every single device their customers use. 

At the same time, software development is growing more complex. Applications have more pages to evaluate and increasingly faster – even continuous – releases that need testing. This can result in tens or even hundreds of thousands of potential screens to test (see below). Traditional testing, which scales linearly with the resources allocated to it, simply cannot scale to meet this demand. Organizations relying on traditional methods are forced to either slow down releases or reduce their test coverage.

A table showing the number of screens in production by modern organizations - 81,480 is the market average, and the top 30% of the market is  681,296
Source: The 2019 State of Automated Visual Testing

At Applitools, we believe AI can transform the way software is developed and tested today. That’s why we invented Visual AI for software testing. We’ve trained our AI on over a billion images and use numerous machine learning and AI algorithms to deliver 99.9999% accuracy. Using our Visual AI, you can achieve automated testing that scales with you, no matter how many pages or browsers you need to test. 
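To see why accuracy matters so much here, consider a toy sketch of the difference between strict pixel comparison and a tolerance-aware one. This is purely illustrative – the arrays, threshold, and function names below are invented for the example, and Visual AI uses trained models rather than a fixed numeric threshold – but it shows how exact pixel diffing flags benign rendering noise as "bugs" while a perceptual comparison surfaces only real changes.

```python
# Illustrative sketch (not Applitools' algorithm): strict pixel equality
# versus a tolerance-aware comparison. "Images" here are 1-D grayscale
# arrays for simplicity.

def pixel_diff(baseline, checkpoint):
    """Strict pixel equality: any difference at all is flagged."""
    return [i for i, (a, b) in enumerate(zip(baseline, checkpoint)) if a != b]

def tolerant_diff(baseline, checkpoint, threshold=10):
    """Ignore sub-threshold shifts (e.g. anti-aliasing, compression noise)."""
    return [i for i, (a, b) in enumerate(zip(baseline, checkpoint))
            if abs(a - b) > threshold]

baseline   = [120, 121, 119, 200, 50]
checkpoint = [121, 120, 119, 200, 180]  # tiny rendering noise + one real change

print(pixel_diff(baseline, checkpoint))     # flags pixels 0 and 1 as false positives
print(tolerant_diff(baseline, checkpoint))  # flags only the real change at index 4
```

A real visual engine works on rendered screenshots and learned perceptual models, but the failure mode of naive pixel tools – noise indistinguishable from defects – is exactly what this toy diff exhibits.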

That means Automation Engineers can quickly take snapshots that Visual AI can analyze rather than writing endless assertions. It means manual testers will only need to evaluate the issues Visual AI presents to them rather than hunt down every edge and corner case. Most importantly, it means organizations can release better quality software far faster than they could without it.

Visual AI is 5.8x faster, 5.9x more efficient, 3.8x more stable, and catches 45% more bugs
Source: The Impact of Visual AI on Test Automation Report
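The workflow shift described above can be sketched in a few lines. Everything here is a hypothetical stand-in, not the Applitools API: `page` models rendered UI state as a dictionary, and a real visual engine compares screenshots with AI rather than checking dictionary equality.

```python
# Hedged sketch of per-element assertions vs. a single snapshot check.
# All names are illustrative; no real SDK is used.

def per_element_assertions(page):
    """Traditional approach: one locator and one assertion per element."""
    assert page["header"] == "Welcome"
    assert page["login_button"]["visible"] is True
    assert page["footer"] == "(c) 2025"
    # ...repeated for every element, on every browser and device.

def snapshot_check(page, baseline):
    """Snapshot approach: capture the whole page once and compare it
    against a stored baseline in a single step."""
    return page == baseline

baseline = {"header": "Welcome",
            "login_button": {"visible": True},
            "footer": "(c) 2025"}

per_element_assertions(baseline)           # passes, element by element
print(snapshot_check(baseline, baseline))  # True: one check covers the page
```

The per-element function grows with every UI change; the snapshot check stays one line, which is the maintenance saving the paragraph above describes.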

How Visual AI Enables Cross Browser/Cross Device Testing

Additionally, thanks to its high accuracy and efficient validation of the entire screen, Visual AI simplifies and accelerates cross browser and cross device testing. By ‘rendering’ rather than ‘re-executing’ tests across every device/browser combination, teams using the Applitools Ultrafast Test Cloud can get test results 18.2x faster than with traditional execution grids or device farms.

A traditional test cycle takes 29.2 hours; a modern test cycle takes just 1.6 hours.
Source: Modern Cross Browser Testing Through Visual AI Report
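A back-of-the-envelope model (my own simplification, not Applitools' published methodology, and with invented numbers) shows where the gap comes from: with a traditional grid, the full test run scales with the number of configurations, while with rendering only a fast per-snapshot comparison does.

```python
# Toy cost model: execute-everywhere vs. render-everywhere.
# All timings are illustrative assumptions, not measured figures.

def execute_everywhere(n_configs, run_minutes):
    # Traditional grid/device farm: the full suite re-runs per combination.
    return n_configs * run_minutes

def render_everywhere(n_configs, run_minutes, render_minutes):
    # Rendering approach: run the suite once to capture snapshots, then
    # render and compare each snapshot per combination (cheap, parallelizable).
    return run_minutes + n_configs * render_minutes

print(execute_everywhere(20, 60))     # 1200 minutes across 20 combinations
print(render_everywhere(20, 60, 2))   # 100 minutes for the same coverage
```

Because the per-configuration term shrinks from a full suite run to a quick render, the advantage grows with every browser or device added.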

How Will Visual AI Advance in the Future?

As computing power increases and algorithms are refined, the impact of Artificial Intelligence, and Visual AI in particular, will only continue to grow.

In the world of software testing, we’re excited to use Visual AI to move past simply improving automated testing – we are paving the way towards autonomous testing. In pursuit of this vision (no pun intended), we have been repeatedly recognized as a leader by both the industry and our customers.

Keep Reading: More about Visual AI and Visual Testing

What is Visual Testing (blog)

The Path to Autonomous Testing (video)

What is Applitools Visual AI (learn)

Why Visual AI Beats Pixel and DOM Diffs for Web App Testing (article)

How AI Can Help Address Modern Software Testing (blog)

The Impact of Visual AI on Test Automation (report)

How Visual AI Accelerates Release Velocity (blog)

Modern Functional Test Automation Through Visual AI (free course)

Computer Vision defined (Wikipedia)

The post What is Visual AI? appeared first on AI-Powered End-to-End Testing | Applitools.
