Test Engineers Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/tag/test-engineers/
Thu, 19 Mar 2026 20:19:14 +0000

Applitools delivers full end-to-end test automation with AI infused at every step.

Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies
https://app14743.cloudwayssites.com/blog/playwright-visual-testing-strategy/
Thu, 19 Mar 2026 20:19:13 +0000

Visual testing in Playwright often forces teams to choose between strict failures, snapshot maintenance, and CI pipeline complexity. This article explores how a single configuration flag introduces three different strategies for handling visual differences and improving the Playwright developer experience.

The post Engineering a Playwright-Native Developer Experience: One Flag, Three Strategies appeared first on AI-Powered End-to-End Testing | Applitools.


Hello everyone! I’m Noam, an SDK developer on the Applitools JS-SDKs team. While my day-to-day focus is on core engineering, I work closely with our field teams and occasionally join technical deep-dive sessions with customers.

In these conversations, we frequently encounter questions about performance and the engineering philosophy behind our integration. Specifically, there is often curiosity about how to make visual testing feel more “Playwright-native” and natural to developers.

In this post, I’ll share the design logic behind these architectural choices so you can apply these patterns in your own CI pipelines in a way that fits your organization’s needs.

Adding unresolved to Playwright

Integrating visual regression testing into Playwright requires combining two different status models: Playwright’s binary Pass/Fail and the visual testing concept of unresolved.

In visual testing, instead of having two (passed and failed) states, there’s an additional third state: unresolved. This state indicates a difference was detected, but a human decision is required to determine if it is a bug or a valid change that should be approved as a new baseline.

​Playwright doesn’t support this third state out of the box. Visual test maintenance using Playwright’s native toHaveScreenshot API forces the developer into a cumbersome cycle requiring three separate test executions:

  1. First, the developer runs the test to see the failure.
  2. Then, they need to run with the --update-snapshots flag to create new baseline images.
  3. Then, most developers would run again to validate that everything works with the updated baseline as expected—which isn’t always the case, because the Playwright native comparison method (pixelmatch) tends to be very flaky, unlike Visual AI.
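For reference, the native cycle looks like this; the page URL and snapshot name below are placeholders:

```typescript
import { test, expect } from "@playwright/test";

test("dashboard visual check", async ({ page }) => {
  await page.goto("https://example.com/dashboard");
  // Pixel-level comparison (pixelmatch) against a committed baseline image.
  await expect(page).toHaveScreenshot("dashboard.png");
});

// The three-run cycle described above:
//   npx playwright test                     # 1. run, see the failure
//   npx playwright test --update-snapshots  # 2. regenerate the baseline images
//   npx playwright test                     # 3. run again to validate the new baseline
```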

​After this local cycle, the developer must commit the new baseline images to the repository—bloating the git history—and wait for a new CI execution to provide final feedback. For dev-centered organizations that focus on feedback loop velocity, this workflow is… suboptimal. Personally, I believe that’s one of the reasons visual testing isn’t as popular as it should be among Playwright users.

​When we engineered the Applitools fixture, one of our goals was to support this Unresolved state natively, without disrupting Playwright’s core lifecycle—specifically its Worker Processes and Retry mechanisms.

The solution rests on two key engineering decisions: moving rendering to the background (async architecture) and giving developers control over the exit signal and performance tradeoffs (failTestsOnDiff).

We don’t block test execution when Applitools is rendering

The core value of visual testing lies in two capabilities: AI-based comparison that eliminates false positives, and multi-platform rendering.

Architecturally, these processes are cloud-native services.

  • AI-as-a-Service: Just like massive LLMs or other generative models, the Visual AI engine runs on specialized cloud infrastructure optimized for heavy inference. It cannot simply be “installed” on a lightweight CI agent.
  • Platform Constraints: Authentic cross-platform rendering (e.g., iOS Safari on a Linux CI agent) is physically impossible on a single local machine.

Since these operations inherently occur remotely, performing them synchronously would force the local test runner to idle while waiting for network round-trips and cloud processing.

To solve this, we designed the fixture around an asynchronous architecture:

  • Instant Capture: When eyes.check() is called, we synchronously capture the DOM and CSS resources (instead of a rasterized image). This operation is extremely fast.
  • Immediate Release: We use soft assertions by design, releasing the Playwright test thread immediately so the functional logic can proceed to the next step or test case without blocking.
  • Background Heavy Lifting: The heavy work—uploading assets, rendering across different browsers and operating systems, and performing the AI comparison in the Applitools cloud—starts immediately in the background, managed by the Worker process.
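As a concrete sketch of this flow, a spec file might look like the following. The import path and the `eyes.check` signature here are assumptions based on the description above, not a verbatim API reference; consult the eyes-playwright documentation for the exact shape.

```typescript
// Assumed import path for the Applitools fixture.
import { test } from "@applitools/eyes-playwright/fixture";

test("checkout flow", async ({ page, eyes }) => {
  await page.goto("https://example.com/checkout");

  // Fast, synchronous capture of DOM + CSS resources (no rasterized image).
  await eyes.check("checkout page");

  // Soft assertion: the test thread is released immediately, while uploading,
  // rendering, and the Visual AI comparison continue in the background.
  await page.getByRole("button", { name: "Pay" }).click();
});
```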

The “Draining Queue” Effect

​This architecture explains why the Playwright Worker sometimes remains active after the final test completes.

The background tasks are limited only by your account's concurrency settings and the screenshot size. For example, when rendering a 10,000 px page on a small mobile device, the rendering infrastructure might need time for scrolling and stitching. If your functional tests execute faster than the background workers can process the queue (rendering and comparing), the Worker process stays alive at the end solely to "drain the queue" and ensure data integrity.

While this ensures your test logic runs at maximum speed by offloading the processing cost to the background, it can cause friction and frustration when developers see workers "hanging" after tests complete. When facing such issues, our support team is here to advise and assist: we can investigate execution logs and, if needed, make custom suggestions to tailor Eyes-Playwright to your needs.

Solving the Matrix Problem

​Standard Playwright documentation recommends defining multiple projects in playwright.config.ts to cover different browsers (Chromium, Firefox, WebKit) and various viewport sizes.

​While this ensures coverage, it introduces a linear performance penalty (O(N)). To test three browsers across two viewports, your CI must execute the functional logic (clicks, waits, navigation) six times. It’s 6x more load on the CI machine and the testing environment.
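For illustration, the native matrix is declared as projects in playwright.config.ts; with three browsers and two viewports, the same functional logic executes six times:

```typescript
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  // 3 browsers x 2 viewports: the same clicks, waits, and navigation run 6 times.
  projects: [
    { name: "chromium-desktop", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox-desktop", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit-desktop", use: { ...devices["Desktop Safari"] } },
    { name: "chromium-mobile", use: { ...devices["Pixel 5"] } },
    { name: "firefox-mobile", use: { ...devices["Desktop Firefox"], viewport: { width: 393, height: 851 } } },
    { name: "webkit-mobile", use: { ...devices["iPhone 13"] } },
  ],
});
```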

​We recommend shifting this workload to the Ultrafast Grid (UFG).

​In this mode, you execute the Playwright test once, typically on Chromium. We upload the DOM state, and our cloud infrastructure renders that state across all configured browsers and viewports in parallel.

This transforms an O(N) execution problem into an O(1) execution problem, significantly shortening the feedback loop.
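A UFG environment list typically looks something like the sketch below. The key names follow common Applitools configuration shapes but are assumptions here; check the eyes-playwright configuration docs for the exact keys and where they live.

```typescript
// Illustrative only: key names (browsersInfo, iosDeviceInfo, chromeEmulationInfo)
// mirror common Applitools configuration shapes but are not a verbatim API.
const ultrafastGridEnvironments = {
  browsersInfo: [
    { name: "chrome", width: 1280, height: 800 },
    { name: "firefox", width: 1280, height: 800 },
    { name: "safari", width: 1280, height: 800 },
    { iosDeviceInfo: { deviceName: "iPhone 14" } },
    { chromeEmulationInfo: { deviceName: "Pixel 5" } },
  ],
};
```

The test itself runs once on Chromium; each listed environment is rendered from the uploaded DOM state in parallel in the cloud.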

The Strategy: failTestsOnDiff

​Since the actual comparison happens asynchronously and potentially completes after the test logic finishes, we need a mechanism to map the visual result back to the Playwright status.

​This is controlled by the failTestsOnDiff flag. It’s not just a boolean; it’s a strategic choice for your CI pipeline.

Strategy A: false

  • The Logic: This is the configuration our own Front-End team uses. We believe that a visual change is not the same as a test failure.
  • Behavior: The Playwright test passes (Green). The unresolved status is reported externally via our SCM integration (GitHub/GitLab).
  • Why: Retrying a visual test is computationally wasteful—the pixels won’t change on the second run. By keeping the test “Green,” we avoid triggering Playwright’s retry mechanism. The decision is moved to the Pull Request, where it belongs.

Read more about SCM integration or hop directly to our GitHub, Bitbucket, GitLab, or Azure DevOps articles.

Strategy B: afterAll

  • The Logic: You need a “Red” pipeline to block deployment, but you want to avoid the noise of retries and gain a significant performance improvement.
  • Behavior: Individual tests pass, but the Worker Process exits with a failure code if any diffs were found in the suite.
  • Why: This provides a hard gatekeeper for the build status. It allows the Eyes rendering farms to continue processing visual test results in the background without blocking the execution thread, allowing the worker to move on to handle more tests efficiently.
Strategy C: afterEach

  • The Logic: Immediate feedback loop.
  • Behavior: Fails the test immediately in the afterEach hook.
  • Why: Best for local development where you want to see the failure immediately in the console. It is also useful if you use the trace: retainOnFailure setting in Playwright, as it ensures traces are preserved for unresolved visual assertions. Not recommended for CI due to the retry loops described above.
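The three strategies reduce to a single configuration value. Where exactly the flag lives depends on your setup, so treat the shape below as a sketch rather than a verbatim config reference:

```typescript
// Sketch only: the location of the flag in your configuration may differ;
// the values mirror the strategies described above.
type FailTestsOnDiff = false | "afterAll" | "afterEach";

// Strategy A: stay Green; unresolved diffs are reviewed in the pull request
// via the SCM integration.
const strategyA: FailTestsOnDiff = false;

// Strategy B: individual tests pass, but the worker process exits with a
// failure code if any diffs were found in the suite.
const strategyB: FailTestsOnDiff = "afterAll";

// Strategy C: fail immediately in the afterEach hook; best for local runs.
const strategyC: FailTestsOnDiff = "afterEach";
```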

TL;DR – When to use each setting

Mode: afterEach
  • Performance: Less performant. The Playwright worker waits after each test for all renders to complete and for the Visual AI to compare the results.
  • Observability: Best. The Applitools reporter will show all statuses correctly; other reporters will consider unresolved tests as failing.
  • Best fit: Local testing.

Mode: afterAll
  • Performance: Best. The Playwright workers collect the resources and manage the rendering and Visual AI comparisons in the background.
  • Observability: Good. The Applitools reporter will show all statuses correctly; other reporters will consider unresolved tests as passing. You will get a failure of the worker process, and other reporters won’t link it to a specific test case.
  • Best fit: Local testing, and CI environments without SCM integration.

Mode: false
  • Performance: Best. Similar to afterAll.
  • Observability: Great in pull requests (if SCM integration is enabled). The Applitools reporter will reflect the tests perfectly; other reporters will consider unresolved tests as passing.
  • Best fit: CI environments with SCM integration.

Closing the Visibility Gap: The Custom Reporter

​If you adopt Strategy A (false) or Strategy B (afterAll), you introduce a secondary challenge: Visibility.

Since Playwright technically marks these tests as Passed to avoid retries, the standard Playwright HTML Report will show them as “Green,” potentially masking unresolved visual differences that require attention.

​To bridge this gap without forcing developers to switch context, we developed a Custom Applitools Reporter.

​This reporter extends the standard Playwright HTML report. It injects the actual visual status (Passed, Failed, or unresolved) directly into the test results view.

  • True Status: You see which tests have visual diffs, even if the Playwright exit code was successful.
  • Direct Links: It provides a direct link from the test report to the specific batch results in the Applitools Dashboard.
  • Context: It enriches the report with UFG render status and batch information.

​This ensures you get the best of both worlds: The optimization of a “Green” CI run (no retries), with the transparency of a report that highlights exactly where manual review is needed.

Summary

​The Applitools Playwright fixture is designed to be non-blocking and scalable. By leveraging asynchronous architecture and Applitools UltraFast Grid, we offload the heavy lifting from your CI. By correctly configuring failTestsOnDiff, you ensure that your pipeline reflects your team’s engineering culture—whether that’s strict gating or modern, PR-based visual review.

Quick Answers

What is visual regression testing in Playwright?

Visual regression testing in Playwright verifies that changes to an application’s UI do not introduce unintended visual differences. Playwright can perform basic visual regression checks using screenshot comparisons like toHaveScreenshot, while dedicated visual testing tools (such as Applitools Eyes) extend this by detecting meaningful UI changes, managing baselines, and enabling review workflows for approving visual updates.

What is the best way to do visual testing in Playwright?

Playwright supports basic visual testing through screenshot comparisons such as toHaveScreenshot, but this approach can become difficult to maintain at scale. Dedicated visual testing tools, like Applitools Eyes, extend Playwright by adding Visual AI comparison, cross-browser rendering, and review workflows that allow teams to detect visual regressions without maintaining large sets of screenshot baselines.

How does Playwright screenshot testing (toHaveScreenshot) compare to visual regression testing tools?

Playwright’s toHaveScreenshot performs pixel-by-pixel image comparisons against stored baseline images. While this works for simple cases, it often requires updating and maintaining many snapshots. Visual regression testing tools like Applitools Eyes use Visual AI to detect meaningful UI changes while ignoring insignificant rendering differences, provide review workflows to approve or reject visual changes, and allow custom match levels for different regions of the screen.

Can Playwright run visual tests across multiple browsers and devices?

Yes, but with a limited scope. Natively, Playwright supports three browser engines (Chromium, Firefox, and WebKit), but it does not execute tests across different real operating systems or mobile devices. This lack of OS-level rendering limits coverage and imposes a risk of missing platform-specific visual bugs. For example, see how a frontend team caught a visual bug specific to Mac Retina screens that a standard engine check would miss.

How can you run cross-browser visual tests in Playwright without running tests multiple times?

Normally, cross-browser testing requires executing the same tests separately for each browser configuration. Tools like Applitools Ultrafast Grid allow tests to run once while visual rendering is executed across multiple browsers and viewport combinations in parallel. This removes the need to multiply test execution across the full browser matrix.

Why is cross-browser testing in Playwright so slow?

Natively, cross-browser testing introduces a significant performance penalty. Playwright must execute the entire test logic (clicks, waits, network requests) separately for every browser and viewport configuration. Modern visual testing tools (e.g., Applitools Ultrafast Grid) eliminate this overhead by executing the test logic just once locally, performing the cross-browser rendering and visual comparison in parallel in the cloud.

What Test Execution Demands That Generative AI Can’t Guarantee
https://app14743.cloudwayssites.com/blog/test-execution-generative-ai/
Thu, 26 Feb 2026 19:39:00 +0000

Generative AI excels at creating tests—but execution demands repeatability and trust. Learn why deterministic approaches matter for reliable test automation.

The post What Test Execution Demands That Generative AI Can’t Guarantee appeared first on AI-Powered End-to-End Testing | Applitools.


TL;DR

• Generative AI is highly effective for creating tests, data, and analysis, but execution has different requirements.
• Test execution demands repeatability, determinism, and explainable failures.
• Probabilistic systems, including LLMs, introduce variability that leads to flaky tests and loss of trust.
• Teams that separate where generative AI helps from where deterministic execution is required scale testing more reliably.

Generative AI has dramatically changed how teams create tests. Requirements can be translated into test cases in seconds. Automation scripts can be bootstrapped with natural language. Test data can be generated on demand.

But many teams are discovering an uncomfortable truth: faster test creation does not automatically lead to more reliable releases.

Execution is where confidence is earned or lost. And test execution demands guarantees that generative AI—including large language models (LLMs)—was never designed to provide.

Where generative AI fits well in testing

Generative AI excels in parts of the testing lifecycle that tolerate variation. These are areas where approximation is acceptable and speed matters more than precision.

Teams are successfully using AI to:

  • Generate test cases from requirements
  • Assist with unit and integration test authoring
  • Create realistic and varied test data
  • Summarize test results and surface patterns

In most of these cases, teams are relying on LLMs to generate intent, not to make final execution or release decisions.

These use cases benefit from flexibility. Minor differences in output rarely introduce risk, and human review is often part of the workflow.

The challenge emerges when that same probabilistic behavior is extended into execution.

Why test execution is fundamentally different

Test execution is not a creative task. It is a verification task.

Execution requires:

  • The same test to behave the same way, run after run
  • Assertions that are precise and stable
  • Failures that can be reproduced and diagnosed
  • Outcomes that can be explained clearly to stakeholders

Generative AI systems—particularly LLMs—are probabilistic by design. That variability is useful for exploration and generation, but it works against the repeatability and determinism execution depends on.

As AI accelerates development, repeatability becomes more important than intelligence in test execution.

How probabilistic execution creates real problems

When probabilistic systems are used to drive execution, teams often encounter the same failure modes:

  • Tests that pass one run and fail the next without code changes
  • Assertions that subtly change or disappear
  • Longer debugging cycles because failures can’t be reproduced
  • Rising compute costs from repeated executions
  • Engineers losing confidence in automation results

When failures aren’t repeatable, teams stop trusting their tests—and that’s when automation becomes a bottleneck instead of a benefit.

– Shaping Your 2026 Testing Strategy

Once trust erodes, teams compensate. Manual validation creeps back in. Releases slow down. Automation becomes something teams work around rather than rely on.

Execution amplifies risk: security, governance, and explainability

Execution is also where risk concentrates.

When AI systems drive test execution, they may:

  • Send application context externally
  • Make decisions that can’t be fully explained
  • Produce outcomes that are difficult to audit

These concerns are most visible in regulated and high-risk environments, but they apply broadly. Any team responsible for production releases needs to be able to explain why a test failed—or why a release was approved.

Reliable execution is not just a technical concern. It’s a governance concern.

Why deterministic execution matters at scale

Deterministic systems behave predictably. Given the same inputs, they produce the same outcomes.

In test execution, this enables:

  • Reliable failure reproduction
  • Faster root cause analysis
  • Lower maintenance overhead
  • Clear audit trails
  • Reduced noise in pipelines

What test execution demands is not intelligence, but guarantees: the same inputs producing the same outcomes, every time.

Reliable test execution depends on determinism, not creativity.

Rethinking AI’s role in execution

The goal is not to abandon generative AI. It’s to use it where it fits.

Effective teams are separating responsibilities:

  • Generative AI for creation, exploration, and analysis
  • Deterministic systems for execution and verification
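One way to implement this split is sketched below. Everything here is illustrative: normalizePrice is a hypothetical function under test, and the cases stand in for AI-generated inputs that were reviewed and committed at authoring time, so the runtime check stays fully deterministic.

```typescript
// Hypothetical function under test.
function normalizePrice(raw: string): number {
  return Math.round(parseFloat(raw.replace(/[^0-9.]/g, "")) * 100) / 100;
}

// AI-generated candidate inputs, frozen at authoring time after human review,
// not generated at run time.
const aiGeneratedCases: Array<[string, number]> = [
  ["$19.99", 19.99],
  ["19.999", 20],
  ["USD 0.50", 0.5],
];

// Deterministic execution: the same inputs produce the same verdict, run after run.
for (const [input, expected] of aiGeneratedCases) {
  const actual = normalizePrice(input);
  if (actual !== expected) {
    throw new Error(`normalizePrice("${input}") returned ${actual}, expected ${expected}`);
  }
}
```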

This separation allows teams to move quickly without sacrificing confidence.

What this means for engineering and QE teams

As AI becomes more deeply embedded in testing workflows, the key decision is no longer whether to use AI—but where.

Teams that succeed will:

  • Accept variability where it’s safe
  • Demand determinism where decisions are made
  • Measure success by signal quality, not test count
  • Optimize for trust before speed

The biggest risk in AI-driven testing isn’t lack of automation—it’s lack of trust.

Choosing confidence over convenience

Generative AI has changed how tests are created. It should not change the standards by which tests are trusted.

Execution is where reliability matters most. Teams that recognize this distinction will scale testing with confidence, even as AI continues to reshape software development.

Watch Shaping Your 2026 Testing Strategy now.


Quick Answers

Why can’t generative AI reliably execute tests?

Generative AI systems, including LLMs, are probabilistic by design. This variability leads to inconsistent execution flows, unstable assertions, and failures that are difficult to reproduce.

Is generative AI bad for test automation?

No. Generative AI is highly effective for test creation, data generation, and analysis. Problems arise when it is used to drive execution and release decisions.

What does deterministic test execution mean?

Deterministic test execution produces consistent results given the same inputs, enabling repeatable failures, faster debugging, and greater trust in automation.

Why does execution matter more than test creation?

Test creation accelerates coverage, but execution determines confidence. Reliable releases depend on predictable, explainable test outcomes.

How should teams combine generative AI and LLMs with deterministic systems?

Use generative AI and LLMs where flexibility is helpful, and deterministic systems where verification and decision-making require guarantees.

Test Maintenance at Scale: How Visual AI Cuts Review Time and Flakiness
https://app14743.cloudwayssites.com/blog/test-maintenance-at-scale-visual-ai/
Tue, 21 Oct 2025 20:22:00 +0000

Reduce flakiness, speed up reviews, and see how teams like Peloton cut maintenance time by 78% using Visual AI.

The post Test Maintenance at Scale: How Visual AI Cuts Review Time and Flakiness appeared first on AI-Powered End-to-End Testing | Applitools.


Why Test Maintenance Breaks at Scale

Test maintenance at scale slows releases. Teams that rely on coded assertions spend more time updating tests than improving coverage. Brittle locators, environment drift, and false positives all add up—turning automation into a maintenance cycle.

Neglecting maintenance is like skipping car care: small issues snowball into costly downtime. A smarter approach replaces manual review and locator-based scripts with automated, visual validation that adapts as your UI evolves.

How Visual AI Delivers Test Maintenance at Scale

Visual AI replaces dozens of coded assertions with a single checkpoint that mimics how humans see. It validates full UI states, detecting layout shifts, missing elements, and text overlaps automatically.

By consolidating validations into one Visual AI check, teams cut review time, reduce false positives, and gain faster feedback cycles.

Scale Reviews with Ultrafast Grid and Grouping

Running tests one browser at a time no longer scales. The Applitools Ultrafast Grid executes a single test once, then validates results across every browser and device combination in parallel.

Batching and grouping features make reviews equally efficient—approve or reject similar changes across entire runs in just a few clicks.

How it works

  • Replace assertions with one visual checkpoint
  • Run once across all browsers and devices
  • Batch results for unified review
  • Approve or reject in bulk
  • Tune match levels for dynamic content

Together, these capabilities eliminate redundant effort and make large-scale testing faster to maintain.
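As an illustrative fragment, the consolidation inside a test body looks roughly like this. The locators are placeholders, and the import path and eyes.check signature are assumptions based on this post's description rather than a verbatim API reference.

```typescript
// Assumed import path; see the Applitools eyes-playwright docs for the real one.
import { test } from "@applitools/eyes-playwright/fixture";

test("orders page", async ({ page, eyes }) => {
  await page.goto("https://example.com/orders");

  // Before: a single UI state guarded by many brittle, locator-based assertions:
  //   await expect(page.getByRole("heading", { name: "Orders" })).toBeVisible();
  //   await expect(page.getByTestId("order-row")).toHaveCount(10);
  //   ...and so on, each a maintenance liability as the UI evolves.

  // After: one visual checkpoint validating the full UI state, automatically
  // catching layout shifts, missing elements, and text overlaps.
  await eyes.check("orders page");
});
```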

Customer Results: 78% Less Maintenance

Teams that adopt this approach see measurable ROI. At Peloton, replacing a legacy visual testing tool with Applitools Visual AI produced a 78% reduction in maintenance time and saved about 130 hours per month.

With dynamic leaderboards, live data, and responsive layouts across web and native mobile, Peloton maintains quality at scale without expanding test overhead.

Three Features That Change Maintenance

“Ultrafast Grid, Visual AI match levels, and bulk grouping—those three change the game.”

Mike Millgate, Smarter Test Maintenance at Scale

These three deliver flexible validation, fast execution, and effortless maintenance. Each removes manual steps and accelerates the feedback loop that keeps releases reliable.

Smarter Maintenance for Modern Teams

Smarter test maintenance isn’t about writing more code—it’s about automating intelligently. Visual AI reduces flakiness, speeds reviews, and scales across devices and environments.

To see what’s next, explore Applitools Eyes 10.22, featuring faster review cycles, new Storybook and Figma integrations, and even shorter feedback loops for test maintenance at scale.

Frequently Asked Questions

What is Visual AI testing?

Visual AI uses automated visual assertions to validate full UI states, catching layout and content changes that code-heavy checks miss.

How does Visual AI reduce test maintenance at scale?

One visual checkpoint replaces dozens of brittle assertions, while batching and grouping speed reviews across browsers and devices.

What’s the difference between Visual AI and visual regression testing?

Visual AI applies learned match levels and region logic to reduce false positives and handle dynamic content; classic visual diffing is more brittle. Learn more about Visual AI.

How do match levels help with dynamic content?

Layout, text, and color match levels tune sensitivity so teams can ignore cosmetic shifts while catching meaningful UI regressions.

Does Visual AI work with my framework (Selenium, Cypress, Playwright)?

Yes—Applitools’ drop-in SDKs let you run your existing tests and add a single Visual AI checkpoint. Learn how to quickly integrate Applitools into your current tech stack.

AI Test Automation Platform for Developers: Why Applitools Won in 2025
https://app14743.cloudwayssites.com/blog/ai-test-automation-platform-developer-perspective/
Tue, 17 Jun 2025 12:48:15 +0000

Applitools was named 2025 AI Test Automation Platform of the Year—not for hype, but for helping developers scale testing with Visual AI and real engineering speed.

The post AI Test Automation Platform for Developers: Why Applitools Won in 2025 appeared first on AI-Powered End-to-End Testing | Applitools.


Applitools was named CIO Review’s 2025 AI-Powered Test Automation Platform of the Year—not because we chased buzzwords, but because the platform is fundamentally AI-native, built for engineering scale, and designed to help developers test smarter without slowing down.

For developers, testers, and QA engineers, the award reflects what actually matters:

  • Reducing test flakiness
  • Automating visual and functional checks in parallel
  • Scaling test execution across browsers and devices
  • Plugging into CI/CD pipelines without disrupting existing workflows

Let’s break down what makes this platform stand out from a developer’s perspective.

AI-Native Testing, Not Bolt-On AI

Applitools isn’t a traditional test framework with AI sprinkled on top. It’s purpose-built to use Visual AI plus code-aware intelligence for smarter test coverage. That means:

  • You can catch regressions that DOM diffs would miss
  • You write fewer assertions, yet spot more visual and layout issues
  • You reduce false positives and test flakiness—without relying on brittle selectors

It’s AI-native automation that understands what the user sees, not just what the code renders.

Built for Real Engineering Workflows

Applitools supports every major language and framework, including:

  • Languages: JavaScript, TypeScript, Java, Python, C#, Ruby
  • Frameworks: Cypress, Playwright, Selenium, WebdriverIO, and more
  • Mobile: Appium and native frameworks

You don’t need to rip and replace. Applitools plugs directly into your current test suite with minimal setup and no test rewrites required.

Ultrafast Grid = Multi-Platform Testing Without the Bottlenecks

You run your tests once. Applitools executes them across dozens of browser, OS, and device combinations in parallel—via the Ultrafast Grid, not your CI or local machine.

That means:

  • Fast, scalable cross-browser coverage
  • Smart DOM diffing combined with Visual AI
  • Consistent UX testing across breakpoints and devices

No emulators. No stitched screenshots. Just reliable results, fast.

Seamless CI/CD Integration

Applitools fits natively into DevOps workflows with:

  • GitHub Actions, GitLab, Jenkins, CircleCI, Azure Pipelines, Bitbucket, TeamCity
  • Rich CLI tooling for custom pipelines
  • Git-based test baselines and approval workflows
  • Smart diffing and auto-approvals to keep noisy builds out of your way

For more, explore our Integrations Hub.

This is test automation that moves with your code, not one that slows it down.

Dev Teams Are Reporting…

Here’s what teams have seen after adopting Applitools:

  • Up to 80% reduction in test maintenance overhead
  • 10x faster execution across browsers and devices
  • 70% fewer visual bugs escaping into production
  • Faster code reviews with fewer test-related delays

Whether you’re validating a single feature branch or running thousands of tests in parallel, Applitools is built to support real scale—without compromising on accuracy.

Why This Award Actually Matters

The CIO Review award isn’t about hype. It’s a reflection of what forward-looking engineering teams need from test automation in 2025: more confidence, less friction, and AI that works.

If you’re building modern apps, you deserve modern testing. Applitools gives you a platform that evolves with your code, scales with your team, and delivers confidence without the test fatigue.


Applitools Resources for Developers


Quick Answers

How does Applitools reduce test flakiness in UI automation?

Applitools leverages Visual AI to detect meaningful visual changes, minimizing false positives caused by minor rendering differences. This approach reduces test flakiness and maintenance overhead, allowing developers to focus on actual issues rather than debugging unstable tests.

Can Applitools integrate with my existing test frameworks and CI/CD pipelines?

Yes, Applitools offers seamless integration with popular test frameworks like Selenium, Cypress, Playwright, and Appium. It also supports CI/CD tools such as Jenkins, GitHub Actions, and CircleCI, enabling you to incorporate visual testing into your existing workflows without significant changes. See the integrations.

What is the Ultrafast Grid, and how does it benefit cross-browser testing?

The Ultrafast Grid is Applitools’ cloud-based testing infrastructure that allows you to run visual tests across multiple browsers and devices in parallel. This accelerates cross-browser testing and ensures consistent user experiences across different platforms.

How does Applitools handle dynamic content in applications?

Applitools’ Visual AI intelligently distinguishes between meaningful visual changes and dynamic content variations. It can ignore expected dynamic elements like timestamps or user-specific data, focusing only on unexpected differences that may indicate bugs.

Is coding expertise required to create tests with Applitools?

While Applitools integrates well with code-based test frameworks, it also offers no-code and low-code options through its Autonomous platform. This allows team members with varying technical skills to create and maintain tests, promoting broader collaboration in the testing process. See how Applitools expands test automation across teams.

The post AI Test Automation Platform for Developers: Why Applitools Won in 2025 appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Playwright vs Selenium: What are the Main Differences and Which is Better? https://app14743.cloudwayssites.com/blog/playwright-vs-selenium/ Tue, 24 Sep 2024 14:41:00 +0000 https://app14743.cloudwayssites.com/?p=41852 Wondering how to choose between Playwright vs Selenium for your test automation? Check out a comparison between the two popular test automation tools.

The post Playwright vs Selenium: What are the Main Differences and Which is Better? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

Wondering how to choose between Playwright vs Selenium for your test automation? Read on to see a comparison between the two popular test automation tools.

When it comes to web test automation, Selenium has been the dominant industry tool for several years. However, there are many other automated testing tools on the market. Playwright is a newer tool that has been gaining popularity. How do their features compare, and which one should you choose?

What is Selenium?

Selenium is a long-running open source tool for browser automation. It was originally conceived in 2004 by Jason Huggins, and has been actively developed ever since. Selenium is a widely-used tool with a huge community of users, and the Selenium WebDriver interface even became an official W3C Recommendation in 2018.

The framework is capable of automating and controlling web browsers and interacting with UI elements, and it’s the most popular framework in the industry today. There are several tools in the Selenium suite, including:

  • Selenium WebDriver: WebDriver provides a flexible collection of open source APIs that can be used to easily test web applications
  • Selenium IDE: This record-and-playback tool enables rapid test development for both engineers and non-technical users
  • Selenium Grid: The Grid lets you distribute and run tests in parallel on multiple machines

The impact of Selenium goes even beyond the core framework, as a number of other popular tools, such as Appium and WebDriverIO, have been built directly on top of Selenium’s API.

Selenium is under active development and recently unveiled a major version update to Selenium 4. It supports just about all major browsers and popular programming languages. Thanks to a wide footprint of use and extensive community support, the Selenium open source project continues to be a formidable presence in the browser automation space.

What is Playwright?

Playwright is a relatively new open source tool for browser automation, with its first version released by Microsoft in 2020. It was built by the team behind Puppeteer, a headless browser automation library for Chrome/Chromium. Playwright goes beyond Puppeteer and provides support for multiple browsers, among other changes.

Playwright is designed for end-to-end automated testing of web apps. It’s cross-platform, cross-browser and cross-language, and includes helpful features like auto-waiting. It is specifically engineered for the modern web and generally runs very quickly, even for complex testing projects.

While far newer than Selenium, Playwright is picking up steam quickly and has a growing following. Due in part to its young age, it supports fewer browsers/languages than Selenium, but by the same token it also includes newer features and capabilities that are more aligned with the modern web. It is actively developed by Microsoft.

Selenium vs Playwright

Selenium and Playwright are both capable web automation tools, and each has its own strengths and weaknesses. Depending on your needs, either one could serve you best. Do you need a wider array of browser/language support? How much does a long track record of support and active development matter to you? Is test execution speed paramount? 

Each tool is open source, cross-language, and developer friendly. Both support CI/CD (via Jenkins, Azure Pipelines, etc.) and advanced features like screenshot testing and automated visual testing. However, there are key architectural and historical differences between the two that explain many of their practical trade-offs.

Selenium Architecture and History

  • Architecture: Selenium uses the WebDriver API to interact between web browsers and browser drivers. It operates by translating test commands into JSON and sending them over HTTP to the browser drivers, which execute the commands in the browser and send an HTTP response back.
  • History: Selenium has been in continuous operation and development for 18+ years. As a longstanding open source project, it offers broad support for browsers/languages, a wide range of community resources and an ecosystem of support.

Playwright Architecture and History

  • Architecture: Playwright uses a WebSocket connection rather than the WebDriver API and HTTP. This stays open for the duration of the test, so everything is sent on one connection. This is one reason why Playwright’s execution speeds tend to be faster.
  • History: Playwright is fairly new to the automation scene. It is faster than Selenium and has capabilities that Selenium lacks, but it does not yet have as broad a range of support for browsers/languages or community support. It is open source and backed by Microsoft.

Comparing Playwright vs Selenium Features

It’s important to consider your own needs and pain points when choosing your next test automation framework. The table below will help you compare Playwright vs Selenium.

  • Browser Support: Playwright supports Chromium, Firefox, and WebKit (note: Playwright tests browser projects, not stock browsers); Selenium supports Chrome, Safari, Firefox, Opera, Edge, and IE
  • Language Support: Playwright supports Java, Python, C# (.NET), TypeScript, and JavaScript; Selenium supports Java, Python, C#, Ruby, Perl, PHP, and JavaScript
  • Test Runner Framework Support: Playwright supports Jest/Jasmine, AVA, Mocha, and Vitest; Selenium supports Jest/Jasmine, Mocha, WebDriverIO, Protractor, TestNG, JUnit, and NUnit
  • Operating System Support: Playwright runs on Windows, macOS, and Linux; Selenium runs on Windows, macOS, Linux, and Solaris
  • Architecture: Playwright uses a headless browser with an event-driven architecture; Selenium uses a 4-layer architecture (Selenium Client Library, JSON Wire Protocol, Browser Drivers, and Browsers)
  • Integration with CI: both support CI integration
  • Prerequisites: Playwright requires NodeJS; Selenium requires Selenium bindings (for your language), browser drivers, and the Selenium Standalone Server
  • Real Device Support: Playwright offers native mobile emulation (and experimental real Android support); Selenium supports real device clouds and remote servers
  • Community Support: Playwright has a smaller but growing set of community resources; Selenium has a large, established collection of documentation and support options
  • Open Source: both are free and open source; Playwright is backed by Microsoft, Selenium by a large community

Should You Use Selenium or Playwright for Test Automation?

Is Selenium better than Playwright? Or is Playwright better than Selenium? Selenium and Playwright both have a number of things going for them – there’s no easy answer here. When choosing between Selenium vs Playwright, it’s important to understand your own requirements and research your options before deciding on a winner.

Selenium vs Playwright: Let the Code Speak

A helpful way to go beyond lists of features and try to get a feel for the practical advantages of each tool is to go straight to the code and compare real-world examples side by side. At Applitools, our goal is to make test automation easier for you – so that’s what we did! 

In the video below, you can see a head-to-head comparison of Playwright vs Selenium. Angie Jones and Andrew Knight take you through ten rounds of a straight-to-the-code battle, with the live audience deciding the winning framework for each round. Check it out for a unique look at the differences between Playwright and Selenium.

If you like these code battles and want more, we’ve also pitted Playwright vs Cypress and Selenium vs Cypress – check out all our versus battles here.

In fact, our original Playwright vs Cypress battle (recap here) was so popular that we’ve even scheduled our first rematch. Who will win this time? Register for the Playwright vs Cypress Rematch now to join in and vote for the winner yourself!

Learn More about Playwright vs Selenium

Want to learn more about Playwright or Selenium? Keep reading below to dig deeper into the two tools.

What are the main differences between Playwright and Selenium?

Playwright and Selenium are both powerful test automation frameworks, but Playwright is newer and optimized for speed with built-in support for modern browsers, while Selenium has broad compatibility and an established community. These differences make Playwright ideal for high-speed environments and Selenium suitable for extensive browser compatibility.

Is Playwright easier to set up than Selenium?

Yes, Playwright typically offers a simpler setup process with built-in browser support, reducing configuration time. Selenium setup may involve additional components like WebDriver for each browser, making it slightly more complex to configure.

Which tool is better for end-to-end testing?

For end-to-end testing, Selenium’s compatibility with a broader range of browsers makes it a good choice for diverse environments, while Playwright offers better speed and newer features.

Can Playwright and Selenium be used together in test automation?

While you can technically use Playwright and Selenium together, most teams choose one based on project needs. Playwright is often chosen for speed, while Selenium is preferred for its broad support and established ecosystem.

Which framework is better for handling complex UI interactions?

Playwright provides a modern API that simplifies complex UI interactions, such as hovering, clicking, and waiting for elements. Selenium also supports these interactions but often requires additional configuration or custom code.

The post Playwright vs Selenium: What are the Main Differences and Which is Better? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Writing Your First Appium Test For Android Mobile Devices https://app14743.cloudwayssites.com/blog/how-to-write-android-test-appium/ Wed, 21 Aug 2024 16:51:00 +0000 https://app14743.cloudwayssites.com/?p=35292 Learn how to create your first Appium test for Android in this comprehensive, step-by-step walkthrough.

The post Writing Your First Appium Test For Android Mobile Devices appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

This is the second post in our Hello World introduction series to Appium, and we’ll discuss how to create your first Appium test for Android. You can read the first post where we discussed what Appium is, including its core concepts and how to set up the Appium server. You can also read the next post on setting up your first Appium iOS test.

Key Takeaways

In this post, we’ll build on top of earlier basics and focus on the following areas:

  • Set up the Android SDK
  • Set up an Android emulator or real device
  • Set up Appium Inspector
  • Set up our first Android project with framework components

We have lots to cover, but don’t worry: by the end of this post, you will have run your first Appium-based Android test. Excited? ✊ Let’s go.

Set Up the Android SDK

To run Android tests, we need to set up the Android SDK, ADB (Android Debug Bridge), and some other utilities.

  • The Android SDK sets up Android on your machine and provides all the tools required for development or, in this case, automation.
  • Android Debug Bridge (ADB) is a command-line tool that lets you communicate with an Android device (either an emulator or a physical device).

The easiest way to set these up is to go to the Android site and download Android Studio (an IDE for developing Android apps), which will install all the required libraries and give us everything we need to run our first Android test.

Set Up SDK Platforms and Tools

Once downloaded and installed, open Android Studio, click Configure, and then SDK Manager. Using this, you can download any Android API version from the SDK Platforms tab.

Shows Android studio home page and how someone can open SDK Manager by tapping on Configure button

Also, you can install any desired SDK Tools from here. 

We’ll go with the defaults for now. The SDK Manager can also install any required updates for these tools, which makes upgrading quite convenient.

Shows SDK Tools in android studio

Add Android Home to Environment Variables

The Appium server needs to know where the Android SDK and related tools, like the emulator and platform-tools, are installed in order to run our tests.

We can do so by adding the variables below in the system environment.

On Mac/Linux:

  • Add the environment variables below in the shell profile of your choice (.bash_profile for bash or .zshrc for zsh). These are usually the paths where Android Studio installs the SDK.
  • Run source <shell_profile_file_name>, e.g. source ~/.zshrc
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$ANDROID_HOME/emulator:$PATH
export PATH=$ANDROID_HOME/tools:$PATH
export PATH=$ANDROID_HOME/tools/bin:$PATH
export PATH=$ANDROID_HOME/platform-tools:$PATH

If you are on Windows, you’ll need to add the path to Android SDK in the ANDROID_HOME variable under System environment variables.

Once done, run the adb command on the terminal to verify ADB is set up:

➜  appium-fast-boilerplate git:(main) adb
Android Debug Bridge version 1.0.41
Version 30.0.5-6877874
Installed as /Users/gauravsingh/Library/Android/sdk/platform-tools/adb

Those are a lot of tedious steps; in case you want to get set up quickly, you can execute this excellent script written by Anand Bagmar.

Set up an Android Emulator or Real Device

Our Android tests will run either on an emulator or a real Android device plugged in. Let’s see how to create an Android emulator image.

  • Open Android Studio
  • Click Configure
  • Click on AVD Manager

Shows Android studio home page and how someone can open AVD Manager by tapping on Configure button

You’ll be greeted with a blank screen with no virtual devices listed. Tap on Create a virtual device to launch the Virtual device Configuration flow:

Shows blank virtual devices screen with create virtual device button

Next, select an Android device like TV, Phone, Tablet, etc., and the desired size and resolution combination. 

It’s usually a good idea to set up an emulator with Play Store services available (look for the icon in the Play Store column), as certain apps might need the latest Play services to be installed.

We’ll go with Pixel 3a with Play Store available.

Shows device definition screen to select a device model

Next, we’ll need to select which Android version this emulator should run. You can choose any of the desired versions and download the image. We’ll choose Android Q (version 10.0).

Shows Android OS system image with option to Download and select one

You need to give this image a name. We’ll use it later in the Appium capabilities, so you can give it any meaningful name or go with the default. We’ll name it Automation.

Shows Android emulator configuration with AVD Name

Nice ✋ We have created our emulator. You can fire it up by tapping the Play icon under the Actions section.

Android Virtual Device Manager screen with and emulator named Automation

You should see an emulator boot up on your machine, similar to a real phone.

Setting up a Real Device

While the emulator is an in-memory image of the Android OS that you can quickly spin up and destroy, it does take up physical resources like RAM, CPU, and storage. It’s always a good idea to verify your app on a real device.

We’ll see how to set up a real device so that we can run automation on it.

You need to connect your device to your machine via USB. Once done:

  • Go to About Phone.
  • Tap on the Android build number multiple times until developer mode is enabled; you’ll see a toast message as you get closer to enabling this mode.
  • Go to Developer Options and Enable USB Debugging.

USB Debugging under Developer Options

And that’s all you need to run our automation on a connected real device.

Setting up Appium Inspector

Appium comes with a nifty inspector desktop app that can inspect your application under test, help you identify element locators (i.e. ways to identify elements on the screen), and even let you play around with the app.

It can connect to any running Appium server and is a really useful utility for identifying element locators and developing Appium scripts.

Download and Install Appium Inspector

You can download it by going to the Appium GitHub organization and searching for appium-inspector.

Appium Inspector Github page with releases option highlighted

Go to Releases and find the latest .dmg (on Mac) or .exe (on Windows) file to download and install.

Appium Inspector Github releases page with mac .dmg file highlighted

On Mac, you may get a warning stating: “Appium Inspector” can’t be opened because Apple cannot check it for malicious software. 

 Apple warning for malicious software

To mitigate this, go to System Preferences > Security & Privacy > General and click Open Anyway for Appium Inspector. For more details, see Issue 1217.

Give open anyway permission to Appium in System preferences

Setting up Desired Capabilities

Once you launch Appium Inspector, you’ll be greeted with the home screen below. Notice the JSON Representation area under the Desired Capabilities section.

 Steps highlighting how we can add our desired caps and save this config

What are Desired Capabilities?

Think of them as properties that you want your driver’s session to have. For example, you may want to reset your app after your test or launch the app in a particular orientation. These are all achievable by specifying key-value pairs in a JSON object that we provide to the Appium server session.

Please see Appium docs for more ideas on these capabilities.

Below are some sample capabilities we can give for Android:

{
    "platformName": "android",
    "automationName": "uiautomator2",
    "platformVersion": "10",
    "deviceName": "Automation",
    "app": "/<absolute_path_to_app>/ApiDemos-debug.apk",
    "appPackage": "io.appium.android.apis",
    "appActivity": "io.appium.android.apis.ApiDemos"
}
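Conceptually, these desired capabilities are nothing more than key-value pairs that get serialized to JSON when a session is created. As a dependency-free sketch of that idea in plain Java (the real code later in this post passes a similar map into Appium’s DesiredCapabilities class):

```java
import java.util.HashMap;
import java.util.Map;

public class CapsSketch {

    // Build the same key-value pairs as the JSON above, as a plain Java map.
    static Map<String, String> androidCaps() {
        Map<String, String> caps = new HashMap<>();
        caps.put("platformName", "android");
        caps.put("automationName", "uiautomator2");
        caps.put("platformVersion", "10");
        caps.put("deviceName", "Automation");
        return caps;
    }

    public static void main(String[] args) {
        // A session request would serialize this map back to JSON
        System.out.println(androidCaps().get("deviceName")); // prints "Automation"
    }
}
```

This is illustrative only; the Appium client libraries handle the actual serialization and session negotiation for you.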

For this post, we’ll use the sample apps provided by Appium. You can find them here. Once you’ve downloaded the app, keep track of its absolute path and update the app key in the JSON accordingly.

We’ll add this JSON under JSON representation and then tap on the Save button.

Saved caps in grid

It would be a good idea to save this config for the future. You can tap on ‘Save as’ and give it a meaningful name.

 Saving and giving caps a name

Starting the Inspector Session

To start the inspection session, you need to have an Appium server running locally; run it by typing appium on the command line:

➜  appium-fast-boilerplate git:(main) appium
[Appium] Welcome to Appium v2.0.0-beta.25
[Appium] Attempting to load driver uiautomator2...
[Appium] Appium REST http interface listener started on 0.0.0.0:4723
[Appium] Available drivers:
[Appium]   - uiautomator2@2.0.1 (automationName 'UiAutomator2')
[Appium] No plugins have been installed. Use the "appium plugin" command to install the one(s) you want to use.

Let’s make sure our emulator is also up and running. We can start the emulator via the AVD Manager in Android Studio, or, in case you are more command-line savvy, I have written an earlier post on how to do this via the command line as well.

Once done, tap on the Start Session button; this should launch the API Demos app and show the inspector home screen.

Appium inspector home screen with controls highlighted

Using this, you can tap on any element in the app and see its element hierarchy and element properties. This is very useful for authoring Appium scripts, so I encourage you to explore each section of this tool and get familiar with it, since you’ll be using it a lot.

Writing Our First Android Test Using Appium

Phew, that seemed like a lot of setup steps, but don’t worry: you only have to do this once. Now we can get down to the real business of writing our first automated test on Android.

You can download the project from GitHub: appium-fast-boilerplate.

We’ll also go over the fundamental concepts of writing page objects and tests, which this project is based on. Let’s take a look at the high-level architecture.

High-level architecture of an Android test written using Appium

What Would This Test Do?

Before automating any test, we need to be clear on its purpose. I’ve found the Arrange-Act-Assert pattern quite useful for reasoning about this. Read this post by Andrew Knight in case you are interested in knowing more about it.

Our test will perform the steps below:

  • Open the API Demos app
  • Navigate to the Log Text Box screen and tap Add once
  • Verify the log text is displayed

Our Test Class

Let’s start by seeing our test.

import constants.TestGroups;
import org.testng.Assert;
import org.testng.annotations.Test;
import pages.apidemos.home.APIDemosHomePage;

public class AndroidTest extends BaseTest {

    @Test(groups = {TestGroups.ANDROID})
    public void testLogText() {
        String logText = new APIDemosHomePage(this.driver)
                .openText()
                .tapOnLogTextBox()
                .tapOnAddButton()
                .getLogText();

        Assert.assertEquals(logText, "This is a test");
    }
}

There are a few things to notice above:

public class AndroidTest extends BaseTest

Our class extends BaseTest. This is useful since we can perform common setup and teardown functions there, including setting up driver sessions and closing them once our script is done.

This ensures that the tests stay as simple as possible and do not overload the reader with any more details than they need to see.

String logText = new APIDemosHomePage(this.driver)
                .openText()
                .tapOnLogTextBox()
                .tapOnAddButton()
                .getLogText();

We can see that our test reads like plain English, with a series of actions following each other. This is called the Fluent pattern, and we’ll see how it is set up in just a moment.
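The Fluent pattern itself is independent of Appium: each action returns either the same page object or the next one, so calls chain naturally. A self-contained sketch using hypothetical LoginPage/HomePage classes (no Appium involved):

```java
// Hypothetical pages illustrating the Fluent pattern: each method
// returns either "this" or the next page object, so calls chain.
class HomePage {
    private final String user;
    HomePage(String user) { this.user = user; }
    String getGreeting() { return "Hello, " + user; }
}

class LoginPage {
    private String user = "";
    LoginPage typeUsername(String user) {
        this.user = user;
        return this;               // stay on the same page
    }
    HomePage submit() {
        return new HomePage(user); // navigate to the next page
    }
}

public class FluentSketch {
    public static void main(String[] args) {
        // Reads as a sentence, just like the Appium test above
        String greeting = new LoginPage()
                .typeUsername("gaurav")
                .submit()
                .getGreeting();
        System.out.println(greeting); // prints "Hello, gaurav"
    }
}
```

In the real project, the page methods perform Appium actions before returning a page object, but the chaining mechanics are exactly the same.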

Base Test and Driver Setup

Let’s see our BaseTest class:

import constants.Target;
import core.driver.DriverManager;
import core.utils.PropertiesReader;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.testng.ITestContext;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

import java.io.IOException;

public class BaseTest {
    protected AppiumDriver driver;
    protected PropertiesReader reader = new PropertiesReader();

    @BeforeMethod(alwaysRun = true)
    public void setup(ITestContext context) {
        context.setAttribute("target", reader.getTarget());

        try {
            Target target = (Target) context.getAttribute("target");
            this.driver = new DriverManager().getInstance(target);
        } catch (IOException | PlatformNotSupportException e) {
            e.printStackTrace();
        }
    }

    @AfterMethod(alwaysRun = true)
    public void teardown() {
        driver.quit();
    }
}

Let’s unpack this class.

protected AppiumDriver driver;

We set our driver instance as protected so that all test classes will have access to it.

protected PropertiesReader reader = new PropertiesReader();

We create an instance of the PropertiesReader class to read relevant properties. This is useful since we want to be able to switch our driver instances based on different test environments and conditions. If you’re curious, please see its implementation here.
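The repo’s actual implementation is linked above; as a rough, stdlib-only sketch of what such a reader does (the target key name is an assumption, and we parse an in-memory string instead of a .properties file to stay self-contained):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropertiesReaderSketch {

    // Parse a properties document and return the "target" value,
    // falling back to a default platform if the key is absent.
    static String readTarget(String content) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props.getProperty("target", "ANDROID");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readTarget("target=ANDROID\n")); // prints "ANDROID"
        System.out.println(readTarget(""));                 // falls back to "ANDROID"
    }
}
```

Swapping the StringReader for a FileReader pointed at a .properties file gives you an environment-driven switch without touching the tests.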

Target target = (Target) context.getAttribute("target");
this.driver = new DriverManager().getInstance(target);

We get the relevant Target and then use it to get an instance of AppiumDriver from a class called DriverManager.

Driver Manager to Setup Appium Driver

We’ll use this reusable class to:

  • Read the capabilities JSON file based on the platform (Android/iOS)
  • Set up a local driver instance with these capabilities
  • Evolve independently to set up the desired driver instances, whether local, in-house, or on a remote cloud lab

package core.driver;

import constants.Target;
import exceptions.PlatformNotSupportException;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;

import static core.utils.CapabilitiesHelper.readAndMakeCapabilities;

public class DriverManager {
    private static AppiumDriver driver;
    // For Appium < 2.0, append /wd/hub to the APPIUM_SERVER_URL
    String APPIUM_SERVER_URL = "http://127.0.0.1:4723";

    public AppiumDriver getInstance(Target target) throws IOException, PlatformNotSupportException {
        System.out.println("Getting instance of: " + target.name());
        switch (target) {
            case ANDROID:
                return getAndroidDriver();
            case IOS:
                return getIOSDriver();
            default:
                throw new PlatformNotSupportException("Please provide supported target");
        }
    }

    private AppiumDriver getAndroidDriver() throws IOException {
        HashMap map = readAndMakeCapabilities("android-caps.json");
        return getDriver(map);
    }

    private AppiumDriver getIOSDriver() throws IOException {
        HashMap map = readAndMakeCapabilities("ios-caps.json");
        return getDriver(map);
    }

    private AppiumDriver getDriver(HashMap map) {
        DesiredCapabilities desiredCapabilities = new DesiredCapabilities(map);

        try {
            driver = new AppiumDriver(
                    new URL(APPIUM_SERVER_URL), desiredCapabilities);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }

        return driver;
    }
}

You can observe:

  • The getInstance method takes a target and, based on that, tries to get either an Android or an iOS driver. In the future, if we want to run our tests against a cloud provider like HeadSpin, Sauce Labs, BrowserStack, or Applitools, this class could handle creating the relevant session.
  • Both getAndroidDriver and getIOSDriver read a JSON file with capabilities similar to those we saw in the Inspector section and then convert it into a Java HashMap, which is passed into the getDriver method that returns an AppiumDriver instance. With this, we could later pass in, say, an environment context and choose a different capabilities file based on it.

A Simple Page Object with the Fluent Pattern

Let’s take a look at the example of a page object that enables a Fluent pattern.

package pages.apidemos.home;

import core.page.BasePage;
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import pages.apidemos.logtextbox.LogTextBoxPage;

public class APIDemosHomePage extends BasePage {
    private final By textButton = By.xpath("//android.widget.TextView[@content-desc=\"Text\"]");
    private final By logTextBoxButton = By.xpath("//android.widget.TextView[@content-desc=\"LogTextBox\"]");

    public APIDemosHomePage(AppiumDriver driver) {
        super(driver);
    }

    public APIDemosHomePage openText() {
        WebElement text = getElement(textButton);
        click(text);

        return this;
    }

    public LogTextBoxPage tapOnLogTextBox() {
        WebElement logTextBoxButtonElement = getElement(logTextBoxButton);
        waitForElementToBeVisible(logTextBoxButtonElement);

        click(logTextBoxButtonElement);

        return new LogTextBoxPage(driver);
    }
}

Notice the following about this example page object class:

  • It defines two XPath locators using the By clause
  • The driver is initialized in the BasePage class
  • The test actions return either this (i.e. the current page) or the next page object to enable the Fluent pattern. Following this, we can create a sort of action graph, where any action that a test takes connects to the next set of actions that page can take, which makes writing test scripts a breeze.

Base Page Class

package core.page;

import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.util.List;

public class BasePage {
    protected AppiumDriver driver;

    public BasePage(AppiumDriver driver) {
        this.driver = driver;
    }

    public void click(WebElement elem) {
        elem.click();
    }

    public WebElement getElement(By by) {
        return driver.findElement(by);
    }

    public List<WebElement> getElements(By by) {
        return driver.findElements(by);
    }

    public String getText(WebElement elem) {
        return elem.getText();
    }

    public void waitForElementToBeVisible(WebElement elem) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        wait.until(ExpectedConditions.visibilityOf(elem));
    }

    public void waitForElementToBePresent(By by) {
        WebDriverWait wait = new WebDriverWait(driver, 10);
        wait.until(ExpectedConditions.presenceOfElementLocated(by));
    }

    public void type(WebElement elem, String text) {
        elem.sendKeys(text);
    }
}

Every page object inherits from a BasePage that wraps Appium methods.

  • This provides us with an abstraction and allows us to create our own project-specific reusable actions and methods. This is a very good pattern to follow for a few reasons:
    • Say we want some custom functionality, like adding a logger line to our own logging infrastructure whenever an Appium action is performed; we can wrap those methods and do it in one place.
    • Also, if a future Appium release breaks an API, the change does not cascade to all our page objects, but only to this class, where we can handle it in an appropriate manner to provide backward compatibility.
  • A word of caution ⚠: do not dump every method into this class; try to compose only the relevant actions.
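As an illustration of the logging point above, here is a dependency-free sketch of funneling every action through one wrapper. The Clickable interface is a hypothetical stand-in for a real Appium/Selenium WebElement:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an element; real code would use WebElement.
interface Clickable {
    void click();
}

class LoggingBasePage {
    final List<String> log = new ArrayList<>();

    // Every click funnels through this wrapper, so cross-cutting
    // concerns (logging, retries, waits) live in exactly one place.
    void click(String name, Clickable elem) {
        log.add("clicking: " + name);
        elem.click();
    }
}

public class WrapperSketch {
    public static void main(String[] args) {
        LoggingBasePage page = new LoggingBasePage();
        page.click("Add button", () -> { /* real tap would happen here */ });
        System.out.println(page.log); // prints "[clicking: Add button]"
    }
}
```

If Appium later changed its click API, only the wrapper body would need to change; none of the page objects calling it would be touched.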

Congratulations, you’ve written your first Appium Android test. You can run it either via the IDE or via a Gradle command:

./gradlew clean build runTests -Dtag="ANDROID" -Ddevice="ANDROID"

Conclusion

You can see the entire project on GitHub at appium-fast-boilerplate; clone it and play around with it. Hopefully, this post helps you get started with Android automation using Appium.

In the next post, we’ll dive into the world of iOS Automation with Appium and write our first Hello World test.

You could also check out https://automationhacks.io for other posts that I’ve written about software engineering and testing. 

As always, do share this with your friends or colleagues and if you have thoughts or feedback, I’d be more than happy to chat over at Twitter or elsewhere. Until next time. Happy testing and coding.

What is Appium, and why is it popular for Android testing?

Appium is an open-source tool for automating mobile apps on Android and iOS, known for its flexibility and support across different platforms. It enables testers to write tests in multiple programming languages, making it ideal for cross-platform testing.

How does Appium help automate Android app testing?

Appium allows testers to interact with Android apps just as users would, automating various actions and verifying that the app works correctly. This streamlines testing, improves efficiency, and ensures consistent functionality across devices.

What are the prerequisites for using Appium to test Android apps?

To use Appium for Android testing, you’ll need Java (or another language compatible with Appium), an IDE like IntelliJ, the Appium server, and Android Studio or SDK. Setting up these tools enables you to automate and execute tests on Android applications.

What types of tests can be run on Android apps using Appium?

Appium supports various types of tests, including functional, UI, and end-to-end tests, to validate the app’s performance and user experience. This comprehensive testing helps ensure the app works smoothly for end users.

What are the benefits of using Appium Inspector in Android testing?

Appium Inspector allows testers to view the app’s element hierarchy, identify UI elements, and generate code for locating elements, simplifying test creation. This tool enhances test accuracy and speeds up test development.

The post Writing Your First Appium Test For Android Mobile Devices appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
What is Visual Testing? https://app14743.cloudwayssites.com/blog/visual-testing/ https://app14743.cloudwayssites.com/blog/visual-testing/#respond Wed, 07 Aug 2024 16:41:12 +0000 https://app14743.cloudwayssites.com/blog/?p=5069 Visual testing evaluates the visible output of an application and compares that output against the results expected by design. You can run visual tests at any time on any application with a visual user interface.

The post What is Visual Testing? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Visual testing

Learn what visual testing is, why visual testing is important, how to catch visual bugs, the differences between visual and functional testing, and how you can get started with automated visual testing today.

What is Meant By Visual Testing?

Visual testing evaluates the visible output of an application and compares that output against the results expected by design. In other words, it helps catch “visual bugs” in the appearance of a page or screen, which are distinct from strictly functional bugs. Automated visual testing tools, like Applitools, can help speed this visual testing up and reduce errors that occur with manual verification.

You can run visual tests at any time on any application with a visual user interface. Most developers run visual tests on individual components during development, and on a functioning application during end-to-end tests.

In today’s world of HTML, web developers create pages that appear across a mix of browsers and operating systems. Because HTML and CSS are standards, front-end developers want a ‘write once, run anywhere’ approach to their software. In practice, that translates to “let QA sort out the implementation issues,” and QA is still stuck checking each possible output combination for visual bugs.

This explains why, when I worked in product management, QA engineers would ask me all the time, “Which platforms are most important to test against?” If you’re like most QA team members, your test matrix has probably exploded: multiple browsers, multiple operating systems, multiple screen sizes, multiple fonts—and dynamic responsive content that renders differently on each combination.

If you are with me so far, you’re starting to answer the question: why do visual testing?

Why is Visual Testing Important?

We do visual testing because visual errors happen—more frequently than you might realize. Take a look at this visual bug on Instagram’s app:

The text and ad are crammed together. If this was your ad, do you think there would be a revenue impact? Absolutely.

Visual bugs happen at other companies too: Amazon. Google. Slack. Robinhood. Poshmark. Airbnb. Yelp. Target. Southwest. United. Virgin Atlantic. OpenTable. These aren’t cosmetic issues. In each case, visual bugs are blocking revenue.

If you need to justify spending money on visual testing, share these examples with your boss.

All these companies are able to hire some of the smartest engineers in the world. If it happens to Google, Instagram, or Amazon, it probably can happen to you, too.

Why do these visual bugs occur? Don’t they do functional testing? They do — but it’s not enough.

Visual bugs are rendering issues. Rendering validation is not what functional testing tools are designed to catch. Functional testing measures functional behavior.

Why can’t functional tests cover visual issues?

Sure, functional test scripts can validate the size, position, and color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to checkpoint bloat.

To see what I mean, let’s look at an Instagram ad screen that’s properly rendered. There are 21 visual elements by my count: various icons and text. (This ignores iOS elements at the top like WiFi signal and time, since those aren’t controlled by the Instagram app.)


If you used traditional checkpoints in a functional testing tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium, you’d have to check the following for each of those 21 visual elements:

  1. Visible (true/false)
  2. Upper-left x,y coordinates
  3. Height
  4. Width
  5. Background color

That means you’d need the following number of assertions:

21 visual elements x 5 assertions per element = 105 lines of assertion code

Even with all this assertion code, you wouldn’t be able to detect all visual bugs, such as a visual element that can’t be accessed because it’s covered up (the very issue that blocked revenue in the above examples from Yelp, Southwest, United, and Virgin Atlantic). And you’d miss subtleties like the brand logo, or the red dot under the heart.

But it gets worse: if the OS, browser, screen orientation, screen size, or font size changes, your app’s appearance will change as a result. That means you have to write another 105 lines of functional test assertions for EACH combination of OS, browser, font size, screen size, and screen orientation.

You could end up with thousands of lines of assertion code — any of which might need to change with a new release. Trying to maintain that would be sheer madness. No one has time for that.
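To make the maintenance cost concrete, here’s a minimal sketch of that arithmetic. This is pure illustration, not a real test suite; the element and check counts come from the Instagram example above, and the configuration count is a made-up placeholder.

```java
// Sketch of the checkpoint-bloat arithmetic described above.
class CheckpointBloat {
    static int assertionLines(int visualElements, int checksPerElement) {
        return visualElements * checksPerElement;
    }

    public static void main(String[] args) {
        // 21 elements, 5 checks each (visible, x/y, height, width, background color)
        int perConfiguration = assertionLines(21, 5);
        System.out.println(perConfiguration + " assertion lines per configuration"); // 105

        // ...and that count repeats for every OS/browser/screen combination you support.
        int configurations = 10; // placeholder; real matrices are far larger
        System.out.println(perConfiguration * configurations
                + " lines for just " + configurations + " configurations"); // 1050
    }
}
```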

You need visual testing because visual errors occur. And you need visual testing because you cannot rely on functional tests to catch visual errors.

What is Manual Visual Testing?

Because automated functional testing tools are poorly suited for finding visual bugs, companies find visual glitches using manual testers. Lots of them (more on that in a bit).

For these manual testers, visual testing behaves a lot like this spot-the-difference game:

To understand how time-consuming visual testing can be, get out your phone and time how long it takes for you to find all six visual differences. I took a minute to realize that the writing in the panels doesn’t count. It took me about 3 minutes to spot all six. Or, you can cheat and look at the answers.

Why does it take so long? Some differences are difficult to spot. In other cases, our eyes trick us into finding differences that don’t exist.

Manual visual testing means comparing two screenshots, one from your known good baseline image, and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues. Especially if the page is long, or has a lot of visual elements. Think “Where’s Waldo”…

Challenges of manual testing

If you’re a manual tester or someone who manages them, you probably know how hard it is to visually test.

If you are a test engineer reading this paragraph, you already know this: web page testing only starts with checking the visual elements and their function on a single combination of operating system, browser, orientation, and browser dimensions. Then you continue on to the other combinations. That’s where a huge amount of test effort lies: not in the functional testing, but in inspecting visual elements across every combination of operating system, browser, screen orientation, and browser dimensions.

To put it in perspective, imagine you need to test your app on:

  • 5 operating systems: Windows, macOS, Android, iOS, and ChromeOS.
  • 5 popular browsers: Chrome, Firefox, Internet Explorer (Windows only), Microsoft Edge (Windows only), and Safari (Mac only).
  • 2 screen orientations for mobile devices: portrait and landscape.
  • 10 standard mobile device display resolutions and 18 standard desktop/laptop display resolutions, from XGA to 4K.

If you’re doing the math, that’s the 21 browser/OS combinations, multiplied by the two orientations of the ten mobile resolutions (2×10=20) added to the 18 desktop display resolutions:

21 x (20+18) = 21 x 38 = 798 Unique Screen Configurations to test

That’s a lot of testing—for just one web page or screen in your mobile app.

Except that it’s worse. Let’s say your app has 100 pages or screens to test.

798 Screen Configurations x 100 Screens in-app = 79,800 Screen Configurations to test

Meanwhile, companies are releasing new app versions into production as frequently as once a week, or even once a day.

How many manual testers would you need to test 79,800 screen configurations in a week? Or a day? Could you even hire that many people?

Wouldn’t it be great if there was a way to automate this crazy-tedious process?

Well, yes there is…

What is Automated Visual Testing?

Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover visual defects.

Automated visual testing piggybacks on your existing functional test scripts running in a tool like Selenium WebDriver, Cypress, WebdriverIO, or Appium. As your script drives your app, each action changes the visual elements on the page, so each step of a functional test creates a new UI state you can visually test.

Automated visual testing evolved from functional testing. Rather than descending into the madness of writing assertions to check the properties of each visual element, automated visual testing tools check the appearance of an entire screen with just one assertion statement. This leads to test scripts that are MUCH simpler and easier to maintain.

But, if you’re not careful, you can go down an unproductive rat hole. I’m talking about Snapshot Testing.

What is Snapshot Testing?

First-generation automated visual testing uses a technology called snapshot testing. With snapshot testing, a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap.

Snapshot testing algorithms are very simplistic: iterate through each pixel pair, then check if the color hex code is the same. If the color codes are different, raise a visual bug.
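The naive algorithm just described is simple enough to sketch in a few lines of Java. The tiny integer arrays below stand in for real screenshot bitmaps; this is an illustration of the technique, not any particular tool’s implementation:

```java
// Naive snapshot comparison: flag a "visual bug" if ANY pixel's color differs.
class SnapshotDiff {
    static int diffCount(int[][] baseline, int[][] latest) {
        int diffs = 0;
        for (int y = 0; y < baseline.length; y++)
            for (int x = 0; x < baseline[y].length; x++)
                if (baseline[y][x] != latest[y][x]) diffs++;
        return diffs;
    }

    public static void main(String[] args) {
        int[][] baseline = {{0xFFFFFF, 0x000000}, {0xFF0000, 0xFFFFFF}};
        // Anti-aliasing nudged one red pixel imperceptibly (0xFF0000 -> 0xFE0000)...
        int[][] latest   = {{0xFFFFFF, 0x000000}, {0xFE0000, 0xFFFFFF}};
        // ...and the naive comparison still reports a "bug".
        System.out.println("Differing pixels: " + diffCount(baseline, latest)); // 1
    }
}
```

This fragility, one imperceptible pixel change producing a reported failure, is exactly the false-positive problem discussed in the next section.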

Because they can be built relatively easily, there are a number of open-source and commercial snapshot testing tools. Unlike human testers, snapshot testing tools can spot pixel differences quickly and consistently. And that’s a step forward. A computer can highlight the visual differences in the Hocus Focus cartoon easily. A number of these tools market themselves as enabling “pixel-perfect testing”.

Sounds like a good idea, right?

What are the Problems With Snapshot Testing?

Alas, pixels aren’t visual elements. Font smoothing algorithms, image resizing, graphics cards, and even rendering algorithms generate pixel differences, and that’s just for static content. The actual content can also vary between any two renderings. As a result, a tool that expects exact pixel matches between two images can report a flood of pixel differences that aren’t real bugs.

If you want to see some examples of bitmap differences affecting snapshot testing, take a look at the blog post we wrote on this topic last year.

Unfortunately, while snapshot testing might make intuitive sense, practitioners like you are finding that successful bitmap comparison requires a stationary target, while your company continues to develop dynamic websites across a range of browsers and operating systems. You can try to force your app to behave a certain way, but you may not always succeed.

Can you share some details of Snapshot Testing Problems?

For example, even when testing on a single browser and operating system, you must:

  • Identify and isolate (mute) fields that change over time, such as radio signal strength, battery state, and blinking cursors.
  • Ignore user data that might otherwise change over time, such as visitor count.
  • Determine how to support testing content on your site that must change frequently – especially if you are a media company or have an active blog.
  • Consider how different hardware or software affects antialiasing.

When doing cross-browser testing, you must also consider:

  • Text wrapping, because you cannot guarantee the locations of text wrapping between two browsers using the same specifications. The text can break differently between two browsers, even with identical screen sizes.
  • Image rendering software, which can affect the pixels of font antialiasing as well as images and can vary from browser to browser (and even on a single browser among versions).
  • Image rendering hardware, which may render bitmaps differently.
  • Variations in browser font size and other elements that affect the text.

If you choose to pursue snapshot testing in spite of these issues, don’t be surprised if you end up joining the group of experienced testers who have tried, and then ultimately abandoned, snapshot testing tools.

Can I See Some Snapshot Testing Problems In Real Life?

Here are some quick examples of these real-life bitmap issues.

If you use pixel testing for mobile apps, you’ll need to deal with the very dynamic data at the top of nearly every screen: network strength, time, battery level, and more:

Then there’s dynamic content that shifts over time, such as news, ads, and user-submitted content, where you want to check that everything is laid out with proper alignment and no overlaps. Pixel comparison tools can’t test for these cases. Twitter’s user-generated content is even more dynamic, with new tweets, likes, retweets, and comment counts changing by the second.

Your app doesn’t even need to change to confuse pixel tools. If your baselines and test screenshots were captured on different machines with different display settings for anti-aliasing, that can turn pretty much the entire page into a false positive, like this:

Source: storybook.js.org

If you’re using pixel tools and you still have to track down false positives and expose false negatives, what does that say about your testing efficiency?

For these reasons, many companies throw out their pixel tools and go back to manual visual testing, with all of its issues.

There’s a better alternative: using AI—specifically computer vision—for visual testing.

How Do I Use AI for Automated Visual Testing?

The current generation of automated visual testing uses a class of artificial intelligence algorithms called computer vision as a core engine for visual comparison. Typically these algorithms are used to identify objects within images, such as in facial recognition. We call them visual AI testing tools.

AI-powered automated visual testing uses a learning algorithm to interpret the relationship between the rendered page and the intended display of its visual elements and their locations. Like pixel tools, AI-powered automated visual testing takes page snapshots as your functional tests run. Unlike pixel-based comparators, AI-powered automated visual test tools use these algorithms, rather than raw pixel matching, to determine when errors have occurred.

Unlike snapshot testers, AI-powered automated visual testing tools do not need special environments that remain static to ensure accuracy. Testing and real-world customer data show that AI testing tools have a high degree of accuracy even with dynamic content because the comparisons are based on relationships and not simply pixels.

Here’s a comparison of the kinds of issues that AI-powered visual testing tools can handle compared to snapshot testing tools:

Visual Testing Use Case    | Snapshot Testing | Visual AI
---------------------------|------------------|----------
Cross-browser testing      | No               | Yes
Account balances           | No               | Yes
Mobile device status bars  | No               | Yes
News content               | No               | Yes
Ad content                 | No               | Yes
User submitted content     | No               | Yes
Suggested content          | No               | Yes
Notification icons         | No               | Yes
Content shifts             | No               | Yes
Mouse hovers               | No               | Yes
Cursors                    | No               | Yes
Anti-aliasing settings     | No               | Yes
Browser upgrades           | No               | Yes
Some AI-powered test tools have been measured at a false positive rate of 0.001%, or one false positive in every 100,000 checks.

AI-Powered Test Tools In Action

An AI-powered automated visual testing tool can test a wide range of visual elements across a range of OS/browser/orientation/resolution combinations. Just running the first baseline rendering and functional test on a single combination is sufficient to guide an AI-powered tool to test results across the range of potential platforms.

Here are some examples of how AI-powered automated visual testing improves visual test results by awareness of content.

This is a comparison of two different USA Today homepage images. When an AI-powered tool performs a layout comparison, the layout framework matters, not the content: layout comparison ignores content differences and instead validates the existence and relative placement of the content. Compare that with a bitmap comparison of the same two pages (also called “exact comparison”):

Literally, every non-white space (and even some of the white space) is called out.

Which do you think would be more useful in your validation of your own content?

When Should I Use Visual Testing?

You can do automated visual testing with each check-in of front-end code, after unit testing and API testing, and before functional testing — ideally as part of your CI/CD pipeline running in Jenkins, Travis, or another continuous integration tool.

How often? On days ending with “y”. 🙂

Because of their accuracy, AI-powered automated visual testing tools can be deployed beyond pre-production functional and visual testing. AI-powered automated visual testing can help developers understand how visual components will render across various systems. In addition to running in development, test engineers can also validate new code against existing platforms, and new platforms against running code.

AI-powered tools like Applitools allow different levels of smart comparison.

AI-powered visual testing tools are a key validation tool for any app or web presence that regularly changes content and format. For example, media companies change their content as frequently as twice per hour and use AI-powered automated testing to isolate real errors that affect paying customers without impacting them. These tools are also key for any app or web presence going through a brand revision or merger, since their low error rate and high accuracy let companies identify and fix problems caused by the major DOM, CSS, and JavaScript changes at the core of those updates.

Talk to Applitools

Applitools is the pioneer and leading vendor in AI-powered automated visual testing. Applitools has a range of options to help you become incredibly productive in application testing. We can help you test components in development. We can help you find the root cause of the visual errors you have encountered. And, we can run your tests on an Ultrafast Grid that allows you to recreate your visual test in one environment across a number of others on various browser and OS configurations. Our goal is to help you realize the vision we share with our customers – you need to create functional tests for only one environment and let Applitools run the validation across all your customer environments after your first test has passed. We’d love to talk testing with you – feel free to reach out to contact us anytime.

More To Read About Visual Testing

If you liked reading this, here are some more Applitools posts and webinars for you.

  1. Visual Testing for Mobile Apps by Angie Jones
  2. Visual Assertions – Hype or Reality? – by Anand Bagmar
  3. The Many Uses of Visual Testing by Angie Jones
  4. Visual UI Testing as an Aid to Functional Testing by Gil Tayar
  5. Visual Testing: A Guide for Front End Developers by Gil Tayar

Find out more about Applitools. Set up a live demo with us, or if you’re the do-it-yourself type, sign up for a free Applitools account and follow one of our tutorials.

Quick Answers

How does visual testing differ from functional testing?

Functional testing checks if features work as expected, while visual testing verifies that the UI displays correctly. Together, they ensure both the functionality and appearance of an application meet quality standards.

How does automated visual testing work?

Automated visual testing captures screenshots of the application’s UI and compares them against baseline images to detect visual differences. When changes are identified, the tool flags them for review, making it easy to spot unintended UI shifts. However, a tool like Applitools also incorporates AI to intelligently detect changes, distinguishing between acceptable design variations and real bugs.

Can visual testing help prevent regression issues?

Yes, visual testing helps prevent visual regressions by catching unintended UI changes after code updates. This ensures the UI remains consistent and functional across releases, reducing the risk of visual bugs reaching production.

What types of issues can visual testing detect?

Visual testing detects issues such as misaligned elements, missing images, font changes, and color discrepancies. It’s essential for maintaining design accuracy and preventing visual bugs that impact user experience.

Why should teams adopt visual testing as part of their quality assurance process?

Visual testing catches UI bugs that functional tests might miss, ensuring a high-quality, visually consistent user experience. Incorporating visual testing helps teams maintain design integrity and detect visual regressions early in development.

The post What is Visual Testing? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/visual-testing/feed/ 0
Power Up Your Test Automation with Playwright https://app14743.cloudwayssites.com/blog/power-up-your-test-automation-with-playwright/ Thu, 31 Aug 2023 12:53:00 +0000 https://app14743.cloudwayssites.com/?p=52108 As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust...

The post Power Up Your Test Automation with Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Locator Strategies with Playwright

As a test automation engineer, finding the right tools and frameworks is crucial to building a successful test automation strategy. Playwright is an end-to-end testing framework that provides a robust set of features to create fast, reliable, and maintainable tests.

In a recent webinar, Playwright Ambassador and TAU instructor Renata Andrade shared several use cases and best practices for using the framework. Here are some of the most valuable takeaways for test automation engineers:

Use Playwright’s built-in locators for resilient tests.
Playwright recommends using attributes like “text”, “aria-label”, “alt”, and “placeholder” to find elements. These locators are less prone to breakage, leading to more robust tests.

Speed up test creation with the code generator.
The Playwright code generator can automatically generate test code for you. This is useful when you’re first creating tests to quickly get started. You can then tweak and build on the generated code.

Debug tests and view runs with UI mode and the trace viewer.
Playwright’s UI mode and VS Code extension provide visibility into your test runs. You can step through tests, pick locators, view failures, and optimize your tests. The trace viewer gives you a detailed trace of all steps in a test run, which is invaluable for troubleshooting.

Add visual testing with Applitools Eyes.
For complete validation, combine Playwright with Applitools for visual and UI testing. Applitools Eyes catches unintended changes in UI that can be missed by traditional test automation.

Handle dynamic elements with the right locators.
Use a combination of attributes like “text”, “aria-label”, “alt”, “placeholder”, CSS, and XPath to locate dynamic elements that frequently change. This enables you to test dynamic web pages.

Set cookies to test personalization.
You can set cookies in Playwright to handle scenarios like A/B testing where the web page or flow differs based on cookies. This is important for testing personalization on websites.

Playwright provides a robust set of features to build, run, debug, and maintain end-to-end web tests. By leveraging the use cases and best practices shared in the webinar, you can power up your test automation and build a successful testing strategy using Playwright. Watch the full recording and see the session materials.

The post Power Up Your Test Automation with Playwright appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Selenium 4: Chrome DevTools Protocol [What’s New] https://app14743.cloudwayssites.com/blog/selenium-4-chrome-devtools/ https://app14743.cloudwayssites.com/blog/selenium-4-chrome-devtools/#respond Sun, 02 Jul 2023 21:38:00 +0000 https://app14743.cloudwayssites.com/?p=24506 In this post, we will discuss one of the most anticipated features of Selenium 4: the new APIs for CDP (Chrome DevTools Protocol)! This addition to the framework provides much greater control over the browser used for testing.

The post Selenium 4: Chrome DevTools Protocol [What’s New] appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

In the previous post of the Selenium 4 blog series, we discussed some of the new features in Selenium 4.

In this post, we will discuss one of the most anticipated features of Selenium 4: the new APIs for CDP (Chrome DevTools Protocol)! This addition to the framework provides much greater control over the browser used for testing.

I’ll share some of the capabilities of the Selenium 4 CDP APIs as well as practical use cases that can take our tests to the next level!

“The getDevTools() method returns the new DevTools object which allows you to send() the built-in Selenium commands for CDP. These commands are wrapper methods that make it cleaner and easier to invoke CDP functions.”

Shama Ugale

But first, what is Chrome DevTools?

Introduction to Chrome DevTools

Chrome DevTools is a set of tools built directly into Chromium-based browsers like Chrome, Opera, and Microsoft Edge to help developers debug and investigate websites.

With Chrome DevTools, developers have deeper access to the website and are able to:

  • Inspect Elements in the DOM
  • Edit elements and CSS on the fly
  • Check and monitor the site’s performance
  • Mock geolocations of the user
  • Mock faster/slower network speeds
  • Execute and debug JavaScript
  • View console logs
  • and so much more

Selenium 4 Chrome DevTools APIs

Selenium 4 has added native support for Chrome DevTools APIs. With these new APIs, our tests can now:

  • Capture and monitor the network traffic and performance
  • Mock geolocations for location-aware testing, localization, and internationalization
  • Change the device mode and exercise the responsiveness of the application

And that’s just the tip of the iceberg!

Selenium 4 introduces the new ChromiumDriver class, which includes two methods to access Chrome DevTools: getDevTools() and executeCdpCommand().

The getDevTools() method returns the new DevTools object which allows you to send() the built-in Selenium commands for CDP. These commands are wrapper methods that make it cleaner and easier to invoke CDP functions.

The executeCdpCommand() method also allows you to execute CDP methods but in a more raw sense. It does not use the wrapper APIs but instead allows you to directly pass in a Chrome DevTools command and the parameters for that command. The executeCdpCommand() can be used if there isn’t a Selenium wrapper API for the CDP command, or if you’d like to make the call in a different way than the Selenium APIs provide.

The Chromium-based drivers such as ChromeDriver and EdgeDriver now inherit from ChromiumDriver, so you also have access to the Selenium CDP APIs from these drivers as well.

Let’s explore how we can utilize these new Selenium 4 APIs to solve various use cases.

Simulating Device Mode

Most of the applications we build today are responsive, catering to end users on a variety of platforms, devices (phones, tablets, wearables, desktops), and orientations.

As testers, we might want to render our application at various dimensions to exercise its responsiveness.

How can we use Selenium’s new CDP functionality to accomplish this?

The CDP command to modify the device’s metrics is Emulation.setDeviceMetricsOverride, and this command requires input of width, height, mobile, and deviceScaleFactor. These four keys are mandatory for this scenario, but there are several optional ones as well.

In our Selenium tests, we could use the DevTools::send() method using the built-in setDeviceMetricsOverride() command, however, this Selenium API accepts 12 arguments – the 4 that are required as well as 8 optional ones. For any of the 8 optional arguments that we don’t need to send, we can pass Optional.empty().

However, to streamline this a bit by only passing the required parameters, I’m going to use the raw executeCdpCommand() instead as shown in the code below.

View the code on Gist.

On line 19, I create a Map with the required keys for this command.

Then on line 26, I call the executeCdpCommand() method and pass two parameters: the command name as “Emulation.setDeviceMetricsOverride” and the device metrics Map with the parameters.

On line 27, I open the “Google” homepage which is rendered with the specifications I provided as shown in the figure below.
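If the embedded Gist doesn’t render for you, here is a minimal sketch of the parameter Map those steps describe. The four keys are the required ones named above; the dimension values are illustrative (a phone-sized viewport), not taken from the original Gist, and the commented-out driver call shows where the Map would be used:

```java
import java.util.HashMap;
import java.util.Map;

// Builds the parameter Map for the CDP command Emulation.setDeviceMetricsOverride.
class DeviceMetricsParams {
    static Map<String, Object> deviceMetrics(int width, int height,
                                             boolean mobile, int deviceScaleFactor) {
        Map<String, Object> params = new HashMap<>();
        params.put("width", width);                         // required
        params.put("height", height);                       // required
        params.put("mobile", mobile);                       // required
        params.put("deviceScaleFactor", deviceScaleFactor); // required
        return params;
    }

    public static void main(String[] args) {
        Map<String, Object> params = deviceMetrics(375, 812, true, 3); // illustrative values
        // With a ChromiumDriver instance, this Map would be passed as:
        // driver.executeCdpCommand("Emulation.setDeviceMetricsOverride", params);
        System.out.println(params.keySet());
    }
}
```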


With a solution like Applitools Eyes, we can not only rapidly test across these different viewports with these new Selenium commands, but also maintain any inconsistencies at scale. Eyes is intelligent enough to not report false positives for small, imperceivable changes in the UI resulting from different browsers and viewports.

Simulate Network Speed

Many users access web applications via handheld devices connected to wifi or cellular networks. It’s not uncommon to encounter a weak network signal, and therefore a slower internet connection.

It may be important to test how your application behaves under such conditions where the internet connection is slow (2G) or goes offline intermittently.

The CDP command to fake a network connection is Network.emulateNetworkConditions. Information on the required and optional parameters for this command can be found in the documentation.

With access to Chrome DevTools, it becomes possible to simulate these scenarios. Let’s see how.

View the code on Gist.

On line 21, we get the DevTools object by calling getDevTools(). Then we invoke the send() method to enable the Network, and then call send() again to pass in the built-in command Network.emulateNetworkConditions() and the parameters we’d like to send with this command.

Then finally, we open the Google homepage with the network conditions simulated.
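As a companion to the Gist, here is an illustrative sketch of the parameter set for Network.emulateNetworkConditions, shown in the raw executeCdpCommand() style described earlier rather than the send() wrapper used in the sample. The CDP documentation lists offline, latency, downloadThroughput, and uploadThroughput as required; the specific numbers below are made-up “slow 2G-ish” values, not taken from the original Gist:

```java
import java.util.HashMap;
import java.util.Map;

// Builds the parameter Map for the CDP command Network.emulateNetworkConditions.
class NetworkConditionsParams {
    static Map<String, Object> slowNetwork() {
        Map<String, Object> params = new HashMap<>();
        params.put("offline", false);              // stay online, just throttled
        params.put("latency", 500);                // added round-trip latency in ms
        params.put("downloadThroughput", 20_000);  // bytes per second
        params.put("uploadThroughput", 10_000);    // bytes per second
        return params;
    }

    public static void main(String[] args) {
        System.out.println(slowNetwork());
        // With a ChromiumDriver instance, this Map would be passed as:
        // driver.executeCdpCommand("Network.emulateNetworkConditions", slowNetwork());
    }
}
```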

Mocking Geolocation

Testing the location-based functionality of applications, such as different offers, currencies, taxation rules, freight charges, and date/time formats for various geolocations, is difficult because setting up infrastructure in all of those physical locations is often not a cost-effective solution.

With mocking the geolocation, we could cover all the aforementioned scenarios and more.

The CDP command to fake a geolocation is Emulation.setGeolocationOverride. Information on the required and optional parameters for this command can be found in the documentation.

How can we achieve this with Selenium? Let’s walk through the sample code.

View the code on Gist.

After obtaining a DevTools object, we can use its send() method to invoke the Emulation.setGeolocationOverride command, sending the latitude, longitude, and accuracy.

After overriding the geolocation, we open mycurrentlocation.net and see that North Carolina is the detected geolocation.
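A hedged sketch of the Gist code, again using the version-specific v85 CDP package for illustration. The coordinates are illustrative values for the Raleigh, North Carolina area:

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.emulation.Emulation;

public class GeolocationTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Override the browser's reported position (illustrative coordinates).
        devTools.send(Emulation.setGeolocationOverride(
                Optional.of(35.7796),   // latitude
                Optional.of(-78.6382),  // longitude
                Optional.of(1)));       // accuracy in meters

        driver.get("https://mycurrentlocation.net");
        driver.quit();
    }
}
```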


Capture HTTP Requests

With DevTools we can capture the HTTP requests the application makes and access the method, data, headers, and a lot more.

Let’s see how to capture the HTTP requests, including the URI and the request method, with the sample code.

View the code on Gist.

The CDP command to start capturing the network traffic is Network.enable. Information on the required and optional parameters for this command can be found in the documentation.

Within our code on line 22, we use the DevTools::send() method to send the Network.enable CDP command to enable capturing network traffic.

On line 23, a listener is added to observe all the requests made by the application. For each captured request, we then extract the URL with getRequest().getUrl() and the HTTP method with getRequest().getMethod().

On line 29, we open Google’s homepage, and the URI and HTTP method of every request made by the page are printed to the console.

Once we are done capturing the requests, we can send the CDP command Network.disable to stop capturing the network traffic as shown on line 30.
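Putting those steps together, a sketch of the approach looks roughly like this (v85 package for illustration; the exact Gist may differ):

```java
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.network.Network;

public class CaptureRequestsTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        devTools.send(Network.enable(
                Optional.empty(), Optional.empty(), Optional.empty()));

        // Print the HTTP method and URL of every request the page makes.
        devTools.addListener(Network.requestWillBeSent(), event ->
                System.out.println(event.getRequest().getMethod()
                        + " " + event.getRequest().getUrl()));

        driver.get("https://www.google.com");

        // Stop capturing network traffic.
        devTools.send(Network.disable());
        driver.quit();
    }
}
```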

Access Console logs

We all rely on logs for debugging and analyzing failures. While testing an application with specific data or under specific conditions, logs help us debug and capture error messages, giving us the insights that are published in the Console tab of Chrome DevTools.

We can capture the console logs through our Selenium scripts by calling the CDP Log commands as demonstrated below.

View the code on Gist.

Within our code on line 19, we use DevTools::send() to enable the console log capturing.

Then, we add a listener to capture all the console logs emitted by the application. For each captured log, we extract the log text with getText() and the log level with getLevel().

Finally, the application is opened and the console error logs published by the application are captured.
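As a hedged sketch of the same flow (version-specific v85 package used for illustration; the page URL is a placeholder for any application that writes to the console):

```java
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.log.Log;

public class ConsoleLogsTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Enable log capturing and print each console entry as it arrives.
        devTools.send(Log.enable());
        devTools.addListener(Log.entryAdded(), entry ->
                System.out.println(entry.getLevel() + ": " + entry.getText()));

        driver.get("https://example.com"); // placeholder application URL
        driver.quit();
    }
}
```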

Capturing Performance Metrics

Software today is built iteratively and at a fast pace, so we should aim to detect performance bottlenecks just as iteratively. Poorly performing websites and slow-loading pages make for unhappy customers.

Can we validate these metrics along with our functional regression on every build? Yes, we can!

The CDP command to capture performance metrics is Performance.enable. Information for this command can be found in the documentation.

Let’s see how it’s done with Selenium 4 and the Chrome DevTools APIs.

View the code on Gist.

First, we create a session by calling the createSession() method from DevTools as on line 19.

Next, we enable DevTools to capture the performance metrics by sending the Performance.enable() command to send() as shown on line 20.

Once performance capturing is enabled, we can open the application and then pass the Performance.getMetrics() command to send(). This returns a list of Metric objects, which we then stream to get the names of all the captured metrics, as on line 25.

We then disable capturing the Performance on line 29 by sending the Performance.disable() command to send().

To see the metrics we are interested in, we define a List called metricsToCheck and then loop through it to print the metrics’ values.
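The whole sequence, sketched under the same assumptions as the earlier snippets (v85 CDP package; the metric names in metricsToCheck are illustrative examples of names the Performance domain reports):

```java
import java.util.List;
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.performance.Performance;
import org.openqa.selenium.devtools.v85.performance.model.Metric;

public class PerformanceMetricsTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();

        // Start collecting performance metrics.
        devTools.send(Performance.enable(Optional.empty()));
        driver.get("https://www.google.com");

        List<Metric> metrics = devTools.send(Performance.getMetrics());

        // Illustrative subset of metric names to print.
        List<String> metricsToCheck =
                List.of("Timestamp", "Documents", "Frames", "JSHeapUsedSize");
        metrics.stream()
                .filter(m -> metricsToCheck.contains(m.getName()))
                .forEach(m -> System.out.println(m.getName() + " = " + m.getValue()));

        devTools.send(Performance.disable());
        driver.quit();
    }
}
```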

Basic Authentication

Interacting with browser popups is not supported in Selenium, as it is only able to engage with DOM elements. This poses a challenge for pop-ups such as authentication dialogs.

We can bypass this by using the CDP APIs to handle the authentication directly with DevTools. The CDP command to set additional headers for the requests is Network.setExtraHTTPHeaders.

Here’s how to invoke this command in Selenium 4.

View the code on Gist.

We begin by using the DevTools object to create a session and enable Network. This is demonstrated on lines 25-26.

Next, we open our website and then create the authentication header to send.

On line 35, we send the setExtraHTTPHeaders command to send() along with the data for the header. This is the part that will authenticate us and allow us to bypass the browser popup.

To test this out, we then click the Basic Authentication test link. If you try this manually, you’ll see the browser popup asking you to log in. But since we sent the authentication header, we will not get this popup in our script.

Instead, we get the message “Your browser made it!”.
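A hedged sketch of the approach. The URL and the admin:admin credentials are illustrative (the-internet.herokuapp.com’s Basic Auth demo page uses them); substitute the site and credentials from the original Gist:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.Optional;

import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.network.Network;
import org.openqa.selenium.devtools.v85.network.model.Headers;

public class BasicAuthTest {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        devTools.send(Network.enable(
                Optional.empty(), Optional.empty(), Optional.empty()));

        // Build the Basic auth header (illustrative credentials).
        String encoded = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        devTools.send(Network.setExtraHTTPHeaders(
                new Headers(Map.of("Authorization", "Basic " + encoded))));

        // The login popup never appears; the page loads already authenticated.
        driver.get("https://the-internet.herokuapp.com/basic_auth");
        driver.quit();
    }
}
```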

Summary

As you can see, Selenium has become a lot more powerful with the addition of the CDP APIs. We can now enhance our tests to capture HTTP network traffic, collect performance metrics, handle authentication, and emulate geolocations, time zones, and device modes, along with anything else that is possible within Chrome DevTools!

In the next post of this series, we will explore more new Selenium 4 features such as Observability and enhanced exceptions along with the brand new Selenium Grid 4.

Quick Answers

How can I use the CDP API to simulate different device modes in Selenium 4?

With the Emulation.setDeviceMetricsOverride command, you can set specific screen dimensions, mobile views, and other parameters to test responsive designs. This enables you to see how your application performs on different devices directly within your tests.

What’s the benefit of simulating network speeds in testing?

Simulating network speeds, such as slow 2G or intermittent connectivity, helps identify how an application behaves under various real-world conditions. This is especially valuable for mobile users who might experience inconsistent network connections.

How do I capture HTTP requests in Selenium 4 using CDP?

You can capture HTTP requests by enabling the network with Network.enable and adding a listener to log requests. This allows you to monitor request URLs, methods, and other details, which can help troubleshoot issues related to API calls or network performance.

Why would I use geolocation mocking in Selenium testing?

Geolocation mocking is useful for testing location-based functionality, like currency, offers, or other regional settings, without physically being in each location. This is achievable with the Emulation.setGeolocationOverride command in the CDP API.

What are some practical use cases of the CDP APIs in Selenium 4?

The CDP APIs can be used to simulate device modes, capture network traffic, monitor console logs, manage basic authentication popups, and measure performance metrics. These capabilities help create realistic testing scenarios and improve the reliability of your automated tests.

The post Selenium 4: Chrome DevTools Protocol [What’s New] appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/selenium-4-chrome-devtools/feed/ 0
9 Reasons to Attend the 2023 Test Automation University Conference https://app14743.cloudwayssites.com/blog/9-reasons-to-attend-the-2023-test-automation-university-conference/ Tue, 14 Feb 2023 16:31:45 +0000 https://app14743.cloudwayssites.com/?p=47238 That’s right, Test Automation University (TAU) is holding a conference! It’ll be a two-day virtual event from March 8–9, 2023 hosted by Applitools. We will sharpen our software quality skills...

The post 9 Reasons to Attend the 2023 Test Automation University Conference appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

That’s right, Test Automation University (TAU) is holding a conference! It’ll be a two-day virtual event from March 8–9, 2023 hosted by Applitools. We will sharpen our software quality skills as we go all-in for the testing community. This article gives you nine reasons why you should attend this awesome event.

#1: Find out what’s new with TAU

TAU is alive and kicking, and we’ve got some incredible new developments in the works. In my opening keynote address, I’ll dish out all the details for upcoming courses, community engagements, and learning paths. There might even be a few surprises!

#2: Sharpen your skills with tutorials

Our first day will be stacked with six two-and-a-half-hour tutorials so you can really hone your automation skills. Want to stack up Cypress and Playwright? Filip Hric and I have you covered. Need to drop down to the API layer? Julia Pottinger’s got the session for you. Unit testing? Tariq King will help you think inside the box. Wanna crank it up with load and performance testing? Leandro will show you how to use k6. How about some modern Selenium with visual testing? Anand Bagmar’s our man. It’s the perfect time to sharpen your skills.

#3: Hear inspiring talks from our instructors

There’s no doubt that our TAU instructors are among the best in software testing. We invited many of them to share their knowledge and their wisdom on our second day. Our program is designed to inspire you to step up in your careers.

#4: Test your testing skills

Want to challenge your testing skills together with your other TAU classmates? Well, Carlos Kidman will be offering a live “testing test,” where everyone gets to submit answers in real time. The student with the highest score will win a pretty cool prize, so be sure to study up!

#5: Algorave at the halftime show

Have you ever listened to algorithmically-generated music that is coded live in front of you? It’s an experience! Dan Gorelick will kick out the algorave jams during our halftime show. (Glow sticks not included.)

#6: Cram even more content with lightning talks

Lightning talks bring bite-sized ideas to the front stage in rapid succession. They’re a welcome change of pace from full-length talks. We’ll have a full set of lightning talks from a few of our instructors along with a joint Q&A where you can ask them anything!

#7: Compete in extracurricular games

Every university needs some good extracurricular activities, right? We’ll have a mid-day game where you, as members of the audience, will get to participate! We’re keeping details under wraps for now so that nobody gets an unfair advantage.

#8: Win awards like a homecoming champion

What would a celebration be without superlative awards? We will recognize some of our best instructors and students in our closing ceremony. You might also win some prizes based on your participation in the conference!

#9: Get your conference tickets for FREE!

Applitools is hosting the 2023 TAU Conference as a free community event for everyone to attend. Just like our TAU courses, you don’t need to pay a dime to participate. You don’t even need to be a TAU student! Come join us for the biggest block party in the testing community this year. Register here to reserve your spot.

The post 9 Reasons to Attend the 2023 Test Automation University Conference appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Using TypeScript for Test Automation https://app14743.cloudwayssites.com/blog/typescript-is-not-only-for-developers-anymore/ Wed, 18 Jan 2023 21:25:08 +0000 https://app14743.cloudwayssites.com/?p=46025 TypeScript is not only for developers anymore. If you are working as a tester in a web development team, chances are that you have heard about TypeScript. It’s been getting...

The post Using TypeScript for Test Automation appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
TypeScript logo

TypeScript is not only for developers anymore. If you are working as a tester in a web development team, chances are that you have heard about TypeScript. It’s been getting more and more attention over the past couple of years and has even surpassed JavaScript as the dominant language in a recent CircleCI survey on the state of software delivery.

In my very un-scientific poll on LinkedIn, I asked fellow web application testers about the programming language of their choice. It seems that JavaScript is the most popular choice, winning over TypeScript and Java. But, in my opinion, testers should pay attention to the rising popularity of TypeScript and ideally start using it. In this article, I would like to take a closer look at many of TypeScript’s benefits for testing and automation.

What is TypeScript?

TypeScript is a programming language that is a superset of JavaScript. It adds many extra capabilities to JavaScript, improving the overall developer experience. As Basarat Ali Syed aptly puts it, TypeScript gives you the ability to use future JavaScript today. Besides that, TypeScript adds a type system into JavaScript, helping you write more stable and maintainable code. Let me give you an example of what that means.

Look at this plain JavaScript function:

const addition = (a, b) => {
 return a + b
}

This function takes two parameters and adds them up. It’s helpful if we need to sum two numbers. But what if we use this function in a way it was not intended? Or worse – what if we misunderstood how this function works?

addition(1, 2) // returns 3
addition('1', '2') // returns '12'

In the example above, our function will yield different results based on the type of input we provide it with. On the first line, we are passing two numbers, and our function will correctly add them up. On the second line, we are passing two strings, and since the input values are wrapped in quotation marks, the function will concatenate them instead of adding the numbers.

The function works as designed, but even if that’s the case, we may get into unexpected results. After all, this is why software testing is a thing.

But as developers work on more complex projects, on more complex problems, and with more complex data, the risks increase. This is where TypeScript becomes very helpful, since it can specify what kind of input a function expects. Let’s see how a similar function definition and function call would look in TypeScript:

const addition = (a: number, b: number) => {
 return a + b
}


addition(1, 2) // returns 3
addition('1', '2') // shows an error

In the function definition, we specify the types of the parameters that this function expects. If we try to use our addition() function incorrectly, the TypeScript compiler will complain and throw an error. So what is the TypeScript compiler, you ask?

How TypeScript works

TypeScript cannot be read by the browser. In order to run TypeScript code, it needs to be compiled. In other words, everything you create using TypeScript will be converted to JavaScript at some point.

To run the compiler, we can open the terminal and run the following command:

tsc addition.ts

This command points the TypeScript compiler (tsc) to our addition.ts file. It will create a JavaScript file alongside the original TypeScript file. If there are any “type errors” in the file, we’ll get an error in our terminal.

But you don’t have to do this manually every time. There’s a good chance your code editor has a TypeScript compiler running in the background as you type your code. With VS Code, this functionality comes out of the box, which makes sense since both TypeScript and VS Code are developed and maintained by Microsoft. Having the compiler running in the background is incredibly useful, mostly because this allows us to immediately see errors such as the one shown in the last example. Whenever there is such an error, we get feedback and an explanation:
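The exact message varies slightly between TypeScript versions, but it looks roughly like this:

```typescript
const addition = (a: number, b: number) => a + b

addition(1, 2)
// addition('1', '2')
// ^ Argument of type 'string' is not assignable to parameter of type 'number'. ts(2345)
```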

In this case, we are passing a string into a function that requires us to pass numbers. This information comes from a compiler that runs inside the editor (VS Code in my case).

Additionally, some modern testing tools such as Playwright and Cypress run the compiler on the fly and convert your TypeScript code into browser-readable JavaScript for you.

How to use TypeScript as a tester

Now that you know what TypeScript is and how it works, it is time to answer the most important question: why? Is TypeScript even useful for someone who focuses on test automation?

My answer is yes, and I would like to demonstrate this in a few examples. I’ll also give you a couple of reasons why I think test automation engineers should start getting familiar with TypeScript.

Typed libraries

As test automation engineers, we often bring in many different libraries for our work. There’s no single tool that fits every need, and we frequently deal with plugins, integrations, toolsets, extensions, and libraries. When you start, you need to familiarize yourself with the tool’s API, dig into the documentation, try it out, and understand the different commands. It’s a process. And it can be exhausting to get through that first mile.

TypeScript can help speed up that process. Libraries that ship type definitions can really get you up to speed with using them. When a library or a plugin contains type definitions, functions in that library get the same kind of type checking as in the example I gave earlier.

This means that whenever you pass a wrong argument to a function, you will see an error in your editor. In the following example, I am passing a number into the cy.type() function, which only accepts text:
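The original screenshot doesn’t reproduce here, so this is a reconstruction; the selector is hypothetical, and the exact error wording varies by TypeScript version:

```typescript
// Hypothetical Cypress spec. Because cy.type() only accepts text, the
// compiler flags the numeric argument before the test ever runs:
//
//   cy.get('#phone').type(2024)
//                         ~~~~
//   Argument of type 'number' is not assignable to parameter of type 'string'.
//
// Passing the value as a string satisfies the check:
//   cy.get('#phone').type('2024')
```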

Code autocompletion

Besides checking for correct arguments, TypeScript can give autocomplete suggestions. These can act as a quick search tool when you are looking for the right command or argument.

I have recently made a plugin for testing APIs with Cypress, and TypeScript autocompletion helps with passing different attributes such as url, method, or request body:
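As an illustration (this sketch assumes the cypress-plugin-api package; the endpoint and payload are hypothetical):

```typescript
// With the plugin's type definitions installed, each option autocompletes
// and is type-checked as you write it:
cy.api({
  method: 'POST',              // suggested from a union of HTTP methods
  url: '/api/boards',          // must be a string
  body: { name: 'new board' },
})
// A typo such as `metod`, or a numeric `url`, is flagged immediately
// in the editor rather than at runtime.
```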

Handling imported data

Working with data can often get complicated. Especially when working with complex datasets in which you have to navigate through multiple levels of data structure. One mistake can make the whole test fail and cause a headache when trying to debug that problem.

When data is imported to a test, for example from a JSON file, the structure of that JSON file is imported to the test as well. This is something that the TypeScript compiler inside the editor does for us. Let’s take a look at a simple JSON file that will seed data into our test:

While creating our test, the editor will guide us through the fixture file and suggest possible keys that can be inferred from that file:

Notice how we not only get the key names, but also the type of each key. We can use this type inference both for seeding the data into our tests and for making assertions. The best part about this? The fixture file and test are now interconnected. Whenever we change our fixture file, the test file will be affected as well. If, for example, we decide to change the name property in our JSON file to title, the TypeScript compiler will notice the error even before we run our test.
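Since the fixture file and editor screenshots don’t carry over here, a self-contained sketch of the idea is below. The board object stands in for a hypothetical board.json fixture; with "resolveJsonModule" enabled in tsconfig.json, importing the JSON file gives the compiler the same shape:

```typescript
// Stand-in for a hypothetical fixtures/board.json file.
const board = {
  id: 1,
  name: 'new project',
  starred: false,
}

// The editor suggests id, name, and starred, each with its inferred type.
const boardName: string = board.name
// const boardId: string = board.id
// ^ Type 'number' is not assignable to type 'string'.

console.log(boardName)
```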

Tying test code to source code

Probably my strongest argument for using TypeScript is the connection between test code and source code. Being able to tie things together is a game changer. Whenever the source code changes, it can have a direct effect on tests, and with TypeScript, there’s a high chance it will show even before you run your tests on a pipeline.

Let’s say you have an API test. If your developers use TypeScript, chances are they have a TypeScript definition of the API structure. As a tester, you can import that structure into your test and use it e.g. for testing the API response:

import Board from "trelloapp/src/typings/board";


it('Returns proper response when creating new board', () => {
 cy.request<Board>('POST', '/api/boards', { name })
   .then(({ body }) => {
     expect(body.name).to.eq(name)
     expect(body.id).to.exist
   })
 })

Similarly to the previous example with fixture files, whenever something changes in our typings file, we will notice the change in the test file as well.

Checking errors on CLI

A really powerful feature of TypeScript is the ability to check all errors in the project. We can do this by typing the following command in the terminal:

tsc --noEmit

The --noEmit flag means that the TypeScript compiler will not create JavaScript files, but it will still check our files for errors. It checks every file in the project, which means that even if we have worked on a single file, all the files will be checked.

We can check the health of our tests even before they are run. Whenever we change files in our project, we can verify that the type checks are still passing, without even opening any files.

This is sometimes referred to as “static testing”. It adds a layer of checks that helps us make fewer mistakes in our code.

A great advantage of this is that it runs super fast, making it a good candidate for a pre-commit check.

In fact, setting up such a check is very easy and can be done in three simple steps:

First, install the pre-commit package via npm using the following command:

npm install pre-commit --save-dev

Then, create a lint script in package.json:

"scripts": {
 "lint": "tsc --noEmit"
}

As a final step, define lint as one of the pre-commit checks in package.json:

"pre-commit": [ "lint" ]

From now on, whenever we try to commit, our tsc --noEmit script will run. If it throws any errors, we will not be able to commit the staged files.

Conclusion

TypeScript offers a variety of advantages. Static typing can improve the reliability and maintainability of test code. It can help catch errors and issues earlier in the development process, saving time and effort spent on debugging. And there are many more benefits that didn’t make it into this post.

Since TypeScript is built on top of JavaScript, test automation engineers who are familiar with JavaScript will be able to easily pick up TypeScript. The knowledge and skills they have developed in JavaScript will transfer over. If you need the initial push, you can check out my new course on TypeScript in Cypress, where I explain the basics of TypeScript within Cypress, but most of the knowledge can be transferred to other tools as well. The best part? It’s absolutely free on Test Automation University.

Quick Answers

What is TypeScript, and why is it useful for test automation?

TypeScript is a superset of JavaScript that adds static typing, improving code reliability and maintainability. For test automation, it helps catch errors early, making tests more stable and less prone to bugs.

How does TypeScript improve code reliability in testing?

TypeScript enforces type safety, ensuring input and output types are consistent. This reduces runtime errors and makes automated tests more predictable and easier to debug.

Can TypeScript be used with tools like Cypress?

Yes, TypeScript integrates well with tools like Cypress. Many modern testing tools convert TypeScript into JavaScript during the testing process, allowing seamless compatibility.

Can TypeScript be used with Playwright?

Yes, Playwright supports TypeScript out of the box, transforming it into JavaScript during test execution. It’s recommended to run the TypeScript compiler alongside Playwright for type checking and error detection.

How does TypeScript help with using libraries and plugins in testing?

TypeScript provides type definitions for libraries and plugins, which ensures correct usage. It also offers code autocompletion, making it easier to find and use available functions.

What are the benefits of TypeScript for handling test data?

TypeScript can infer the structure and types of imported data, such as JSON files. This helps testers avoid mistakes in data handling and ensures tests remain synchronized with data changes.

Is switching to TypeScript challenging for testers familiar with JavaScript?

No, since TypeScript builds on JavaScript, testers can transfer their existing skills. The added features of TypeScript are easy to learn and bring significant benefits to testing workflows.

How can TypeScript be used to check for errors before running tests?

The TypeScript compiler can perform static code checks, identifying errors across files before tests are executed. This saves time and minimizes debugging during the actual testing phase.

The post Using TypeScript for Test Automation appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
The Top 10 Test Automation University Courses https://app14743.cloudwayssites.com/blog/the-top-10-test-automation-university-courses/ Thu, 10 Nov 2022 16:34:00 +0000 https://app14743.cloudwayssites.com/?p=44406 Test Automation University (also called “TAU”) is one of the best online platforms for learning testing and automation skills. TAU offers dozens of courses from the world’s leading instructors, and...

The post The Top 10 Test Automation University Courses appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Test Automation University, powered by Applitools

Test Automation University (also called “TAU”) is one of the best online platforms for learning testing and automation skills. TAU offers dozens of courses from the world’s leading instructors, and everything is available for free. The platform is proudly powered by Applitools. As of November 2022, nearly 140,000 students have signed up! TAU has become an invaluable part of the testing community at large. Personally, I know many software teams who use TAU courses as part of their internal onboarding and mentorship programs.

So, which TAU courses are currently the most popular? In this list, we’ll count down the top 10 most popular courses, ranked by the total number of course completions over the past year. Let’s go!

#10: Selenium WebDriver with Java

Selenium WebDriver with Java course badge
Angie Jones

Starting off the list at #10 is Selenium WebDriver with Java by none other than Angie Jones. Even with the rise of alternatives like Cypress and Playwright, Selenium WebDriver continues to be one of the most popular tools for browser automation, and Java continues to be one of its most popular programming languages. Selenium WebDriver with Java could almost be considered the “default” choice for Web UI test automation.

In this course, Angie digs deep into the WebDriver API, teaching everything from the basics to advanced techniques. It’s a great course for building a firm foundation in automation with Selenium WebDriver.

#9: Python Programming

Python Programming course badge
Jess Ingrassellino

#9 on our list is one of our programming courses: Python Programming by Jess Ingrassellino. Python is hot right now. On whatever ranking, index, or article you find these days for the “most popular programming languages,” Python is right at the top of the list – often vying for the top spot with JavaScript. Python is also quite a popular language for test automation, with excellent frameworks like pytest, libraries like requests, and bindings for browser automation tools like Selenium WebDriver and Playwright.

In this course, Dr. Jess teaches programming in Python. This isn’t a test automation course – it’s a coding course that anyone could take. She covers both structured programming and object-oriented principles from the ground up. After two hours, you’ll be ready to start coding your own projects!

#8: API Test Automation with Postman

API Test Automation with Postman course badge
Beth Marshall

The #8 spot belongs to API Test Automation with Postman by Beth Marshall. In recent years, Postman has become the go-to tool for building and testing APIs. You could almost think of it as an IDE for APIs. Many test teams use Postman to automate their API test suites.

Beth walks through everything you need to know about automating API tests with Postman in this course. She covers basic features, mocks, monitors, workspaces, and more. Definitely take this course if you want to take your API testing skills to the next level!

#7: Introduction to Cypress

Intro to Cypress course badge
Gil Tayar

Lucky #7 is Introduction to Cypress by Gil Tayar. Cypress is one of the most popular web testing frameworks these days, even rivaling Selenium WebDriver. With its concise syntax, rich debugging features, and JavaScript-native approach, it’s become the darling end-to-end test framework for frontend developers.

It’s no surprise that Gil’s Cypress course would be in the top ten. In this course, Gil teaches how to set up and run tests in Cypress from scratch. He covers both the Cypress app and the CLI, and he even covers how to do visual testing with Cypress.

#6: Exploring Service APIs through Test Automation

Exploring Services APIs through Test Automation course badge
Amber Race

The sixth most popular TAU course is Exploring Service APIs through Test Automation by Amber Race. API testing is just as important as UI testing, and this course is a great way to start learning what it’s all about. In fact, this is a great course to take before API Test Automation with Postman.

This course was actually the second course we launched on TAU. It’s almost as old as TAU itself! In it, Amber shows how to explore APIs first and then test them using the POISED strategy.

#5: IntelliJ for Test Automation Engineers

IntelliJ for Test Automation Engineers course badge
Corina Pip

Coming in at #5 is IntelliJ for Test Automation Engineers by Corina Pip. Java is one of the most popular languages for test automation, and IntelliJ is arguably the best and most popular Java IDE on the market today. Whether you build frontend apps, backend services, or test automation, you need proper development tools to get the job done.

Corina is a Java pro. In this course, she teaches how to maximize the value you get out of IntelliJ – and specifically for test automation. She walks through all those complicated menus and options you may have ignored otherwise to help you become a highly efficient engineer.

#4: Java Programming

Java Programming course badge
Angie Jones

Our list is winding down! At #4, we have Java Programming by Angie Jones. For the third time, a Java-based course appears on this list. That’s no surprise, as we’ve said before that Java remains a dominant programming language for test automation.

Like the Python Programming course at spot #9, Angie’s course is a programming course: it teaches the fundamentals of the Java language. Angie covers everything from “Hello World” to exceptions, polymorphism, and the Collections Framework. Clocking in at just under six hours, this is also one of the most comprehensive courses in the TAU catalog. Angie is also an official Java Champion, so you know this course is top-notch.

#3: Introduction to JavaScript

Introduction to JavaScript course badge
Mark Thompson

It’s time for the top three! The bronze medal goes to Introduction to JavaScript by Mark Thompson. JavaScript is the language of the Web, so it should be no surprise that it is also a top language for test automation. Popular test frameworks like Cypress, Playwright, and Jest all use JavaScript.

This is the third programming course TAU offers, and also the top one in this ranking! In this course, Mark provides a very accessible onramp to start programming in JavaScript. He covers the rock-solid basics: variables, conditionals, loops, functions, and classes. These concepts apply to all other programming languages, too, so it’s a great course for anyone who is new to coding.

#2: Web Element Locator Strategies

Web Element Locator Strategies course badge
Andrew Knight

I’m partial to the course in second place – Web Element Locator Strategies by me, Andrew Knight! This was the first course I developed for TAU, long before I ever joined Applitools.

In whatever test framework or language you use for UI-based test automation, you need to use locators to find elements on the page. Locators can use IDs, CSS selectors, or XPaths to uniquely identify elements. This course teaches all the tips and tricks to write locators for any page, including the tricky stuff!

#1: Setting a Foundation for Successful Test Automation

Setting a Foundation for Successful Test Automation course badge
Angie Jones

It should come as no surprise that the #1 course on TAU in terms of course completions is Setting a Foundation for Successful Test Automation by Angie Jones. This course was the very first course published to TAU, and it is the first course in almost all the Learning Paths.

Before starting any test automation project, you must set clear goals with a robust strategy that meets your business objectives. Testing strategies must be comprehensive – they include culture, tooling, scaling, and longevity. While test tools and frameworks will come and go, common-sense planning will always be needed. Angie’s course is a timeless classic for teams striving for success with test automation.

What can we learn from these trends?

A few things are apparent from this list of the most popular TAU courses:

  1. Test automation is clearly software development. All three of TAU’s programming language courses – Java, JavaScript, and Python – are in the top ten for course completions. A course on using IntelliJ, a Java IDE, also made the top ten. They prove how vital good development skills are to successful test automation.
  2. API testing is just as important as UI testing. Two of the courses in the top ten focused on API testing.
  3. Principles are more important than tools or frameworks. Courses on strategy, technique, and programming rank higher than courses on specific tools and frameworks.

What other courses are popular?

The post The Top 10 Test Automation University Courses appeared first on AI-Powered End-to-End Testing | Applitools.

]]>