Business Leaders Archives - AI-Powered End-to-End Testing | Applitools
https://app14743.cloudwayssites.com/blog/tag/business-leaders/
Applitools delivers full end-to-end test automation with AI infused at every step.

Buyer’s Checklist for Autonomous Testing in Regulated Environments
https://app14743.cloudwayssites.com/blog/buyers-checklist-autonomous-testing-regulated-industries/
Mon, 17 Nov 2025 20:45:00 +0000
Regulated teams are adopting autonomous testing, but only with the right guardrails. This checklist outlines the core capabilities, governance features, and risk-based controls to look for when evaluating AI-driven testing platforms.

The post Buyer’s Checklist for Autonomous Testing in Regulated Environments appeared first on AI-Powered End-to-End Testing | Applitools.


TL;DR

• Autonomous testing is maturing quickly, but regulated organizations must evaluate platforms through the lens of traceability, auditability, and control.
• Forrester’s Autonomous Testing Platforms Landscape, Q3 2025 shows that the real differentiators now are explainability, risk-based orchestration, and AI governance—not just automation speed.
• Use this checklist to choose a platform that accelerates delivery while protecting oversight.

Download Forrester’s full report for detailed market insights

Rethinking Autonomy for Regulated Teams

With hundreds of tools now promising “AI-driven automation,” sorting true autonomy from clever scripting has become increasingly difficult. This matters even more for regulated teams planning their 2026 quality strategy. Speed is no longer the only concern. Proof, traceability, and controlled execution are now essential.

Forrester’s recent analysis highlights a market shifting from test automation to AI-augmented and agentic systems that generate, maintain, and execute tests under human supervision. The key question for regulated buyers is not whether autonomy will help, but whether the platform provides clear governance around how that autonomy operates.

Use this checklist to evaluate solutions with the guardrails required for safety-critical or compliance-heavy environments.

Core Capabilities Every Autonomous Testing Platform Should Provide

These capabilities form the baseline for operating safely and efficiently in regulated sectors.

Plain-language test authoring and execution
Non-technical reviewers should contribute without adding risk. Natural-language authoring and guardrails make collaboration safe and auditable.

Transparent AI actions
Every generated or changed step must be reviewable. No black-box maintenance. No silent updates.

Evidence management and auditability
Exportable logs, change histories, and evidence packs should support internal and external audits without manual rework.

Role-based control and gated approvals
Automation should accelerate work, but never bypass required compliance workflows.

Adaptive, governed maintenance
Self-healing is useful only when changes are traceable and reversible. Regulated teams need adaptive maintenance under human oversight.

If a platform lacks any of these essentials, it’s not built for environments where documentation and control are mandatory.

Where Advanced Platforms Differentiate

Once the fundamentals are covered, regulated organizations should look at the capabilities that separate mature autonomous solutions from those still catching up.

Intent-based visual and experience validation
Pixel comparison is brittle. Intent-driven validation ensures the interface appears correct, accessible, and compliant across devices and browsers.

Governance dashboards
AI actions, risk coverage, and test triggers should be visible and easy to trace for auditors and managers.

Actionable analytics and reporting
Evidence should turn into insights that support risk management, release approvals, and executive reporting.

Risk-based orchestration
Platforms should prioritize tests based on business criticality, change impact, and historical issues—not just run everything in bulk.
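To make the idea concrete, risk-based orchestration can be sketched as a toy scoring function. This is a hypothetical illustration with made-up weights and field names, not any vendor's actual algorithm:

```python
# Toy risk-based test prioritization: score each test by business
# criticality, whether the code it covers changed, and its historical
# failure rate, then run the highest-risk tests first.

def risk_score(test, changed_files):
    touches_change = any(f in changed_files for f in test["covers"])
    return (
        test["criticality"] * 3          # weight business impact highest
        + (2 if touches_change else 0)   # boost tests hit by this change
        + test["failure_rate"]           # historically flaky areas rank up
    )

def prioritize(tests, changed_files):
    return sorted(tests, key=lambda t: risk_score(t, changed_files), reverse=True)

tests = [
    {"name": "checkout", "criticality": 3, "covers": ["cart.py"], "failure_rate": 0.4},
    {"name": "about_page", "criticality": 1, "covers": ["about.py"], "failure_rate": 0.0},
    {"name": "login", "criticality": 3, "covers": ["auth.py"], "failure_rate": 0.1},
]
order = [t["name"] for t in prioritize(tests, changed_files={"cart.py"})]
print(order)  # ['checkout', 'login', 'about_page']
```

In practice, platforms derive these signals automatically from requirements, version control, and test history rather than hand-coded weights.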

Applying Autonomous Testing in Regulated Workflows

Organizations across healthcare, life sciences, financial services, and other regulated industries are already adopting autonomous testing—but always with governance in place.

In the pharmaceutical sector, EVERSANA INTOUCH takes a hybrid approach, combining Applitools Eyes for Visual AI validation with Applitools Autonomous for intelligent test generation. This end-to-end strategy ensures quality products, supports compliance-ready evidence, reduces maintenance, and provides end-to-end coverage across complex workflows—all while keeping human reviewers in charge. Read the EVERSANA INTOUCH case study.

These hybrid models show how autonomy can increase coverage and speed without loosening control.

Applying the Checklist to Your Evaluation Process

Use this framework when comparing platforms side by side:

  • Map your highest-risk business journeys. Focus on areas tied to compliance, customer safety, or financial impact.
  • Prioritize transparency. Ensure the platform shows why AI takes each action and allows review before changes go live.
  • Assess evidence and governance. Exportable results, audit-ready logs, and approval gates are non-negotiable.
  • Evaluate adaptability. Autonomous maintenance should reduce manual effort but still operate inside defined boundaries.
  • Reassess regularly. The market is moving fast. Capabilities that seem advanced today will become baseline expectations.

Choosing with Confidence

Autonomous testing is reaching maturity, but regulated organizations need more than speed—they need governance, visibility, and trust. Forrester’s research confirms that platforms built with explainability and risk alignment at the center are the ones best suited for compliance-driven teams.

Use Forrester’s analysis and this checklist to guide your next evaluation and choose an autonomous testing solution that accelerates both delivery and confidence. Download the Autonomous Testing Platforms Landscape, Q3 2025 report.

Frequently Asked Questions

What is an autonomous testing solution?

An autonomous testing solution uses AI to create, execute, and maintain tests automatically—continuously improving speed, coverage, and reliability.

Are autonomous testing tools safe for regulated industries?

Yes, as long as the platform provides explainable AI actions, governed maintenance, exportable evidence logs, and strict access controls. These guardrails ensure autonomy operates within compliance requirements.

How does autonomous testing support audit readiness?

Modern platforms capture evidence automatically, record AI-driven changes, and produce exportable logs that simplify internal and external audits. This reduces manual documentation effort while increasing traceability.

Can autonomous testing replace human testers?

No—it complements them. By automating maintenance and execution, it frees QA and engineering teams to focus on strategy, risk, and user experience.

When is a team ready to invest in autonomous testing?

When test maintenance slows releases or expanding coverage requires more effort than resources allow. Teams with established CI/CD pipelines gain the most immediate benefit.

What should regulated organizations look for in autonomous testing tools?

Key capabilities include transparent AI actions, controlled authoring, audit-ready evidence, risk-based test prioritization, and dashboards that show why the AI took specific actions.

A New Chapter for Applitools: CEO Anand Sundaram on Why He Joined
https://app14743.cloudwayssites.com/blog/anand-sundaram-joins-applitools-ceo/
Fri, 03 Oct 2025 14:00:00 +0000
Applitools CEO Anand Sundaram shares why he joined the company, what inspired him about its people and technology, and how Applitools is shaping the future of AI-driven software quality.

The post A New Chapter for Applitools: CEO Anand Sundaram on Why He Joined appeared first on AI-Powered End-to-End Testing | Applitools.


By Anand Sundaram, CEO of Applitools

I’m thrilled to share that I’ve joined Applitools as CEO.

For over a decade, I’ve admired this company for pioneering Visual AI and transforming how teams think about software quality. Applitools Eyes didn’t just make testing faster—it created an entirely new category that fundamentally changed how organizations approach quality at scale. Today, with the Applitools Intelligent Testing Platform and our Autonomous product evolving rapidly, we’re once again reshaping what’s possible.

Why I Joined Applitools

What drew me here is the rare combination of groundbreaking technology and exceptional people. Applitools has built a platform that sits at the heart of modern software delivery, helping teams validate quality at every stage—from design to deployment—and enabling them to ship with confidence.

We’re at another major inflection point. AI is transforming how software is built, tested, and delivered—at unprecedented speed and scale. Code is being generated faster than ever by both humans and machines, and quality can’t become the bottleneck. Applitools is uniquely positioned to help organizations navigate this transformation, delivering applications and services with speed, confidence, and uncompromising quality.

Looking Ahead

Over the next few weeks, I’ll be focused on listening and learning. I want to hear from our employees, customers, and partners—what excites you about Applitools, where you see opportunities, and what bold ideas we should explore together. Your insights will shape our path forward.

This is an incredible moment for Applitools. Together, we’ll build on our momentum, deepen our impact with customers, and define the future of AI-driven software quality.

Thank you to everyone who has already extended such a warm welcome. I’m honored to lead this team and excited about what we’ll accomplish together.

— Anand

Anand Sundaram, Applitools CEO
Anand Sundaram

Chief Executive Officer, Applitools

Anand Sundaram is a seasoned product and technology executive with more than two decades of experience in software quality. He has held multiple senior leadership roles and founded three startups, including RSW Software, which was acquired by Teradyne and became the foundation of the Oracle Application Testing Suite. Read more.

Bridging the Gap: Why Businesses Thrive with Hybrid Test Automation
https://app14743.cloudwayssites.com/blog/scale-faster-with-hybrid-test-automation/
Thu, 10 Apr 2025 10:33:00 +0000
Hybrid test automation—combining coded and no-code tools—is helping teams reduce maintenance, accelerate releases, and scale quality across skill levels. Learn how a balanced strategy leads to faster innovation, stronger collaboration, and smarter resource use.

The post Bridging the Gap: Why Businesses Thrive with Hybrid Test Automation appeared first on AI-Powered End-to-End Testing | Applitools.

Boost revenue with a hybrid test automation strategy

In today’s hyper-competitive environment, efficiency is king. Ensuring quality without slowing down development cycles is a critical priority for organizations looking to stay ahead. Hybrid test automation—combining both coded and no-code tools—has emerged as a game-changer. The smartest organizations are adopting this approach to reduce maintenance, accelerate releases, and empower cross-functional teams.

Applitools customer Eric Terry, Senior Director of Quality Control at EVERSANA INTOUCH, underscored how a hybrid approach to test automation bridges skill gaps, enhances collaboration, and accelerates time-to-market. This article explores why a dual automation strategy isn’t just an IT initiative—it’s a business imperative.

The Business Risks of Choosing Just One Approach

When organizations lean too heavily on either coded or no-code automation, inefficiencies emerge. Coded automation offers flexibility and customization but demands highly skilled engineers, creating bottlenecks. No-code automation empowers non-developers but may lack depth for complex scenarios.

A hybrid strategy aligns technical capabilities with business needs, ensuring that:

  • Routine tasks and UI-driven tests are handled by AI-powered no-code tools like Applitools Autonomous.
  • Complex scenarios requiring deep customization leverage coded automation.
  • Testing scales across diverse skill levels, unlocking greater efficiency.

Faster Releases, Higher Quality: A Competitive Advantage

Accelerating time-to-market while maintaining quality is a strategic advantage. Companies that integrate both coded and no-code automation realize efficiency gains, including:

  • Reduced test maintenance: “We cut test maintenance by 40% by integrating AI-driven no-code automation,” Eric shared.
  • Parallel execution: Running tests simultaneously across environments accelerates feedback loops.
  • Smarter test selection: AI-powered tools identify the most critical tests, reducing regression cycles by up to 70%.

Collaboration as a Business Driver

Siloed workflows kill efficiency. When manual testers, automation engineers, and developers operate in isolation, knowledge gaps and redundancies increase risk.

Successful hybrid test automation programs:

  • Encourage mentorship, where automation engineers guide manual testers.
  • Align automated testing efforts with broader business goals.
  • Leverage collaborative tools like Azure DevOps and Microsoft Teams for transparency.

Cost Savings: The Overlooked Benefit of Hybrid Automation

Cost efficiency isn’t just about reducing headcount; it’s about maximizing team output. Organizations that embrace a hybrid test automation approach realize:

  • Lower hiring costs by enabling manual testers to contribute to automation efforts.
  • Higher productivity by freeing developers from routine scripting.
  • Broader adoption as business teams leverage no-code tools for non-QA applications, such as UI validation.

“Anytime that you can save some time, it has the potential to turn that into revenue,” Eric emphasized.

The No-Code Mindset Shift: A Leadership Imperative

Historically, tech leaders viewed no-code solutions as limited. But AI-driven platforms like Applitools are changing the game, allowing teams to scale automation without specialized expertise.

“I think we’ll start to see the uptick,” Eric predicted. “Tools are getting better, and they’re making automation more accessible than ever.”

See first-hand how Applitools can help your teams bridge skill gaps and scale test automation with a free trial.

Next Steps: Implementing a Hybrid Approach in Your Organization

For leaders looking to integrate both coded and no-code automation, consider these steps:

  1. Assess your skill gaps – Identify where no-code solutions can bridge inefficiencies.
  2. Start small, then scale – Pilot no-code automation for repetitive workflows.
  3. Foster a whole-team quality mindset – Align teams around a shared automation vision.
  4. Leverage AI-powered tools – Reduce maintenance while increasing test accuracy.

Future-Proof Your Testing Strategy

In the words of W. Edwards Deming, “It is not necessary to change. Survival is not mandatory.” Organizations that resist automation evolution risk falling behind. By strategically integrating both coded and no-code automation, businesses position themselves for faster innovation, higher quality, and stronger collaboration.

Hear more of EVERSANA’s story by watching Code or No-Code Tests? Why Top Teams Choose Both.

FAQ: Hybrid test automation—combining coded and no-code tools

How does combining coded and no-code test automation improve business outcomes?

A hybrid test automation strategy reduces bottlenecks, lowers test maintenance, and empowers broader teams to contribute—resulting in faster releases, better product quality, and more efficient use of technical talent.

What are the risks of using only coded or only no-code automation?

Relying solely on one approach can limit scalability and increase costs. Coded automation lacks accessibility for non-developers, while no-code alone may fall short in complex testing scenarios. A blended strategy mitigates both risks.

How can no-code test automation support digital transformation initiatives?

No-code tools allow business and QA teams to automate repetitive tasks without needing engineering support, freeing up developers for high-impact work and accelerating software delivery cycles.

What’s the ROI of a hybrid test automation strategy?

Teams report significant time and cost savings—up to 40% less test maintenance and faster onboarding of non-technical contributors—making hybrid automation a high-ROI initiative for IT and business leaders alike.

How do we start implementing a hybrid automation strategy?

Begin with a skill gap analysis. Use no-code tools like Applitools Autonomous for fast wins, then layer in coded automation where deeper customization is needed. Align automation goals with business KPIs to ensure cross-team adoption.

The Business Value of AI-Powered Testing: Maximizing ROI
https://app14743.cloudwayssites.com/blog/tbusiness-value-of-ai-powered-testing-maximizing-roi/
Mon, 10 Mar 2025 19:35:30 +0000
AI-powered testing delivers real business value by reducing costs, lowering risk, and accelerating software releases. Learn how it maximizes ROI with automation, self-healing tests, and better defect detection. Explore key insights and real-world benefits.

The post The Business Value of AI-Powered Testing: Maximizing ROI appeared first on AI-Powered End-to-End Testing | Applitools.


In today’s fast-paced software landscape, teams must balance speed, quality, and cost—a challenge that traditional test automation often fails to meet. Testing bottlenecks slow down releases, defects slip through to production, and maintenance costs spiral out of control.

This is where AI-powered testing delivers significant business value. By automating test creation, execution, and maintenance, AI helps teams reduce costs, lower risk, and increase software reliability—leading to a clear return on investment (ROI). Let’s explore how AI-driven testing transforms software teams and drives measurable business outcomes.

The Growing Challenge of Software Testing

Modern applications introduce significant testing challenges:

  • More Code, More Problems – AI-assisted coding tools generate more code, requiring robust testing to keep pace.
  • Expanding Device & Browser Matrix – Users expect seamless experiences across devices, browsers, and screen sizes.
  • Limited Testing Resources – Teams often lack the bandwidth to maintain comprehensive test coverage manually.

These realities create a gap between what teams should test and what they can test. AI testing solutions close this gap by increasing coverage, reducing human intervention, and making automated tests more resilient.

The ROI of AI-Powered Testing

Companies that implement AI-powered testing see improvements across four key areas:

1. Faster Release Cycles = Accelerated Time to Market

Traditional testing slows down software development, with teams often spending 30% or more of their time debugging and fixing defects. AI accelerates release cycles by:

  • Automating test creation and execution
  • Reducing manual intervention with self-healing test scripts
  • Eliminating maintenance headaches caused by UI changes

2. Fewer Production Defects = Lower Business Risk

Bugs in production can lead to revenue loss, reputational damage, and compliance risks. AI-powered testing reduces defect leakage by:

  • Catching more UI and functional issues with Visual AI
  • Reducing false positives and negatives in test execution
  • Identifying risks earlier in the development cycle

Try Applitools Autonomous for free and see how AI-driven testing enhances defect detection. Sign Up Now.

3. Reduced Testing Costs = More Efficient Resource Allocation

Hiring, training, and maintaining a robust QA team is costly. AI-powered testing optimizes costs by:

  • Reducing test maintenance efforts by up to 40%
  • Allowing non-technical team members to contribute to testing
  • Increasing test coverage without requiring more human effort

4. Higher-Quality Software = Increased Customer Satisfaction & Revenue

Customers expect flawless digital experiences. AI-powered testing ensures:

  • Fewer production issues that impact user satisfaction
  • Smoother cross-device and cross-browser experiences
  • Increased trust and retention from end users

A better user experience translates to higher customer retention, fewer support tickets, and increased revenue—a direct boost to the bottom line.

Calculating ROI: What’s the Business Impact?

Organizations that implement AI-powered testing can save millions annually by reducing test maintenance, accelerating releases, and minimizing costly defects. With the right tools, teams can quantify:

  • Time savings in test creation, execution, and maintenance
  • Reduction in defect-related costs (fixing bugs post-release is 30 times more expensive than catching them early)
  • Operational efficiency—allowing teams to focus on innovation instead of repetitive testing tasks
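As a back-of-envelope sketch (every input below is hypothetical; substitute your own figures), the maintenance-reduction and defect-cost claims above translate into monthly savings like this:

```python
# Back-of-envelope ROI sketch with made-up inputs. It applies the
# article's two figures: ~40% less test maintenance, and post-release
# fixes costing ~30x more than catching the bug early.

maintenance_hours_per_month = 400      # hypothetical team baseline
hourly_cost = 75.0                     # fully loaded rate, hypothetical
maintenance_reduction = 0.40           # 40% maintenance-reduction claim

early_fix_cost = 200.0                 # hypothetical cost to fix pre-release
bugs_caught_earlier_per_month = 10     # hypothetical shift-left effect
late_fix_multiplier = 30               # "30 times more expensive" claim

maintenance_savings = maintenance_hours_per_month * hourly_cost * maintenance_reduction
defect_savings = bugs_caught_earlier_per_month * early_fix_cost * (late_fix_multiplier - 1)

monthly_savings = maintenance_savings + defect_savings
print(f"${monthly_savings:,.0f}/month")  # $70,000/month with these inputs
```

The point is not the specific total but the structure: maintenance savings scale with team size, while defect savings scale with how many bugs move from post-release to pre-release.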

Want to calculate your team’s ROI with AI-powered testing? Talk to our experts and see the impact on your bottom line.

The Future of Testing is AI-Driven

AI-powered testing isn’t just a technical advantage—it’s a business imperative. By improving efficiency, reducing risk, and lowering costs, AI helps teams deliver high-quality software faster while maximizing ROI.

Missed the full discussion? Watch the complete webinar replay for a deeper dive into the ROI of AI-driven testing. Watch now.

Continuous visual regression testing to enable regulatory compliance in the healthcare sector
https://app14743.cloudwayssites.com/blog/visual-regression-regulatory-compliance/
Tue, 07 Jul 2020 05:23:59 +0000
The fourth industrial revolution – the digital revolution – has strong requirements for companies operating under strict business regulations. Particularly, in the healthcare sector, companies must spend great efforts to...

The post Continuous visual regression testing to enable regulatory compliance in the healthcare sector appeared first on AI-Powered End-to-End Testing | Applitools.


The fourth industrial revolution – the digital revolution – places strong demands on companies operating under strict business regulations.

In the healthcare sector particularly, companies must work hard to survive “digital Darwinism”. The healthcare market is highly competitive and strongly regulated at the same time. Healthcare, pharmaceutical, and medical device companies invent new medicines and other products in a highly volatile business landscape. On one hand, they must be as agile as possible to meet time-to-market demands; on the other hand, they face strict compliance regulations such as FDA and HIPAA requirements.

The question is: how can healthcare companies deliver new products and services at high speed while meeting their regulatory compliance obligations?

In this article, you will find the answer based on an example of the FDA requirement for “Black Box Warnings”.

Picture 1

The FDA (Food and Drug Administration) prescribes warnings and precautions on the package insert for certain prescription drugs. As these warnings are framed in a “Black Box” to catch the eye of the reader, they are also referred to as “Black Box Warnings” and can be found at the beginning of the package insert (see Picture 1) or in the drug description in the online store (see Picture 2).

Picture 2

If the FDA finds serious violations due to missing or unreadable Black Box Warnings, it can take legal action against a company.

Let me present an example showing how automated visual regression tests for websites and PDF documents can be implemented to automatically verify regulatory requirements.

The FDA’s General Principles of Software Validation recommends using visual regression testing for images and documents. The FDA makes this recommendation for companies using a software development lifecycle (SDLC) approach that integrates risk management strategies with validation and verification activities, including defect prevention practices.

What is visual regression testing?

Visual regression testing expands on regression testing, where a program, or parts of it, is repeatedly tested after each modification. To also catch unintentional changes in design elements, positioning, and colors, QA teams use visual regression testing as part of their testing strategy and overall quality assurance.

Visual regression testing can discover visual defects, obvious or not, due to modifications to the software. In practice, a baseline of original or reference images is stored. This “source of truth” can be compared after each program modification against a collection of “new” screenshots of a user interface. Each difference against the baseline will be highlighted and can serve as an alert.

Additionally, visual regression testing doesn’t just compare the baseline with the current state. It can compare the baseline against any historical state at the UI level, independent of HTML, CSS, and JavaScript differences. It can also highlight differences between documents, like PDFs, in the layout or the content itself. For example, a missing Black Box Warning in a package insert for a drug would be marked as a difference.
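The baseline idea can be sketched with a deliberately naive pixel-equality check. Real Visual AI tools compare images far more intelligently than this; the sketch only illustrates the “source of truth vs. new screenshot” mechanic:

```python
# Toy visual-regression check: compare a "baseline" screenshot to a new
# capture and report which pixels changed. Images are modeled as lists
# of rows of pixel values.

def diff_regions(baseline, current):
    """Return (row, col) coordinates where the two images differ."""
    if len(baseline) != len(current):
        raise ValueError("image dimensions must match")
    diffs = []
    for r, (b_row, c_row) in enumerate(zip(baseline, current)):
        for c, (b_px, c_px) in enumerate(zip(b_row, c_row)):
            if b_px != c_px:
                diffs.append((r, c))
    return diffs

baseline = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
current = [
    [0, 0, 0],
    [0, 1, 1],   # one pixel changed after a "program modification"
    [0, 0, 0],
]
print(diff_regions(baseline, current))  # [(1, 2)]
```

Raw pixel equality like this is exactly why naive comparison is brittle: a one-pixel anti-aliasing shift fails the test. AI-based comparison exists to flag only the differences a human would consider meaningful.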

How to best use visual regression testing?

Many visual testing tools, such as Selenium-based frameworks, simply mark differences between screenshots or PDF documents as passed or failed. With visual regression testing, you can choose which differences to accept across multiple browsers and devices. For example, an image rendered at a different resolution on a web page after a program change may prevent a user from completing an action because it overlaps FDA-required text (see Picture 3). A competitor could report this to the FDA, and a warning letter would be sent to the legal department of the healthcare company.

Picture 3

Visual regression testing tools and libraries like Wraith, Gemini, and other Selenium-related frameworks demand deep knowledge from testers and significant installation and setup effort. The Applitools AI platform, which requires no installation, setup, or coding knowledge, can be a great alternative for getting started with automated visual regression testing.

The Applitools Eyes cross-environment testing feature allows you to test your application on multiple platforms using a single, common baseline. The match level (Strict, Layout, Content, Exact) determines how Eyes compares the checkpoint image with the baseline image.

Additionally, the Applitools PDF Tool allows you to easily run visual UI tests on a collection of PDF files by placing them inside a directory (see Picture 4). It runs as a standalone jar file and can be invoked as a process by any programming language and in your continuous delivery pipeline.

Summary

If you want to continuously deliver new products and services within a software development lifecycle, at high speed and in line with regulatory compliance requirements, you should keep an eye on visual regression testing. It can automatically compare hundreds or thousands of artifacts like images and PDF documents at great speed. It therefore provides long-term cost efficiency by avoiding extensive manual tests, especially when dealing with frequent changes on a UI or document basis.

What Business and Technology Leaders Should Know about the Quality of their Web and Mobile Apps in this Challenging Time
https://app14743.cloudwayssites.com/blog/business-technology-leaders-app-quality/
Wed, 06 May 2020 19:44:35 +0000
We live in a day and age where web traffic and mobile app usage are at an all-time high. Verizon’s CEO recently reported that “In a week-over-week comparison, streaming...

The post What Business and Technology Leaders Should Know about the Quality of their Web and Mobile Apps in this Challenging Time appeared first on AI-Powered End-to-End Testing | Applitools.


We live in a day and age where web traffic and mobile app usage are at an all-time high. Verizon’s CEO recently reported that “In a week-over-week comparison, streaming demand increased 12%, web traffic climbed 20%, virtual private networks, or VPN, jumped 30% and gaming skyrocketed 75%.”

According to the 2019 State of Automated Visual Testing Report, 400 leading software companies reported that today’s typical “Digitally Transformed” company now boasts 28 unique web and mobile applications, each with 98 pages or screens per app, each in five different screen sizes, and in six different languages. This amounts to about 90,000 page and screen variations accessible every day by customers of a typical company. Visual bugs that are common in such a variety of different screen variations, cost a typical company more than $2m a year in R&D related costs.

One of the most important things to remember is that the visual appearance of a company’s website or mobile app directly reflects on brand recognition. So how can organizations make sure their brand reputation remains impeccable and makes them stay ahead of the competition?

Photo by Jaelynn Castillo on Unsplash

Some say it takes only 50 milliseconds for users to form an opinion about a website or an app. Within this very small amount of time, people determine whether they like your site or not, whether they’ll stay or leave. And, as the saying goes “you get no second chance to make a good first impression.” So you need to make it look perfect on any device, any browser, and any screen size, from the first glance to the last. Any failure to do so can cause your customer to move to your competitor in a heartbeat. For more interesting stats about user experience, see https://www.sweor.com/firstimpressions.

These are the facts that should keep every one of us that has a “Digitally Transformed” business awake at night, looking for a solution as a top priority.

But is there a readily available solution to the above challenge?

The problem is that apps, websites and smart devices have proliferated to the point where any attempt by humans to manage visual and functional quality with the necessary timing and coverage is impossible. The number of screens and page variations is only expected to increase, and the software release cycles are only expected to become faster and faster to support Agile and CI/CD software development life cycles (SDLC). 

The only way to cope with this enormous problem in an automated way is by using Artificial Intelligence (AI).

According to Gartner’s Critical Capabilities for Software Test Automation (December 17, 2019), “61% of [its 2019 Software Quality Tools and Practices Survey] respondents said that AI/ML features would be very valuable in software testing tools. Improved defect detection (48%), reduction in test maintenance cost (42%), and improved test coverage (41%) were seen as the top benefits expected from incorporating AI/ML into test automation (multiple answers were allowed).”

Another Gartner research report, Gartner’s Innovation Insight for AI-Augmented Development (May 31, 2019) published by Mark Driver, Van Baker and Thomas Murphy recommends that “application leaders should embrace AI-augmented development now, or risk falling further behind digital leaders.”

In the specific situation I have described, a specific type of AI is needed: Visual AI.

Visual AI is composed of various AI algorithms that mimic the human eye and brain. It can do, in a matter of minutes, the work that tens of people would do in weeks, while integrating with the entire app development toolchain and responding in real time under the most demanding timing constraints of CI/CD.

As a final note: as the co-founder and CEO of the company that invented Visual AI and serves more than 400 enterprise customers, many of them in the Fortune 500, I can tell you that this kind of technology is quickly becoming top of mind for business and technology leaders. If you’re looking to disrupt your market and beat your competition through innovative software development and delivery practices, you must add Visual AI to your secret sauce to lead your business to prosperity. That was true yesterday, and it is many times more important in this challenging time!

Gil Sever is CEO and Co-Founder at Applitools

Cover Photo by Tomas Yates on Unsplash

The post What Business and Technology Leaders Should Know about the Quality of their Web and Mobile Apps in this Challenging Time appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/business-technology-leaders-app-quality/feed/ 0
How To Modernize Your Functional Testing https://app14743.cloudwayssites.com/blog/modern-functional-testing/ https://app14743.cloudwayssites.com/blog/modern-functional-testing/#respond Fri, 25 Oct 2019 00:28:30 +0000 https://app14743.cloudwayssites.com/blog/?p=6380 The first chapter compares modern functional testing with Visual AI against legacy functional testing with coded assertions of application output. Raja states that Visual AI allows for modern functional testing while using an existing functional test tool that relies on coded assertions results in lost productivity.

The post How To Modernize Your Functional Testing appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

I’m taking Raja Rao’s course, Modern Functional Testing with Visual AI, on Test Automation University. This course challenged my perspective on software testing. I plan to summarize a chapter of Raja’s course each week to share what I have learned.

The first chapter compares modern functional testing with Visual AI against legacy functional testing with coded assertions of application output. Raja states that Visual AI allows for modern functional testing while using an existing functional test tool that relies on coded assertions results in lost productivity.

To back up his claim, Raja shows us a typical web app driven to a specific behavior. In this case, it’s the log-in page of a web app, where the user hasn’t entered a username or password, yet has clicked the “Sign In” button, and the app responds:

“Please enter username and password”

[Screenshot: the login page showing the error message]

From here, an enterprising test engineer will look at the page above and say:

“Okay, what can I test?”

There’s the obvious functional response. There are other things to check, like:

  • The existence of the logo at top
  • The page title “Login Form”
  • The title and icons of the “Username” and “Password” text boxes
  • The existence of the “Sign in” button and the “Remember Me” checkbox – with the correct filler text
  • The Twitter and Facebook icons below

You can write code to validate that all these exist. And, if you’re a test automation engineer focused on functional test, you probably do this today.

Raja shows you what this code looks like:

[Screenshot: legacy functional test code for the login page]

There are 18 total lines of code. A single line of navigation code goes to this page. One line directs the browser to click the “Sign In” button. The remaining 16 lines of code assert that all the elements exist on the page. Those 16 lines include 14 identifiers (IDs, names, XPaths, etc.) that can change, and 7 references to hard-coded text.

You might be aware that the test code can vary from app version to app version, as each release can change some or all of these identifiers.

Why do identifiers change? Let’s describe several reasons:

  1. Identifiers might change, while the underlying content does not.
  2. Some identifiers have different structures, such as additional data that was not previously tested.
  3. Path-based identifiers depend on the relative location on a page, which can change from version to version.
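The brittleness of locator-based assertions can be sketched in a few lines of plain Python. This is a conceptual illustration, not the course's actual Selenium code, and the element IDs below are hypothetical:

```python
# Model a page as the set of element identifiers it exposes, and a test
# as a list of locators that must all be found on that page.

def find_missing(page, locators):
    """Return the locators that could NOT be found on the page."""
    return [loc for loc in locators if loc not in page]

# Version 1 of the login page.
v1_page = {"logo", "login-title", "username", "password", "sign-in", "remember-me"}
locators = ["logo", "login-title", "username", "password", "sign-in", "remember-me"]

assert find_missing(v1_page, locators) == []  # all 6 assertions pass

# Version 2 renames two IDs during a refactor. The page looks identical
# to users, but the locator-based test now reports two "failures".
v2_page = {"logo", "login-title", "user-name", "pass-word", "sign-in", "remember-me"}
missing = find_missing(v2_page, locators)
print(missing)  # ['username', 'password']
```

Nothing about the user experience changed between the two versions, yet the test suite fails and must be maintained. That is the false-positive cost described above.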

So, in some cases, existing test code misses test cases. In others, existing test code generates an error even when no error exists.

If you’re a test engineer, you know that you have to maintain your tests every release. And that means you maintain all these assertions.

Implications For Testing

Let’s walk through all the tests for just this page. We covered the negative condition for hitting “Sign In” without entering a username or password. Next, we must verify the error messages and other elements if one, or the other, field has data (the response messages may differ). Also, we need to handle the error condition where the username or password is incorrect.

We also have to worry about all the cases where a valid login moves to a correct next page.

Okay – lots of tests that need to be manually created and validated. And, then, there are a bunch of different browsers and operating systems. And, the apps can run on mobile browsers and mobile devices. Each of these different instances can introduce unexpected behavior.

What are the implications of all these tests?

[Screenshot: the testing workload implied by all these cases]

Raja points out the key implication: workload. Every team besides QA is working to make the app better – which means changes. A/B testing, new ideas – applications need to change. And every version of the app means potential changes to identifiers – meaning that tests change and need to be re-validated. As a result, QA ends up with tons of work to validate apps. Or QA becomes the bottleneck to all the change that everyone else wants to add to the app.

In fact, every other team can design and spec its changes effectively, but given all the platforms QA must validate, the QA team ends up wanting to hold off changes. And that’s a business problem.

Visual Validation with Visual AI – A Better Way

What would happen if QA had a better way of validating the output – not just line by line and assertion by assertion? What if QA could take a snapshot of the entire page after an action and compare that with the previous instance?

QA Engineers have desired visual validation since browser-based applications could run easily on multiple platforms. And, using Applitools, Raja demonstrates why visual validation tools appeal so much to the engineering team.

In this screen, Raja shows that the 18 lines of code are down to five, and the 16 lines of validation code are down to three. The three lines of validation code read:

https://gist.github.com/batmi02/b5174f538e13e3226dba7fac61fc2afc

So, we have a call to start the capture session, a capture, and a call to close the session. None of this code refers to a locator on the page.

Crazy as it may seem, visual validation code requires no identifiers, no Xpath, no names. Two pages with different build structures but identical visual behavior are the same to Applitools.

From a coding perspective, test code becomes simple to write:

  1. Drive behavior (with your favorite test runner)
  2. Take a screenshot

You can open a capture session and capture multiple images. Each is treated as a unique image for validation purposes within that capture session.

Once you have an image of a page, it becomes the baseline.  The Applitools service compares each subsequently captured image to the baseline. Any differences get highlighted, and you, as the tester, identify the differences that matter as either bugs or new features.
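Conceptually, the baseline workflow can be sketched as below. Note this is a naive pixel diff, purely for illustration; the actual Applitools matching uses AI to report only the differences users would perceive, rather than comparing raw pixels:

```python
# Toy illustration of baseline-vs-checkpoint comparison. Screenshots are
# modeled as 2D grids of pixel values.

def diff_positions(baseline, checkpoint):
    """Return the (row, col) positions where the checkpoint differs."""
    return [
        (r, c)
        for r, row in enumerate(baseline)
        for c, pixel in enumerate(row)
        if checkpoint[r][c] != pixel
    ]

baseline   = [[0, 0, 1],
              [0, 1, 1]]
checkpoint = [[0, 0, 1],
              [0, 1, 0]]   # one pixel changed in the new build

print(diff_positions(baseline, checkpoint))  # [(1, 2)]
```

Each flagged region is then presented to the tester, who either accepts it as the new baseline or rejects it as a bug.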

Handling Code Changes With Legacy Functional Test

The big benefit of visual validation with Visual AI comes when you compare a new version of the app, with new features, against the old one.

When Raja takes us back to his example page, he now shows a new version of the login page which has several differences – real improvements you might look to validate with your test code.

[Screenshot: the new version of the login page]

And, here, there are bugs with the new code. But, does your existing test code capture all the changes?

Let’s go through them all:

  1. Your existing test misses the broken logo (-1). The logo at the top no longer links to the proper icon file. Did you check for the logo? If you checked to see that the reference icon file was identical, your code misses the fact that the response file is a broken image.
  2. Your existing test misses the text overlap of the alert and the “Login Form” text (-1). The error message now overlaps the Login Form page title. You miss this error because the text remains identical, though the relative positions change.
  3. Your existing test catches the new text in the username and password boxes (+2). Your test code correctly identifies that there is new prompt text in the name and password boxes and registers an error. So, your test shows two errors.
  4. Your existing test misses the new feature (-1). The new box with “Terms and Conditions” has no test. It is a new feature, and you need to code a test for the new feature.

So, to summarize, your existing tests catch two changes (the new placeholder text in the username and password fields), miss two bugs (the broken logo and the text-and-alert overlap), and have nothing to say about the new feature. You have to modify or write three new tests.

But wait! There’s more!

[Screenshot: the new login page, highlighting the Twitter and Facebook false positives]

  • Your existing test gives you two false alarms that the Twitter and Facebook links are broken. Those links at the bottom used Xpath locators – which got changed by the new feature. Because the locators changed, these now show up as errors – false positives – that must be fixed to make the test code work again.

Handling Visual Changes with Visual AI

With Visual AI in Applitools, you actually capture all the important changes and correctly identify visual elements that remain unchanged, even if the underlying page structure is different.

[Screenshot: Visual AI results on the new login page]

Visual AI captures:

  • 1 – Change – the broken logo on top
  • 2 – Change – The login form and alert overlap
  • 3 – Change – The fact that the alert text has moved from version to version (the reason for the text overlap)
  • 4 – Change/Change – The changes to the Username and Password box text
  • 5 – Change – the new feature of the Terms and Conditions text
  • 6 – No Change – Twitter and Facebook logos remain unmoved (no false positives)

So, note that Visual AI captures changes in the visual elements. All changes get captured and identified. There is no need to look at the test code afterward and ask, “what did we miss?” There is no need to look at the test code and say, “Hey, that was a false positive, we need to change that test.”

Comparing Visual AI and Legacy Functional Code

With Visual AI, you no longer have to look at the screen and determine which code changes to make. You are asked to either accept the change as part of the new baseline or reject the change as an error.

How powerful is that capability?

Well, Raja makes a comparison of the work an engineer puts in to do validation using legacy functional testing and functional testing with Visual AI.

[Screenshot: effort comparison, legacy functional testing vs. Visual AI]

With legacy functional testing:

  • The real bug – the broken logo – can only be uncovered by manual testing. Once it is discovered, a tester needs to determine what code will find the broken representation. Typically, this can take 15 minutes (more or less). And you need to inform the developers of the bug.
  • The visual bug – the text overlap – can only be uncovered by manual testing. Once the bug is discovered, the tester needs to determine what code will find the overlap and add the appropriate test (e.g. a CSS check). This could take 15 minutes (more or less) to add the test code. And, you need to inform the developers of the bug.
  • The intentionally changed placeholder text for Username and Password text boxes need to be recoded, as they are flagged as errors. This could take 10 minutes (more or less).
  • The new feature can only be identified by manual validation or by the developer. This test needs to be added (perhaps 5 minutes of coding). You may want to ask the developers about a better way to find out about the new features.
  • The false positive errors around the Twitter and Facebook logos need to be resolved. The Xpath code needs to be inspected and updated. This could take 15 minutes (more or less).

In summary, you could spend 60+ minutes, or 3,600+ seconds, for all this work.

In contrast, automated visual validation with Visual AI does the following:

  • You find the broken logo by running visual validation. No additional code or manual work needed. Incremental time: zero seconds. Alert developers of the error: 2 seconds. Alerting in Applitools can send a bug notification to developers when rejecting a difference.
  • Visual validation uncovers the text overlap and moved alert text. Incremental time: zero seconds. Alert developers of the error: 2 seconds. Alerting in Applitools can send a bug notification to developers when rejecting a difference.
  • Visual validation identifies the new text in the Username and Password text boxes. Visual validation workflow lets you accept the visual change as the new baseline (2 seconds per change – or 4 seconds total)
  • You uncover the new feature with no incremental change or new test code, and you accept the visual change as the new baseline (2 seconds).
  • The Twitter and Facebook logos don’t show up as differences – so you have no work to do (0 seconds)

So, 10 seconds for Visual AI. Versus 3,600 for traditional functional testing. 360X faster.

Let’s Get Real

I would think that a productivity gain of 360X might appear unreasonable. So did Raja. When he went through the real-world examples for writing and maintaining tests, he came up with a more reasonable-looking table.

[Screenshot: real-world time comparison table]

For just a single page, in development and just a single update, Raja concluded that the maintenance costs with Visual AI remain about 1000x better, and the overall test development and maintenance would be about 18x faster. Every subsequent update of the page would be that much faster with Visual AI.

In addition, Visual AI catches all the visual differences without test changes and ignores underlying changes to locators that would cause functional tests to generate false positives. So, the accuracy of Visual AI ends up making Visual AI users much more productive.

Finally, because Visual AI does not depend on locators for visual validation, Visual AI ends up depending only on the action locators – which would need to be maintained for any functional test. So, Visual AI becomes much more stable – again leading to Visual AI users being much more productive.

Raja then looks at a more realistic app page to have you imagine the kind of development and test work you might need to ensure the functional and visual behavior of just this page.

[Screenshot: a more realistic, content-heavy app page]

For a given page, with calculators, data sorting, user information… this is a large amount of real estate that involves display and action. How would you ensure proper behavior, handle errors, and manage updates?

Extend this across your entire app and think about how much more stable and productive you can be with Visual AI.

Chapter 1 Summary

Raja summarizes the chapter by pointing out that visual validation with Visual AI requires only the following:

  1. Take action
  2. Take a screenshot in Visual AI
  3. Compare the screenshot with the baseline in Visual AI

[Screenshot: the three-step visual validation workflow]

That’s it. Visual AI catches all the relevant differences without manual intervention. Using Visual AI avoids the test development work of ensuring that app responses match expectations, and it eliminates the more difficult job of maintaining tests as the app is enhanced.

Next Up – Chapter 2.

For More Information

The post How To Modernize Your Functional Testing appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/modern-functional-testing/feed/ 0
What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) https://app14743.cloudwayssites.com/blog/top-test-teams-sovt/ https://app14743.cloudwayssites.com/blog/top-test-teams-sovt/#respond Wed, 14 Aug 2019 00:14:29 +0000 https://app14743.cloudwayssites.com/blog/?p=6033 Learn about what top test teams are doing to improve coverage and accelerate time-to-market. This is not some fluff piece with vague ideas of what might (or might not) work. Read along, and we promise it will be worth your time.

The post What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

We know that you’re thinking, “Really? Another fluff post about the top people in my profession and what I need to do to be just like them?”

Listen, we hear you, but this is not some fluff piece with vague ideas of what might (or might not) work. Read on, follow along for the next few weeks, and we promise it will be worth your time! If not, write to us directly and tell us what we can do better next time. We will reply!

In case you missed it, on May 27th the 2019 State of Automated Visual Testing was released. Based on independent research sourced from over 350 testing teams around the world, we learned that 12% of you are getting much better results than the other 88%. We’re talking four times more successful as measured by the things you (and your boss, and your boss’s boss, and your boss’s boss’s boss) really care about – test coverage, release velocity, application quality, overall R&D teamwork, and cold hard cash! This is not our opinion. It’s not subjective. It’s objective data and information. Data and information that came from you and your peers.

In other words, we’re not asking you to take our word for it; we’re asking you to take your word for it. Take a moment to download the full report. We will be here when you get back!

Click here to download the report.

So what separates the top 12% from the other 88%?

Digital Transformation. Two words that have been baked into our world over the past 20 years. To such an extent that it may be fair to call it fluff? Fluff or not, IDC forecasts that worldwide spending on technologies and services that enable digital transformation will reach $1.97 trillion in 2022, per the (IDC) Worldwide Semiannual Digital Transformation Spending Guide.

(If you think a trillion is an abstract concept, check this out.)

Dang. That’s a lot of fluff! When your bosses’ bosses’ boss is spending that kind of cash, it’s always worth paying attention. It could be good for your career. As it turns out, 12% of the world’s testing teams did pay attention, and it’s their approach to managing the challenges of digital transformation that have set them apart.

The Testing World’s Digital Divide – Digital Transformation Quantified.

You can read The Enterprisers Project’s CIO-level take on Digital Transformation (warning: fluff alert) here, or you can quantify it for yourself with some simple math and see how you compare to other R&D and testing teams around the world. Got that calculator ready? Here we go…

  • How many applications do you have in production today? (Don’t forget those native mobile apps, they really add up).
  • How many pages or screens do you have in production on average for each of these applications? (Single Page Applications can be tricky we know, but give it your best guess).
  • How many viewport breakpoints do you support? (The market average is six if you are not sure.)
  • How many human languages do you support? (We’re talking about localization here – English, German, Spanish, Chinese, Japanese – not coding languages in case you’re wondering).

Now multiply those four numbers together. Congrats! You have just quantified the digital footprint of your business. People can write about digital transformation all they want, but you’ve just transformed all that fluff into something quite real. It’s probably a big number. Over 90,000 for a typical company and over 624,000 for the largest 30% of companies in the world. Here’s the kicker. You and your R&D team are responsible for managing the visual and functional quality of every single one of those pages and screens.
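The four-step math above can be worked through with the report's typical-company figures (28 apps and 98 pages per app, per the report) and the market averages this post mentions (six viewport breakpoints, six languages). Your own numbers will differ:

```python
# Quantifying the digital footprint of a typical "Digitally Transformed"
# company, using the report's market-average figures.
apps = 28            # applications in production
pages_per_app = 98   # pages or screens per application
viewports = 6        # supported viewport breakpoints (market average)
languages = 6        # supported human languages

footprint = apps * pages_per_app * viewports * languages
print(footprint)  # 98784, i.e. the "over 90,000" figure cited above
```

Every one of those variations is a page or screen your R&D team is on the hook for.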

We categorized the number of screens in production by industry, based on your responses. Here’s what we discovered.

When functional test automation first emerged way back in 1989 with the launch of Mercury Interactive XRunner, it was a much different time. Browsers didn’t even exist. Since then, the browser wars have come and gone and standards are now in place. Applications have grown far more complex with native mobile, dynamic content, single page applications (SPA) and responsive design now an everyday reality. Digital transformation for any size company is officially past tense. We’re not transforming, we’re transformed. And now you have to deal with it, but how?

What Defines a Top 12% Global Testing Team?

Top 12% teams have overcome the massive challenge posed by digital transformation. Like any technical challenge we have ever faced, it started with someone on the team who felt the pain and set out to solve for it. And when they did, good things happened.

Test coverage increased by 60%.

Release velocity became 2.8x faster (even though coverage increased 60%).

Visual and functional quality improved 3x, with far fewer escaped bugs (even though they release 2.8x faster).

R&D teams are 4x more satisfied with their visual and functional quality outcomes (yeah, I’d be more satisfied too with those kinds of results!).

All of this despite the fact that this 12% of testing teams were managing applications 2.2x larger than the other 88%. In all likelihood, they felt the pain before most of us, and have now led the way for the rest of us. We just need to follow.

Goodbye, But Only For A Week or So.

Today successful continuous management of application visual quality creates competitive advantage. Business leaders know this and are paying attention. As a result, testers are in a better position than ever to be heroes respected by their R&D teams for driving huge value for the companies they work for.

Over the next 10-12 weeks, Patrick and I will release a series of blogs that explain how these 12% of testing teams reinvented their testing approach to deal successfully with the challenges of Digital Transformation. We will get into the details as promised.

Or, if you simply can’t wait and want to ignore my spoiler alert, you can listen on-demand to the webinar Patrick and I hosted together entitled Wrong Tool, Wrong Time: Re-Thinking Test Automation, read the blog and view the slides here, or just reach out to us for help with your testing approach.

Until next time.

Yours Truly,

James the #NotSoEvilMarketingGuy

Patrick the #GuyWhoActuallyHasDoneTestingFor20Years

Find Out More

Check out Applitools blogs about digital transformation.

Read about how Microsoft incorporates visual testing to their DevOps delivery.

Request a demo of Applitools Eyes.

Sign up for a free Applitools account.

 

The original version of this blog post previously ran on devops.com

 

The post What The World’s Top 12% Testing Teams Are Doing (That You Can Do, Too) appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/top-test-teams-sovt/feed/ 0
Is AI Required for Successful Digital Transformation? https://app14743.cloudwayssites.com/blog/ai-digital-transformation/ https://app14743.cloudwayssites.com/blog/ai-digital-transformation/#respond Thu, 27 Jun 2019 00:33:13 +0000 https://app14743.cloudwayssites.com/blog/?p=5737 In today’s visual economy, customers are increasingly interacting with companies through a screen. For this reason, digital transformations are all about delivering quality at high velocity along with stability. To do this, leading firms of all sizes and verticals are embracing AI as a critical layer in their tech stack.

The post Is AI Required for Successful Digital Transformation? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Girl talking to robot in store

Gartner recently published a report that said: “application leaders should embrace AI-augmented development now, or risk falling further behind digital leaders.”* In today’s visual economy, customers are increasingly interacting with companies through a screen. For this reason, digital transformations are all about delivering quality at high velocity along with stability. To do this, leading firms of all sizes and verticals are embracing AI as a critical layer in their tech stack.

Why does AI matter to Digital Transformation?

Because it can help automate the error-prone and mundane tasks that often pull humans away from higher value tasks for an organization. The Gartner report states, “while early AI-augmented development addresses mundane, ‘plumbing’ tasks, this is exactly where application leaders can benefit from further automation and cost reduction.”*

However, we know that simply using AI for its own sake won’t get you very far. While these types of technological advancements can certainly help increase your productivity and the velocity of your software development pipeline, the areas where you get the most proverbial “bang for your buck” are processes that are either 1) entirely manual or 2) extremely frequent (or sometimes both).

According to Gartner, “today’s statistical techniques are inadequate for optimizing testing, especially when changes to applications are frequent and where large software assets already exist that use a wide variety of microservices. Development and quality assurance (QA) in organizations cannot keep pace with the rate of innovation needed, due to: a heavy reliance on manual testing, skills gaps, insufficient resources, and an inability to scale technologies and processes. AI and ML are particularly suited to support the complex test automation required for back-end services within a mesh app and service architecture.”* For more information from Gartner regarding microservices, visit 4 Steps to Design Microservices for Agile Architecture.**

__________________________________________________________

Gartner’s ‘Innovation Insight for AI-Augmented Development’ report is available to subscribers: (https://www.gartner.com/doc/3933974).

__________________________________________________________

Why is Digital Transformation difficult?

Consider the task of visually inspecting the user interface of an application or webpage. Oftentimes, this is still a job reserved for human eyes only. Someone — or rather, a team of people — needs to physically sit down and scan thousands of webpages across multiple browser types and devices, looking for inconsistencies and errors.

According to our 2019 State of Automated Visual Testing Report, today’s typical “Digitally Transformed” company now boasts 28 unique web and mobile applications, each with 98 pages or screens per app, viewable in five different screen sizes, and read in six different human languages. This amounts to around 90,000 screen variations accessible every day by customers. Here’s how that breaks down by industry:

[Chart: number of screens in production, by industry]
Source: 2019 State of Automated Visual Testing.

90,000 screens. That’s a lot.

In fact, if you laid all those screens end to end, they’d extend over 17 miles (27 kilometers).

Imagine walking that distance, carefully checking a screen every step of the way. How long would it take you? Would you like to do that with every test run?

Source: @7seth on Unsplash.

Probably not.

Because of this, we’ve found that a typical company incurs between $1.8M and $6.9M annually due to visual bugs that have escaped into production. Ouch!

Source: @jpvalery on Unsplash.

What this data tells us is that it is not reasonable to expect humans to find the small inconsistencies in a webpage, especially one they are extremely familiar with. Just as you will “gloss over” a misspelling in a sentence when you know what it is supposed to say, it is very natural to “gloss over” a visual error when you know what color something is supposed to be, where the button is supposed to sit, or how the columns are supposed to align. But your customers, who didn’t spend countless hours working on the website or application, will most certainly notice the mistakes that you’ve missed.

In addition to the issue of familiarity, there is also the challenge of scalability. Reviewing 90,000 variations of how a screen looks is simply not sustainable in a time where the velocity of software release cycles is only getting higher.

So, realistically, what can be done?

Visual AI to the Rescue

We created Visual AI technology that emulates the human eye and brain with computer vision algorithms. These algorithms report only the visual differences that are perceptible to users. Also, they ignore insignificant rendering, size, and position differences that users won’t notice.

We’ve tuned these algorithms to instantly validate entire application pages, detect layout issues, and process the most complex and dynamic pages. And the best part? There’s no calibration, training, tweaking or thresholding required. Through years of engineering, we’ve gotten it to work with 99.9999% accuracy. That’s one false positive out of one million screens tested.

Source: @artmatters on Unsplash.

Using this Visual AI technology, our AI-powered visual testing platform, Applitools Eyes, helps increase overall app UI test coverage by 60 percent and improves the overall visual quality of apps by a factor of three. With our Ultrafast Grid, you gain the power of our low-code, highly available, easy-to-use visual testing platform to support your entire digital transformation team: product owners, developers, test automation engineers, manual testers, DevOps, and marketing. Reach out or sign up for an Applitools demo today!

 

*Gartner, Innovation Insight for AI-Augmented Development; Analyst(s): Mark Driver, Van Baker, Thomas Murphy, Published: 31 May 2019

**Smarter with Gartner, 4 Steps to Design Microservices for Agile Architecture, 7 August 2018, https://www.gartner.com/smarterwithgartner/4-steps-to-design-microservices-for-agile-architecture/

 

The post Is AI Required for Successful Digital Transformation? appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/ai-digital-transformation/feed/ 0
Using Genymotion, Appium & Applitools to visually test Android apps https://app14743.cloudwayssites.com/blog/genymotion-appium-android/ https://app14743.cloudwayssites.com/blog/genymotion-appium-android/#respond Thu, 13 Jun 2019 05:29:12 +0000 https://app14743.cloudwayssites.com/blog/?p=5553 How to use Genymotion, Appium, and Applitools to do visual UI testing of native mobile Android applications.

The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

If you ship mobile applications, you need to test on Android. Android devices dominate the smartphone market. Genymotion allows you to run your Appium tests in parallel on a range of virtual Android devices. Applitools lets you rapidly validate how each device renders each Appium test. Together, Genymotion and Applitools give you coverage with speed for your functional and visual tests.

As a QA automation professional, you know you need to test on Android.  Then you look at the market and realize just how fragmented the market is.

How fragmented is Android?

Fragmented is an understatement. A study by OpenSignal measured over 24,000 unique models of Android devices in use, running nine different Android OS versions across over three dozen different screen sizes, manufactured by 1,294 distinct device vendors. That is fragmentation. These numbers are mind-boggling, so here’s a chart to explain. Each box represents the usage share of one phone model.

Plenty of other studies confirm this. There are 19 major vendors of Android devices. Leading manufacturers include Samsung, Huawei, OnePlus, Xiaomi, and Google. The market share of the leading Android device is less than 2% of the market, and the market share of the top 10 devices combined is just 11%. The most popular Android version accounts for only 31% of the market.

We would all like to think that Android devices behave exactly the same way.  But, no one knows for sure without testing. If you check through the Google Issue Tracker, you’ll find a range of issues that end up as platform-specific.

Implications for Android Test Coverage

So, if every Android device might behave differently, exactly how should you test your Android apps? One way is to run the test functionally on each platform and measure behavior in code – that’s costly. Another way is to run functionally on one platform and hope the code works on the others. Functionally, this can tell you that the app works – but you are left vulnerable to device-specific behaviors that may not be obvious without testing.

To visualize the challenge of testing against 24,000 unique platforms, imagine your application has just 10 screens. If you laid those ten screens, rendered on each of the 24,000 unique devices, end-to-end, they would stretch over 30 miles. That’s longer than the distance of a marathon!
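The marathon claim above is easy to sanity-check with a few lines of arithmetic. Here is a quick sketch in Python; the 8-inch device length is our own assumption (roughly a large phone, bezel included), not a figure from the OpenSignal study:

```python
# Back-of-the-envelope check of the "longer than a marathon" claim.
# DEVICE_LENGTH_IN is an assumed average device length; the 24,000-device
# count comes from the OpenSignal study cited above.
DEVICES = 24_000
SCREENS_PER_APP = 10
DEVICE_LENGTH_IN = 8  # assumption: a large phone, bezel included

total_inches = DEVICES * SCREENS_PER_APP * DEVICE_LENGTH_IN
miles = total_inches / 12 / 5280  # inches -> feet -> miles
print(f"{miles:.1f} miles")  # 30.3 miles, past the 26.2-mile marathon mark
```

A smaller assumed device length still lands well past a 10K, which is the point: no one is manually reviewing that many screens per release.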

Could you imagine manually checking a marathon’s worth of screens with every release?

I can’t run a marathon, much less run one while examining thousands of screens. Thankfully, there’s a better way, which I’ll explain in this post: using Genymotion, Appium, and Applitools.

What is Genymotion?

Genymotion is the industry-leading provider of cloud-based Android emulation and virtual mobile infrastructure solutions. Genymotion frees you from having to build your own Android device farm.

Once you integrate your Appium tests with Genymotion Cloud, you can run them in parallel across many Android devices at once, to detect bugs as soon as possible and spend less time on test runs. That’s powerful.

With Genymotion Cloud, you can choose to test against just the most popular Android device/OS combinations. Or, you can test the combinations for a specific platform vendor in detail. Genymotion gives you the flexibility to run whatever combination of Androids you need.


Why use Genymotion Cloud & Applitools?

Genymotion Cloud can run your Android functional tests across multiple platforms. However, functional tests catch only a subset of the device and OS version issues you might encounter with your application. In addition to functional issues, you can run into visual issues that affect how your app looks as well as how it runs. How do you run visual UI tests with Genymotion Cloud? Applitools.

Applitools provides AI-powered visual testing of applications and allows you to test cross-platform easily to identify visual bugs. At best, visual regressions are simply a distraction to your customers. At worst, visual errors block your customers from completing transactions. Visual errors have real costs – and without visual testing, they often don’t appear until a user encounters them in the field.

Here’s one example of what I’m talking about. This messed-up layout blocked Instagram from making any money on this ad, and probably led to an upset customer and engineering VP. All the elements are present, so this screen probably passed functional testing.


You can find plenty of other examples of visual regressions by following #GUIGoneWrong on Twitter.

Applitools uses an AI-powered visual testing engine to highlight issues that customers would identify. Just as importantly, Applitools ignores differences that customers would not notice. If you’ve ever used snapshot testing, you may have stopped because you were tracking down too many false positives. Applitools finds the issues that matter and ignores the ones that don’t.

How to use Genymotion, Appium & Applitools?

Applitools already works with Appium to provide visual testing for your Android OS applications. Now, you can use Applitools and Genymotion to run your visual tests across numerous Android virtual devices.  To sum up:

  1. Write your tests in Appium using the Applitools SDK to capture visual images.
  2. Launch the Genymotion cloud devices via command line.
  3. Your Appium scripts will run visual tests across the Genymotion virtual devices.

That’s the overview. To dive into the details, check out this step-by-step tutorial on using Genymotion, Appium, and Applitools.
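As a rough illustration of step 1, here is what the skeleton of such a test could look like in Python. This is a sketch, not the tutorial’s exact code: the device name, OS version, and APK path are placeholder values, and the Appium and Applitools calls are commented out because they require a running Appium server, a Genymotion Cloud device, and an Applitools API key.

```python
# Minimal sketch of an Appium + Applitools visual test against a
# Genymotion Cloud device. Capability names follow standard Appium
# (UiAutomator2) conventions; the concrete values are placeholders.

def genymotion_capabilities(device_name: str, android_version: str) -> dict:
    """Build Appium desired capabilities for a Genymotion virtual device."""
    return {
        "platformName": "Android",
        "platformVersion": android_version,
        "deviceName": device_name,
        "automationName": "UiAutomator2",
        "app": "/path/to/your/app.apk",  # placeholder APK path
    }

caps = genymotion_capabilities("Google Pixel 3", "9.0")

# With the Appium-Python-Client and eyes-selenium packages installed,
# the visual check itself would look roughly like this:
#
#   from appium import webdriver
#   from applitools.selenium import Eyes
#
#   driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
#   eyes = Eyes()
#   eyes.open(driver, "My Android App", "Home screen test")
#   eyes.check_window("Home screen")  # capture and compare a snapshot
#   eyes.close()

print(caps["platformName"], caps["automationName"])
```

Because the capabilities are just a dictionary, you can generate one per Genymotion device and run the same test body against every virtual device in parallel.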

While the tutorial is pretty complete, here’s some additional material you’ll find helpful:

We’ve put together a series of step-by-step tutorial videos using Genymotion, Appium, and Applitools. Here’s the first one:

https://www.youtube.com/watch?v=qXuMglfNEeo

Genymotion, Appium, and Applitools: Better Together

When you run Appium, Applitools, and Genymotion together, you get a huge boost in test productivity. You get to re-use your existing Appium test scripts. Genymotion lets you run all your functional and visual tests in parallel. And, with the accuracy of Applitools AI-powered visual testing, you track down only issues that matter, without the distraction of false positives.

Find Out More

Read more about how to use our products together from this Genymotion blog post.

Visit Applitools at the Appium Conference 2019 in Bengaluru, India.

Sign up for our upcoming webinar on July 9 with Jonathan Lipps: Easy Distributed Visual Testing for Mobile Apps and Sites.

Find out more about Genymotion Cloud, and sign up for a free account to get started.

Find out more about Applitools. You can request a demo, sign up for a free account, and view our tutorials.

 

The post Using Genymotion, Appium & Applitools to visually test Android apps appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/genymotion-appium-android/feed/ 0
Wrong Tool, Wrong Time: Re-thinking Test Automation https://app14743.cloudwayssites.com/blog/state-of-visual-testing-research-webinar/ https://app14743.cloudwayssites.com/blog/state-of-visual-testing-research-webinar/#respond Fri, 07 Jun 2019 14:24:36 +0000 https://app14743.cloudwayssites.com/blog/?p=5463 Watch this on-demand webinar, and learn how the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process. This session included live polls taken during the recording -- so you can compare your team's results to those of your colleagues, and see how you rank.

The post Wrong Tool, Wrong Time: Re-thinking Test Automation appeared first on AI-Powered End-to-End Testing | Applitools.

]]>

Watch this on-demand session to learn: What Are The World’s Most Innovative Test Automation Teams Doing That You Are Not?

Speakers: James Lamberti -- Applitools CMO (left), and Patrick McCartney -- Director of Customer Success engineering @ Applitools (right)

As much as we all hate to admit it, our test automation efforts are struggling. Coverage is dropping. Bugs are escaping to production. Our apps are visually complex, growing rapidly, delivered continuously, and changing constantly – so much so that our functional framework is now bloated, broken, and unable to keep up with Agile and CI-CD release best practices.

No wonder that in our latest State of Visual Testing research, the majority of companies surveyed reported that their CI-CD and automation processes are not helping them to successfully compete in today’s fast-paced ecosystem, and are not effective in ensuring software quality in a scalable and robust way.

But what about those elite testing teams that got it right? What’s their secret? Can we copy what they did, instead of setting ourselves up to fail?

Watch this on-demand session, and learn how the 10% of the world’s most innovative testing teams have reinvented their test automation to support a fully automated CI-CD process, and guaranteed their company’s digital transformation was a success.

Watch this webinar to learn:

  • Why the majority of test automation efforts are falling behind
  • How your QA and testing efforts compare to these elite teams — via live polling results
  • 4 modern techniques that the top 10% of testing teams globally are doing every day, and that you can do too

Slide deck:

Full webinar recording:

Additional Materials and Recommended reading:

  1. State of Visual Testing Research Report — Click here to download your copy of the Executive Summary Whitepaper
  2. Webinar: DevOps & Quality in The Era Of CI-CD: An Inside Look At How Microsoft Does It — with Abel Wong of Microsoft Azure DevOps
  3. How to Run 372 Cross Browser Tests In Under 3 Minutes — post by Jonah Stiennon
  4. How Visual Regression Testing Can Help You Deliver Better Apps — post by Jay Phelps
  5. Want to see Visual AI in action? Contact us, and we’ll get one of our solution architects to show you around!
  6. Release Apps with Flawless UI: Open your Free Applitools Account, and start visual testing today.
  7. Improve your test automation skills, and build your resume for success — with Test Automation University! The most-talked-about test automation initiative of 2019: online education platform led by Angie Jones, offering free test automation courses by industry leaders. Enroll, and start showing off your test automation certificates and badges!

— HAPPY TESTING —

 

The post Wrong Tool, Wrong Time: Re-thinking Test Automation appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/state-of-visual-testing-research-webinar/feed/ 0
What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples https://app14743.cloudwayssites.com/blog/examples-software-ui-bugs/ https://app14743.cloudwayssites.com/blog/examples-software-ui-bugs/#respond Fri, 11 Jan 2019 16:46:48 +0000 https://app14743.cloudwayssites.com/blog/?p=3394 Take a guess: how long have we been dealing with software bugs? It’s not 30 years, around the time Windows was first released. It’s not 48 years, the start of...

The post What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
Apple iOS home screen visual bug

Take a guess: how long have we been dealing with software bugs?

It’s not 30 years, around the time Windows was first released.

It’s not 48 years, the start of the Unix epoch.

It’s actually much longer: more than 71 years. Here’s why.

Back on September 9, 1947, Grace Hopper, a Harvard computer scientist, was running tests on a calculator and found calculation errors. She did some investigation and found a moth that had landed between two solenoid contacts, shorting out an electromechanical relay. Apparently, the bug had been attracted by the warmth of the machine.

We now commemorate this occasion every September 9, Tester’s Day.

As you can see in her logbook entry below, dated September 9, the actual offending moth was taped to the page. So not only is this the first known example of a software bug, it’s probably the most tangible example of one as well.

https://upload.wikimedia.org/wikipedia/commons/8/8a/H96566k.jpg
The first known bug, via Wikipedia

71 years after Grace Hopper’s discovery, software continues to be infested with bugs of a more modern variety. Some of these have been pretty spectacular.

Like that time in the 80s when the entire world could have been destroyed due to a software bug.

Really.

The Bug to end all Bugs

Here’s what happened: a Soviet early warning system showed that five American nuclear missiles were flying to Russia.

You have to understand that this was during a particularly tense time during the Cold War, since the Soviets had shot down a Korean passenger jet three weeks earlier. And the United States and USSR both had over 50,000 nuclear weapons, each of which could destroy a city.

Thankfully, the Russian commander who saw this ignored the warnings, believing (correctly) that if the US were to attack the Soviet Union, it wouldn’t launch just five missiles. When the early warning system was later analyzed, it was found to be riddled with bugs.

This guy deserves the Nobel Peace Prize

Thankfully, the bugs we’re seeing today are a bit less alarming.

But that said, they’re still pretty annoying in our day-to-day life, given how dependent we are these days on software. Let’s dive into some of them.

Trippy Text Layout

Visual Bug on TripAdvisor App
Overlapping text on TripAdvisor App

On the TripAdvisor mobile app, ratings are overlaid with the hotel name, making it so that, in some cases, you can’t read either. This doesn’t exactly encourage potential guests to make a booking on their app. And that’s a problem given how many travel booking apps there are out there.

Wanna Get A Way (to Book)

Visual Bug on Southwest Airlines App
Text blocking buy button on Southwest Airlines website

On the Southwest Airlines website, a visual bug prevented customers from clicking the Continue button and actually buying a ticket. The visual bug was that their Terms and Conditions text was overlaid on top of the Continue button. Southwest drives about $2.5M through their website every hour. So even if this bug was up for a short time, it would have cost them a lot.

The airline industry is very competitive. Not wanting to be left behind, United Airlines has done their part to cut off their revenue by hiding their purchase button behind text.

Text blocking buy button on United Airlines website
Text blocking buy button on the United Airlines website

SerchDown

Visual Bug on ThredUp Website
Search box blocking shopping cart on ThredUp Website

The ThredUp website prominently provides a convenient search field on its homepage. But it’s not so convenient to block access to buttons to view your shopping cart, or sign in to view your account.

No Order for You

Visual Bug on Amazon App
Off-screen quantity popup on Amazon App stopping the buy process

On Amazon’s mobile app, there was a visual bug that prevented users from continuing the purchase process if they tried to switch their order quantity to something other than one. It’s like the software version of everyone’s favorite restaurant worker.

I’m Feeling Unlucky

No search on the Google website
No search on the Google website

For years, Google’s homepage has been minimalist in design so it loads quickly and they can help users find what they need and get on their way. However, this rendering of their website, using Chrome on macOS, seems to be taking minimalism a bit far.

Maybe privacy really is dead

Public and Private options mashed together on LinkedIn Sales Navigator
Public and Private options mashed together on LinkedIn Sales Navigator

Privacy permissions on social media are a big deal. Some things you might be okay with sharing publicly, and others you’ll want to share with just your network. With LinkedIn‘s overlapping privacy choices, whether a post is public or private can be a roll of the dice.

Repetitive Redundancy

Repeated company listing on LinkedIn
Repeated company listing on LinkedIn

Speaking of LinkedIn, they’re a bit repetitive here…

Repetitively Repetitive Redundancy

Repeated discount notice on Banana Republic website
Repeated discount notice on Banana Republic website

With another repetition-based visual bug, we’re treading into dead-horse-beating territory here, but Banana Republic really, really wants to let you know that all denim and pants are 40% off.

Alexa, are you done yet?

Text stays in "move mode" on Amazon Alexa app
Text stays in “move mode” on Amazon Alexa app

On the Amazon Alexa mobile app, if you rearrange the order of the podcasts in your flash briefing, the app can keep showing a podcast as if it’s still “settling in” to its new location, making the app appear hung.

Home Screen Blues

Overlapping text on Apple iOS home screen
Overlapping text on Apple iOS home screen

Apple’s iPhone home screen can sometimes improperly position the message that no older notifications exist.

Craigslist not looking so bad

Super narrow column in Facebook Marketplace website
Very narrow column in Facebook Marketplace website

There’s probably some really useful text in that leftmost column of the Facebook Marketplace. We’re just not sure what it is.

What language is that?

Improperly rendered special character on Air Canada website
Improperly rendered special character on Air Canada website

You never know what languages you’ll encounter on an airplane. This is why, on the Air Canada site, they explain their commitment to speaking to customers in their preferred official language, whether it be English, Chinese, or whatever that third choice is. (I’m surprised French isn’t listed.)

So why does software have visual bugs?

None of these examples are intended to throw developers under the bus. Writing code is hard.

Anticipating every possible scenario is near impossible. Your users continuously interact with all kinds of devices with a dizzying variety of operating system versions, browser versions, screen sizes, font sizes, and languages.

When you multiply all these together, the number of different combinations can easily be in the tens of thousands.

That’s a heck of a lot more than this pile of phones: 

Keep Calm and Pile Your Phones

So yeah, life’s not easy for developers.

At the same time, your web or mobile app is now the front door of your business for an increasing number of users. And you have to ensure that storefront doesn’t have any visual glitches.

Unlike these guys: 

Die Thru?
Is that a new Bruce Willis movie?

Or these guys

Conclusion

But back to the software world. There, visual perfection can mean the difference between one of your customers loving or hating your product. That’s why at Applitools, we want to help developers and testers come together to find one class of bugs — visual bugs — as quickly as possible through visual UI testing.

We might not save the world, but hopefully, we’ll save you a bit of time in getting a visually perfect app shipped into production.

To read more about Applitools’ visual UI testing and Application Visual Management (AVM) solutions, check out the resources section on the Applitools website. To get started with Applitools, request a demo or sign up for a free Applitools account.

What bugs have you seen in web or mobile apps? Tweet them out with hashtag #GUIGoneWrong. If we like your entry, we’ll ship you one of our “Visually Perfect” shirts.

 

The post What Types of Software UI Bugs Are We Seeing in 2019? Here Are 13 Examples appeared first on AI-Powered End-to-End Testing | Applitools.

]]>
https://app14743.cloudwayssites.com/blog/examples-software-ui-bugs/feed/ 0