AI and Machine Learning: how are they changing the mobile testing landscape?

By incorporating AI and machine learning into mobile testing tools, teams can become more efficient in test automation. In this article, we'll look at how the adoption of AI and machine learning will improve these tools and what the future of testing might look like.

Test automation has allowed mobile development teams to save time and resources spent on testing, without sacrificing quality. According to InfoWorld's survey, teams that automate at least 50% of their tests saw an 88% increase in testing cycle speed, a 71% increase in test coverage, and a 68% increase in the ability to catch bugs earlier.

However, if the ultimate goal of test automation is to reduce the time spent on the tedious tasks associated with testing, then these tools need to evolve further. By leveraging AI and machine learning, mobile testing tools can also make development teams more efficient in the creation and analysis phases of testing.

“AI for software testing is now a ‘thing’. Some want to ignore it, some embrace it, but it is no longer just a curiosity and it can't be ignored. In 2018, there were a few brave souls applying AI to their testing problems. In 2019, there are now multi-person test teams at the largest corporations turning to internally developed AI solutions, or applying vendor-based solutions. 2019 was the year to educate yourself on the basics of AI — if you don't at least have an intelligent-sounding opinion on AI for testing, you are falling behind as a leader and technologist.” ~ Jason Arbon, CEO of test.ai

The present: how is AI making mobile testing tools more efficient?

The use of AI in the mobile testing sphere is still in its infancy: the level of autonomy is much lower than in more mature areas such as image recognition, natural language processing, and voice-assisted control. However, we're already seeing new mobile testing tools that have adopted AI provide easier test creation, simpler test analysis, and reduced test maintenance.

Source: Leveraging AI and ML to Automate Your Testing

Improved element location

Traditional testing tools use selectors (e.g., element IDs or XPath expressions) to determine which elements to interact with. But these selectors are fragile: they tend to change as the application code evolves. When that happens, your tests fail even though the functionality still works, adding to your maintenance burden.

Current AI-powered mobile testing tools employ “Visual Locators” to eliminate the need for these fragile selectors and provide a more robust way of targeting elements. Instead of hard-coded selectors, visual locators identify elements based on their visual appearance. This way, even if the selectors for elements change, your tests will still work.
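
To make the contrast concrete, here's a minimal sketch assuming Appium's Java client with the images plugin installed; the tools listed below expose their own higher-level APIs, and the element IDs and file paths here are hypothetical. The first lookup is pinned to a hard-coded resource ID, while the second matches the element by a reference screenshot.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

import io.appium.java_client.AppiumBy;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.WebElement;

public class LocatorExamples {

    // Fragile: breaks as soon as a developer renames the resource ID.
    static WebElement findByStaticId(AndroidDriver driver) {
        return driver.findElement(AppiumBy.id("com.example.app:id/checkout_button"));
    }

    // More robust: match the element by what it looks like instead of its ID.
    // Requires Appium's images plugin; the reference screenshot is a
    // hypothetical asset checked into the test repo.
    static WebElement findByAppearance(AndroidDriver driver) throws Exception {
        byte[] template = Files.readAllBytes(Paths.get("baselines/checkout_button.png"));
        String base64Template = Base64.getEncoder().encodeToString(template);
        return driver.findElement(AppiumBy.image(base64Template));
    }
}
```

The trade-off is that appearance-based matching survives refactors of the view hierarchy but needs a baseline image to compare against, which AI-powered tools manage for you.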

Some modern mobile testing tools that currently use this feature are:

  • Applitools: Applitools uses its suite of Visual AI algorithms to identify visual elements.
  • Test.ai: Test.ai uses a reinforcement learning technique called Q-learning, in which bots learn how to navigate your app and recognize elements.
  • Katalon: Katalon uses a Visual Object Recognition model to recognize elements.

Self-healing tests

One of the major frustrations associated with test automation is false positives, i.e., when a test case fails but there is no bug, and the functionality that's tested is actually working correctly. False positives make automation tests less reliable, contribute to maintenance costs, and may consume a lot of a tester's time chasing down bugs that don't exist.

Because traditional mobile testing tools depend on an underlying application structure or model to test functionality, false positives occur frequently when that structure changes even though the functionality and presentation have not, for example due to flaky element identifiers or new OS versions.

Thanks to AI, some mobile testing tools now have self-healing abilities. These tools can automatically detect a change to an element locator (ID), or a screen or flow added between predefined test automation steps, and either fix it on the fly or alert developers with a suggested fix. This way, tests run more smoothly and require less intervention from developers and testers.
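
The basic idea can be illustrated with a simple fallback wrapper: try the primary locator, and if it no longer matches, fall back to alternative locators and report which one "healed" the step. This is a simplified sketch in plain Selenium/Appium-style Java, not how QMetry or any specific vendor implements it, and the locators involved would be your own.

```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class SelfHealingLocator {

    /**
     * Tries the primary locator first; if it no longer matches anything,
     * falls back to the alternatives and reports which one "healed" the step
     * so the team can update the test instead of chasing a false positive.
     */
    static WebElement findWithHealing(WebDriver driver, By primary, List<By> fallbacks) {
        try {
            return driver.findElement(primary);
        } catch (NoSuchElementException primaryMiss) {
            for (By candidate : fallbacks) {
                try {
                    WebElement healed = driver.findElement(candidate);
                    System.out.printf("Healed locator: %s -> %s%n", primary, candidate);
                    return healed;
                } catch (NoSuchElementException ignored) {
                    // Try the next candidate.
                }
            }
            throw primaryMiss; // Nothing matched: likely a real failure, not a locator change.
        }
    }
}
```

Commercial self-healing goes further than a fixed fallback list: the tool learns multiple attributes of each element over time and picks the most likely match automatically.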

While the implementation of self-healing is still limited in mobile testing tools, here's a tool that currently employs this mechanism:

  • QMetry: QMetry offers the QMetry Automation Studio (QAS), which uses self-healing to evaluate test status, identify problems, analyze the situation, and automatically suggest a solution. All a user has to do is approve the suggested fixes or handle them manually.

Visual validation

Visual testing is the process of evaluating the visual interface of an application against the results expected by design, in order to detect inconsistencies in appearance (i.e., visual bugs) before they get to production. It answers questions like: Is an element the right color and in the right position? Are there any overlapping elements? This is a very important type of testing because visual bugs occur more frequently than we want to admit — even for companies like Amazon.

Traditional testing tools use the underlying structure of an application's DOM to determine the status and presence of visual elements. To test visual interfaces this way, you'd have to write thousands of lines of assertion code to cover every variation of OS, browser, screen orientation, screen size, and font size. This method is inefficient, brittle, and only adds to the maintenance burden.

Modern AI-enabled testing tools use “Visual Validation” features to effectively uncover visual inconsistencies by comparing baseline snapshots of app screens with the corresponding current screens on every regression run. Using computer vision techniques, these tools identify elements in rendered screens and compare them to snapshots of the same screens in their expected state.
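
To see why this matters, here's a minimal sketch of the naive alternative: a raw pixel-by-pixel comparison between a baseline screenshot and the current one, written in plain Java. The file names are hypothetical, and this is not how Applitools or Kobiton compare screens; their engines compare perceptually rather than per pixel.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class VisualDiff {

    /**
     * Returns the fraction of pixels that differ between a baseline screenshot
     * and the current one.
     */
    static double diffRatio(File baselineFile, File currentFile) throws Exception {
        BufferedImage baseline = ImageIO.read(baselineFile);
        BufferedImage current = ImageIO.read(currentFile);

        if (baseline.getWidth() != current.getWidth()
                || baseline.getHeight() != current.getHeight()) {
            return 1.0; // Treat a size change as a full mismatch.
        }

        long mismatched = 0;
        long total = (long) baseline.getWidth() * baseline.getHeight();
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                    mismatched++;
                }
            }
        }
        return (double) mismatched / total;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical file names; in practice these would come from your CI artifacts.
        double ratio = diffRatio(new File("baselines/login.png"), new File("runs/latest/login.png"));
        System.out.printf("Changed pixels: %.2f%%%n", ratio * 100);
        if (ratio > 0.01) {
            System.out.println("Potential visual regression, flag for review.");
        }
    }
}
```

A raw pixel diff flags anti-aliasing, font rendering, and dynamic content as regressions, which is exactly why vendors moved to AI-based perceptual comparison that only reports differences a human would notice.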

Some modern mobile testing tools that currently support this feature are:

  • Applitools: Applitools uses its visual testing API called Eyes to compare snapshots.
  • Kobiton: Kobiton uses its NOVA AI Engine to compare snapshots and propose UX optimizations.

Source: Applitools Extends Visual AI Testing to Native Mobile Apps

Improved scriptless or codeless test automation

Traditional mobile test automation tools require you to write code in order to automate test scenarios. This presents a challenge because manual testers don't always have the technical know-how to write test scripts. For example, a Perfecto article revealed that over 40% of their customers were unable to achieve automated testing due to test automation scripting issues. Writing tests is also time-consuming and adds to maintenance costs, since maintenance effort grows with the number of lines of code; hence the need for codeless solutions.

When codeless testing tools were first introduced to the testing market, they used a method called “record and playback.” With record and playback, a tester manually records a user flow using the testing tool, and the tool then replays those actions to test the app. While this method simplified testing, it made maintenance difficult and time-consuming: there was no easy way to update the recording if the user flow changed slightly, so the test had to be re-recorded.

By incorporating AI, modern mobile test automation tools have evolved beyond simple record and playback to further simplify creating and maintaining tests. Using methods such as natural language processing (NLP), where you write test cases as plain English-like statements, these tools make it easier to create, update, and maintain tests.
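
As a rough illustration of how a plain-English step can drive a test, here's a tiny keyword-driven mapper in Java using Appium's client. This is only a sketch of the idea, not how Testsigma or ACCELQ implement it: real NLP-based tools accept much freer phrasing and learn synonyms, while this version only understands two fixed patterns, and the accessibility labels are hypothetical.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import io.appium.java_client.AppiumBy;
import io.appium.java_client.AppiumDriver;

public class PlainEnglishSteps {

    private static final Pattern TAP = Pattern.compile("Tap on \"(.+)\"");
    private static final Pattern TYPE = Pattern.compile("Enter \"(.+)\" into \"(.+)\"");

    /** Maps one plain-English step onto a driver action. */
    static void run(AppiumDriver driver, String step) {
        Matcher tap = TAP.matcher(step);
        if (tap.matches()) {
            driver.findElement(AppiumBy.accessibilityId(tap.group(1))).click();
            return;
        }
        Matcher type = TYPE.matcher(step);
        if (type.matches()) {
            driver.findElement(AppiumBy.accessibilityId(type.group(2))).sendKeys(type.group(1));
            return;
        }
        throw new IllegalArgumentException("Unsupported step: " + step);
    }

    // Example usage:
    //   run(driver, "Enter \"user@example.com\" into \"Email\"");
    //   run(driver, "Tap on \"Log in\"");
}
```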

Examples of such mobile testing tools are:

  • Test.ai: Test.ai uses AI bots to try to understand how a user would use a mobile app and then tests it accordingly. It is mostly focused on user experience testing.
  • Testsigma: Using NLP, Testsigma allows you to create test cases by writing plain English statements. These simple statements are then converted within the tool for execution.
  • ACCELQ: ACCELQ also uses NLP to facilitate test case creation in plain English statements.

The future: how can AI and machine learning further improve mobile testing tools?

While the use of AI and machine learning in mobile testing has made significant headway, there's still more to be done.

Automated and intelligent gap analysis

Gap analysis in software testing is the process of identifying untested new code within an application. As modern mobile apps become increasingly complex, it's easy to miss testing some application flows. This is okay if those flows are not frequently used. But what happens when users suddenly start using those flows and bugs start to spring up? Unsurprisingly, untested code is far more likely to contain bugs than tested code.

Currently, performing a gap analysis requires combining a static analysis of all code changes with a dynamic analysis of all current testing, which is effort-intensive and time-consuming. With AI, future mobile testing tools could learn how users actually interact with your app: if the system sees lots of people taking an untested user journey, it can alert you to the gap in your testing so you can close it. Such tools could also highlight which areas deserve the most testing resources based on how heavily users exercise them. This way, teams will be able to optimize test coverage even with limited resources.
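
Stripped to its core, this kind of analysis is a comparison between the flows your analytics observe and the flows your tests cover. The sketch below shows that comparison in plain Java; the flow names, usage counts, and data sources are hypothetical, and a real AI-driven tool would infer the flows and their risk automatically rather than taking them as hand-built inputs.

```java
import java.util.List;
import java.util.Map;

public class TestGapAnalysis {

    /**
     * Flags user flows that production analytics see often but that no
     * automated test covers, most-used flows first.
     */
    static void reportGaps(Map<String, Long> observedFlowCounts, List<String> testedFlows) {
        observedFlowCounts.entrySet().stream()
                .filter(e -> !testedFlows.contains(e.getKey()))
                .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
                .forEach(e -> System.out.printf(
                        "Untested flow \"%s\" used %d times this week%n",
                        e.getKey(), e.getValue()));
    }

    public static void main(String[] args) {
        // Hypothetical analytics export and test-coverage list.
        reportGaps(
                Map.of("search -> add to cart", 12_400L,
                       "guest checkout", 3_100L,
                       "edit profile", 180L),
                List.of("search -> add to cart", "edit profile"));
    }
}
```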

Automated test generation

Existing mobile test automation tools only deal with automating the execution of tests. But simply automating test execution is not enough when it takes hours, weeks, or even months — depending on the complexity of the apps being tested — to create these tests. While some progress has been made in easing the creation of UI (application flow) tests, not much has been made in the areas of unit testing and API testing.

As AI technology continues to advance, we'll hopefully get to a place where unit and API test generation is very mature and mainstream, so developers and testers alike can focus on innovation and creativity.

Upgrading your mobile testing with AI-based tools and CI/CD made for mobile development

It's clear that the application of AI in mobile testing tools is here to stay and will only improve. If you haven't already, adopting such tools will help simplify testing for your team. However, that's not all you need to reach your full potential with test automation. Alongside modern mobile testing tools, you also need to upgrade from traditional CI/CD solutions to a mobile-specific CI/CD solution like Bitrise that evolves along with the mobile development landscape.

Source: Saving time and innovating with Bitrise — a case study with November Five

Take November Five, for example. Before moving to Bitrise, they used a locally hosted Jenkins server for CI, but the solution proved less than optimal. Not only did they need a separate room to host the server, but manually keeping the stacks up to date required at least one or two dedicated engineers, as not everyone was familiar with the setup. They moved to Bitrise and have since enjoyed ease of use and enormous time savings.

As Thomas Van Sundert, CTO & co-founder of November Five, said: "We can sum up Bitrise's impact on our mobile development processes in three words: productivity, quality, and security. For us, it's more than just an ultimate time saver."

To learn more about how Bitrise can help your team save time and improve productivity, quality, and security, book a demo with us today!
