
Reducing UI Bugs with Automated Visual Testing

Modern software development emphasizes delivering value and quality to customers as quickly as possible. To accomplish this, quality must be ensured on many levels. One of those levels is visual testing: verifying the look and feel of the application across multiple browsers, devices, and operating systems. For many years, this process has been done manually, something reflected in the fact that the most popular web automation testing tools focus on the end user’s functional behavior. It is quite difficult for those tools to determine whether UI elements are positioned correctly or diverge from the expected layout. Visual bugs are everywhere, and QA constantly catches errors in the way an application looks, even when we are not actively hunting for them. Until now, manual testing has been the usual way to discover such differences.

Today, I want to introduce automated visual testing as a way to replace those manual QA efforts. A few tools have surfaced in recent years to accomplish this, and I’ll show an example of one: how to integrate it with your existing automated test suite, add a new layer of tests, and increase quality even further in the process.

For the following example, I will be using a third-party tool called Applitools. Applitools provides an SDK that enhances our existing business-oriented validations with an extra layer of visual checks, as well as an easy-to-use dashboard for comparing images against a baseline; more on that in a minute. Let’s begin by setting up an Applitools account by following these steps:

1. First, create an Applitools account via the following URL: https://applitools.com/users/register

2. Once the account is created, open the menu in the top-right corner and click on “My API key.” A unique API key is generated there that you can use for your AI-powered visual checks. Store it somewhere safe and secure, as we will use it later (one way to pass it into the test run is sketched below). A copy of this key should also have been sent to your email address upon registration.
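
Later in this post, the test code will read the key with System.getProperty("applitoolsKey"). As a minimal sketch of how to supply it without hard-coding it, the Gradle test task can forward the key from a project property or an environment variable into the test JVM. The property and variable names here are my own choice, not something Applitools mandates:

test {
    // Illustrative wiring only: forward the Applitools key into the test JVM so tests
    // can read it with System.getProperty("applitoolsKey"); the APPLITOOLS_API_KEY
    // environment variable name is an assumption.
    systemProperty 'applitoolsKey',
            project.findProperty('applitoolsKey') ?: System.getenv('APPLITOOLS_API_KEY')
}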

Once we have set up the account, we can dive into adding visual validations. For this specific example, I will be creating a very simple Java project with Gradle as a build tool/dependency manager and Selenium to execute a few web-related tests:

Project Structure

Project structure

 

Gradle Dependencies

dependencies {
    testCompile group: 'org.testng', name: 'testng', version: '6.14.3'
    compile group: 'org.seleniumhq.selenium', name: 'selenium-java', version: '3.141.59'
    compile group: 'io.github.bonigarcia', name: 'webdrivermanager', version: '3.4.0'
}

 

Initial Test

public class LoginTests extends BaseTests {

    LoginPageObject loginPageObject;
    Properties props;

    public LoginTests() {
        loginPageObject = new LoginPageObject();
    }

    @BeforeMethod
    public void loginSetup() throws Throwable {
        driver.manage().window().maximize();
        props = System.getProperties();
        props.load(new FileInputStream("resources/test.properties"));
        driver.get("https://www.amazon.com/");
    }

    @Test(description = "Successful Login scenario")
    public void successfulLogin() throws Throwable {
        try {
            loginPageObject.navigateToSignIn();
            loginPageObject.enterUsername(System.getProperty("username"));
            loginPageObject.enterPassword(System.getProperty("password"));
            loginPageObject.clickLogin();
            Assert.assertTrue(loginPageObject.validateAddressIsDisplayed());
        } catch (Exception ex) {
            Assert.fail("Test execution failed with following message " + ex.getMessage());
        }
    }

    @AfterMethod
    public void loginCleanup() {
        driver.quit();
    }
}
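
The test above extends a BaseTests class that is not listed in this post. As a rough sketch of what such a base class might look like, assuming WebDriverManager (already in our dependencies) provisions ChromeDriver and exposes a shared driver field:

import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.BeforeMethod;

// Hypothetical base class; the real project may wire the driver up differently.
public class BaseTests {

    protected WebDriver driver;

    @BeforeMethod
    public void baseSetup() {
        // WebDriverManager downloads and registers the matching ChromeDriver binary
        WebDriverManager.chromedriver().setup();
        driver = new ChromeDriver();
    }
}

TestNG runs the superclass @BeforeMethod before the subclass one, so the driver is ready by the time loginSetup() navigates to Amazon.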

 

As you can see, we have a very simple test that opens up Amazon, navigates to the login page, uses some credentials to sign in, and then finishes with a functional validation. This test works and is valid. However, our assertions could be way more powerful. Right now, we are simply verifying if the address associated with the account is displayed as part of the dashboard info. That is fine, but are we sure that the element or elements present are positioned correctly? What about our window? Are there any elements out of place? With visual testing, we can add checks to answer these questions. Let’s continue with integrating the Applitools SDK into our existing code. This only requires a couple of steps:

First, add the Applitools Eyes SDK as a dependency in build.gradle:

compile group: 'com.applitools', name: 'eyes-selenium-java3', version: '3.150.1'

Then, in the test class, declare an Eyes instance and add two small helper methods:

Eyes eyes;

private void setupEyes() {
    eyes = new Eyes();
    eyes.setApiKey(System.getProperty("applitoolsKey"));
}

private void validateWindow() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon");
    eyes.checkWindow();
    eyes.close();
}

 

This is where the magic happens. “Eyes” is essentially an AI-driven mechanism that lets you tell your tests which visual elements to watch, giving you a way to know whether there are any UI or visual deviations from the expected layout. “Eyes” must be given the unique API key that allows future validations to be compared in your personal or company dashboard on the Applitools site; we saw this key when setting up our account. A crucial advantage of this key is that it is entirely possible to use different keys per environment, so we could have one key for the production site and execute the same validations against a sandbox account for lower environments. When opening Eyes, we also pass along the web driver, a unique app name, and a unique test name as parameters.
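
One detail the snippet above leaves implicit is where setupEyes() is called. A simple option, sketched here, is to invoke it from the existing @BeforeMethod so every test starts with a fresh Eyes instance carrying the right key:

@BeforeMethod
public void loginSetup() throws Throwable {
    driver.manage().window().maximize();
    props = System.getProperties();
    props.load(new FileInputStream("resources/test.properties"));
    // Create Eyes and set the API key before each test; supplying a different
    // -DapplitoolsKey per environment points the results at a different dashboard.
    setupEyes();
    driver.get("https://www.amazon.com/");
}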

Now, let’s add our first visual check by calling our validateWindow() helper, which uses checkWindow() to capture the state of the screen after our existing functional assertion, and then run the test:

@Test(description = "Successful Login scenario")
public void successfulLogin() throws Throwable {
    try {
        loginPageObject.navigateToSignIn();
        loginPageObject.enterUsername(System.getProperty("username"));
        loginPageObject.enterPassword(System.getProperty("password"));
        loginPageObject.clickLogin();
        Assert.assertTrue(loginPageObject.validateAddressIsDisplayed());
        validateWindow();
    } catch (Exception ex) {
        Assert.fail("Test execution failed with following message " + ex.getMessage());
    }
}
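
One caveat worth a quick sketch: if an assertion throws before validateWindow() finishes, eyes.close() never runs and the visual session is left open. The Eyes API provides abortIfNotClosed() for this situation, which we can call during cleanup:

@AfterMethod
public void loginCleanup() {
    // Ends any visual test that was opened but never closed (e.g., after a failure)
    eyes.abortIfNotClosed();
    driver.quit();
}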

 

Visual test 1

When we execute our test for the first time, we create a baseline image. This baseline will be used for visual comparison in future executions. Since this is the first run, the test will pass. Now, here is where the fun begins. If we execute the same test a second time, the new screenshot is compared against that baseline. We can see the results of this comparison by simply navigating to our dashboard:

Visual test 2

As you can see, the test has failed in TestNG. If we look closely at the console error, it states: “Test ‘Visual Test Amazon’ of ‘Visual Test’ detected differences! See details at:…”. But why has this test failed? Amazon is a page with a lot of dynamic content that changes visually between visits. In this case, the image in the carousel on the upper portion of the dashboard was different when the window was checked.

Alright, we have done it! We have been able to visually validate deviations from a baseline image. But is that really all we can do? The answer is no: this barely scratches the surface. Was this the outcome we were expecting? Was this really a failure or just a dynamic content difference? What about colors; should we ignore color changes? Thankfully, we have match levels: ways to set a “visual threshold” that automatically ignores content or color changes so we can focus, for example, on layout instead. Let’s take a quick peek at this:

private void validateWindow() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon");
    eyes.setMatchLevel(MatchLevel.LAYOUT);
    eyes.checkWindow();
    eyes.close();
}

 

Passed test

As seen above, we set our match level to “Layout” (“Strict” is the default), and now the test passes. Even though there are content differences, the layout remains the same, so the AI has no visual errors to report. Further exploration is also available via the dashboard: comparing screenshots, zooming in and out, and passing or failing test cases manually. The dashboard provides a great, centralized place to review potential differences.
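
Layout is not the only option. If, for instance, we only wanted to ignore color changes while still flagging content differences, the Content match level is intended for that (with Strict and Exact rounding out the set, per the Applitools documentation). Switching levels is a one-line change:

private void validateWindow() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon");
    // "Content" ignores color differences but still reports changed content
    eyes.setMatchLevel(MatchLevel.CONTENT);
    eyes.checkWindow();
    eyes.close();
}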

In addition to checking and comparing the screen as a whole, there are other types of UI verifications. For example, we can verify an individual element or a specific frame instead of the whole screen, integrate with testing frameworks other than Selenium, and hook into several continuous integration and deployment tools. For more information, Applitools provides in-depth documentation.
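
For instance, instead of checkWindow(), the same Eyes instance can target a single element or an iframe. A brief sketch, where the locator is purely illustrative:

private void validateHeader() {
    eyes.open(driver, "Visual Test", "Visual Test Amazon - Header");
    // Compare only the region matched by the locator instead of the full window
    eyes.checkRegion(By.id("nav-belt"));   // illustrative header element id
    // eyes.checkFrame("frame-name");      // or validate a specific iframe instead
    eyes.close();
}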

Without going into further detail, and to leave some room for exploration, I want to wrap up by saying that visual checks are a great addition to any existing web automation suite. Adding AI-driven visual checks to an existing framework provides an extra layer of verification that simply was not possible before, reducing the risk of errors and replacing the manual checks that have traditionally been the only way to validate an application’s appearance.
