Let’s face it: Businesses don’t want—or need—perfect software. They do want to deliver new, business-differentiating software as soon as possible. To enable this, you need fast feedback on whether the latest innovations will work as expected or crash and burn in production. You also need to know if these changes somehow broke the core functionality that the customer base—and thus the business—depends upon.
This is where continuous testing comes in.
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.
Test automation is essential for continuous testing, but it’s not sufficient. Test automation is designed to produce a set of pass/fail data points correlated to user stories or application requirements. Continuous testing, on the other hand, focuses on business risk and providing insight on whether the software can be released. Beyond test automation, continuous testing also involves practices such as aligning testing with your business risk, applying service virtualization and stateful test data management to stabilize testing for continuous integration, and performing exploratory testing to expose “big block” issues early in each iteration. It’s not simply a matter of more tools or different tools. It requires a deeper transformation across people and processes as well as technologies.
Continuous testing has undeniably become imperative—especially now that 97 percent of organizations have adopted agile and 71 percent are practicing or adopting devops (according to Sauce Labs). New Forrester research confirms that continuous testing is one of the key factors separating agile/devops leaders from agile/devops laggards. Nevertheless, most enterprises still don’t have a mature, sustainable continuous testing process in place. Forrester found that even the organizations actively practicing agile and devops have a relatively low continuous testing adoption rate: 26 percent.
Many organizations have experimented with test automation, typically automating some UI tests and integrating their execution into the continuous integration process. They achieve and celebrate small victories, but the process doesn’t expand. In fact, it decays. Why? Typically, it boils down to roadblocks that fall into the following three categories:
- Time and resources
- Complexity
- Results
Time and resources
Teams severely underestimate the time and resources required for sustainable test automation. Yes, getting some basic UI tests to run automatically is a great start. However, you also need to plan for the time and resources required to:
- Keep notoriously brittle test scripts from overwhelming the team with false positives.
- Create tests for every new or modified requirement (or determine where to focus your efforts and what you can skip).
- Establish a test framework that supports reuse and data-driven testing, both of which are essential for making automation sustainable over the long term (see the sketch after this list).
- Keep individual tests and the broader test framework in sync with the constantly evolving application.
- Execute the test suite—especially if you’re trying to frequently run a large, UI-heavy test suite.
- Determine how to automate more advanced use cases and keep them running consistently in a continuous testing environment (see the next section for more on this).
- Review and interpret the mounting volume of test results (more on this later too).
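To make the reuse and data-driven points concrete, here is a minimal sketch of the data-driven pattern using pytest. The create_account function and its scenario data are hypothetical stand-ins for your application; the point is that adding a new scenario means adding a row of data, not writing a new script.

```python
# A minimal sketch of data-driven testing with pytest.
# create_account and the scenario data are hypothetical stand-ins
# for the real system under test.
import pytest

# Each row is one scenario: (username, country, expected outcome).
# New scenarios are new rows, not new test code.
ACCOUNT_CASES = [
    ("alice", "US", "created"),
    ("bob", "DE", "created"),
    ("", "US", "rejected"),       # missing username
    ("carol", "XX", "rejected"),  # unsupported country
]

def create_account(username: str, country: str) -> str:
    """Stand-in for the real application call."""
    if not username or country not in {"US", "DE"}:
        return "rejected"
    return "created"

@pytest.mark.parametrize("username,country,expected", ACCOUNT_CASES)
def test_create_account(username, country, expected):
    assert create_account(username, country) == expected
```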
With agile and devops, time for test creation, maintenance, execution, and analysis is extremely limited—but fast feedback is essential. How can you ensure that the most important things are sufficiently tested without delaying time to market?
Complexity
It’s one thing to automate a test for a simple “create” action in a web application (e.g., create a new account and complete a simple transaction from scratch). It’s another to automate the most business-critical transactions, which typically pass through multiple technologies (SAP, APIs, mobile interfaces, and even mainframes) and require sophisticated setup and orchestration. You need to ensure that:
- Your testing resources understand how to automate tests across all the different technologies and connect data and results from one technology to another.
- You have the stateful, secure, and compliant test data required to set up a realistic test as well as drive the test through a complex series of steps—each and every time the test is executed.
- You have reliable, continuous, and cost-effective access to all the dependent systems required for your tests, including APIs and third-party applications that may be unstable, evolving, or accessible only at limited times (see the sketch after this list).
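The dependency problem in that last point is what service virtualization addresses: standing in for systems you can’t reach reliably or affordably. Here is a minimal sketch of the idea using only the Python standard library; the credit-check endpoint and its canned response are hypothetical, and a full virtualization tool would add statefulness and protocol coverage well beyond this.

```python
# A minimal sketch of the service-virtualization idea: replace an
# unstable, rate-limited third-party API with a local stub that
# returns a known response, so tests can run any time at no per-call
# cost. The credit-check endpoint and payload are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class CreditCheckStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always approve; a fuller stub would branch on self.path
        # to model each state the tests need.
        body = json.dumps({"status": "approved", "score": 720}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port: int = 8099) -> HTTPServer:
    server = HTTPServer(("localhost", port), CreditCheckStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# During tests, the application is pointed at http://localhost:8099
# instead of the real third-party service.
```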
Moreover, you also need a systematic way to flush out the critical defects that can only be found by a person evaluating the application from a user perspective. Automation is great at rapidly and repeatedly checking whether certain actions continue to produce the expected results, but it can’t uncover the complex usability issues that significantly impact the user experience.
Without fast, reliable feedback on how application changes impact the core user experience, how do you know if a release will help the business or harm it?
Results
The most commonly cited complaint with test results is the overwhelming number of false positives that need to be reviewed and addressed. When you’re just starting with test automation, it might be feasible to handle the false positives. However, as your test suite grows and your test frequency increases, addressing false positives quickly becomes an insurmountable task. Ultimately, many teams either start ignoring the false positives (which erodes trust in the test results and the continuous testing initiative) or give up on test automation altogether.
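Many of those false positives trace back to timing rather than real regressions: the test asserts before the page settles. As one illustration, here is a hedged sketch of the standard remedy in Selenium, replacing fixed sleeps with explicit waits. The URL, locator, and expected text are hypothetical.

```python
# A common source of false positives in UI suites is timing. This
# sketch replaces a fixed sleep with an explicit wait so the test
# tolerates slow CI runners. Requires the selenium package and a
# Chrome driver; the URL and element ID are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_order_confirmation():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/orders/new")  # hypothetical URL
        # Wait up to 10 seconds for the app to render the banner,
        # instead of sleeping a fixed interval and failing when the
        # environment is slower than the happy path.
        banner = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "confirmation"))
        )
        assert "Order placed" in banner.text
    finally:
        driver.quit()
```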
When devops and continuous delivery initiatives come into play, another critical issue with results emerges: they don’t provide the risk-based insight needed to make a fast go/no-go decision. If you’ve ever looked at test results, you’ve probably seen something like this:
- 42,278 tests passed.
- 10,086 tests failed.
- 910 tests did not execute.
What does this really tell you? You can see that:
- There’s a total of 53,274 test cases.
- Almost 80 percent of those tests passed.
- Nearly 19 percent of them failed.
- Almost 2 percent did not execute.
But, would you be willing to make a release decision based on these results? Maybe the test failures are related to some trivial functionality. Maybe they involve your most critical functionality: the engine of your system. Or, maybe your most critical functionality was not even thoroughly tested. Tracking down this information would require tons of manual investigative work that yields delayed, often-inaccurate answers.
In the era of agile and devops, release decisions need to be made rapidly—even automatically and instantaneously. Test results that focus on the number of test cases leave you with a huge blind spot that becomes absolutely critical, and incredibly dangerous, when you’re moving at the speed of agile and devops.
If your results don’t indicate how much of your business-critical functionality is tested and working, you can’t rely on them to drive automated release gates. Manual review and assessment will be required, and that’s inevitably going to delay each and every delivery.
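What might results that can drive an automated gate look like? One approach, sketched below on the assumption that each test is tagged with the business risk it covers, is to weight results by criticality and block the release when critical functionality slips. The tags, weights, and threshold are hypothetical; in practice they would come from your own risk assessment.

```python
# A sketch of a risk-based release gate: instead of counting raw
# pass/fail totals, weight each result by the business criticality
# of what it covers, and fail the pipeline when critical coverage
# slips. Tags, weights, and the threshold are all hypothetical and
# would come from your own risk model.
import sys

WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def gate(results, min_critical_pass_rate=1.0):
    """results: iterable of (risk_tag, passed: bool) pairs."""
    critical = [passed for tag, passed in results if tag == "critical"]
    if critical:
        rate = sum(critical) / len(critical)
        if rate < min_critical_pass_rate:
            print(f"BLOCK release: critical pass rate {rate:.0%}")
            return 1  # nonzero exit code fails the pipeline stage
    weighted_pass = sum(WEIGHTS[t] for t, p in results if p)
    weighted_all = sum(WEIGHTS[t] for t, _ in results)
    print(f"Risk-weighted pass rate: {weighted_pass / weighted_all:.0%}")
    return 0

if __name__ == "__main__":
    # Tiny demo run; real input would come from tagged test reports.
    demo = [("critical", True), ("critical", True), ("low", False)]
    sys.exit(gate(demo))
```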