
IDG Contributor Network: The future of load testing is BLU

In 1999, the world was a very different place. Less than 4% of the world's population was using the internet, largely over dial-up modems. Google was just coming out of beta; sites like Ask Jeeves, AltaVista, Lycos, and AOL dominated the search engine landscape; people still purchased VHS tapes; and everyone was worried that the world might come to an end on January 1, 2000. State-of-the-art internet access topped out at around 1.25 Mbps. Slow performance was as ubiquitous as the modem beep, but if someone wanted to load test, they could do so with a protocol-level load testing tool like the newly released JMeter or the already-established LoadRunner.

Today, the web (and the world) has changed a lot. But load testing? Not so much. The practice is still dominated by protocol-level testing with JMeter and LoadRunner: the same approach and tools used in 1999. Unfortunately, this 1999-style load testing is extremely difficult to apply to today's web apps, which are highly complex, componentized, and JavaScript-heavy. And the stakes are much higher now. Performance issues no longer just delay the loading of a rudimentary web site; they slow your business and its ability to process transactions, attract and retain customers, and out-innovate the competition. In fact, a recent study found that nearly half (48%) of businesses reported that performance issues were directly hampering the success of their digital transformation initiatives.

In the immediate future, the ability to pivot strategically becomes imperative, both for digital businesses constantly facing disruption and for the load testing efforts that are crucial to their success. For load testing, the future is bright…and BLU.

BLU stands for browser-level users. To understand how it dramatically simplifies load testing, it’s important to first recognize why traditional approaches to load testing are so cumbersome to apply today.

Protocol-based approaches are brittle and time-consuming for agile teams

Today’s agile developers and testers don’t have the time (or desire) to wrestle with all the technical details required to get load tests working correctly and to keep brittle load tests in sync with rapidly evolving applications.

The traditional way of approaching load testing is by scripting at the protocol level (e.g., HTTP). This includes load testing with open source tools such as JMeter and Gatling, as well as legacy commercial tools such as LoadRunner. Although simulating load at the protocol level has the advantage of generating a large concurrent load from a single machine, that power comes at a cost. The learning curve is steep, and the complexity is easily underestimated.
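To make that concrete, here is a minimal sketch of what protocol-level simulation boils down to, written in TypeScript against Node 18+'s built-in fetch. The URLs and payload are hypothetical; the point is that each virtual user replays hand-crafted HTTP requests, with no browser in the loop to issue the follow-on requests a real page load would trigger.

```typescript
// A minimal sketch of protocol-level load generation, assuming Node 18+
// (built-in fetch). The target URLs and payload are hypothetical.
async function virtualUser(): Promise<void> {
  // Request the page HTML itself...
  await fetch('https://shop.example.com/product/123');

  // ...then hand-replay the follow-on calls a browser would normally issue
  // on its own: scripts, stylesheets, XHR/fetch calls, CDN assets. Each one
  // must be recorded, parameterized, and maintained by hand.
  await fetch('https://shop.example.com/api/cart', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sku: '123', qty: 1 }),
  });
}

// A single process can drive many concurrent virtual users; that is the
// efficiency that protocol-level testing buys you.
Promise.all(Array.from({ length: 100 }, () => virtualUser()))
  .then(() => console.log('100 virtual users completed'));
```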

The main culprit behind this complexity is JavaScript. In 2011, a page typically carried less than 100 KB of JavaScript, which triggered around 50 or fewer HTTP requests. Now that has doubled: we see on average 200 KB of JavaScript per page, driving more than 100 requests per page.

For example, just one click on an Amazon.com page triggers something like 163 HTTP requests, processed asynchronously after the page loads. You also find dynamic parsing and execution of JavaScript, the browser cache being seeded with static assets, and calls to content delivery networks. And the next time the same element is clicked, it might generate 161 requests…or 164…or 165. There will be small differences each time.

When you start building your load test simulation model, this will quickly translate into thousands of protocol-level requests that you need to faithfully record and then manipulate into a working script. You must review the request and response data, perform some cleanup and extract relevant information to realistically simulate user interactions at a business level. You can’t just think like a user; you also must think like the browser.

You need to consider all the other functions that the browser automatically handles for you, and figure out how to compensate for them in your load test script. Session handling, cookie header management, authentication, caching, dynamic script parsing and execution, taking information from a response and using it in future requests…all of this needs to be handled by your workload model and script if you want to generate realistic load. Essentially, you become responsible for doing whatever is needed to fill the gap between the technical level and the business level, which requires both time and technical specialization.
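Consider "correlation," the classic example of that gap: capturing a session cookie and a security token from one response and replaying them on the next request. Below is a hedged sketch in TypeScript; the endpoints, field names, and token format are assumptions made purely for illustration. A real browser performs every one of these steps invisibly.

```typescript
// A hypothetical correlation example: the endpoint paths, csrf_token field,
// and credentials are illustrative assumptions, not a real API.
async function loginFlow(): Promise<void> {
  // Step 1: fetch the login page; capture the session cookie the server
  // sets and the CSRF token embedded in the HTML form.
  const loginPage = await fetch('https://shop.example.com/login');
  const cookie = (loginPage.headers.get('set-cookie') ?? '').split(';')[0];
  const html = await loginPage.text();
  const csrf = /name="csrf_token" value="([^"]+)"/.exec(html)?.[1] ?? '';

  // Step 2: replay both values on the next request; omit either one and
  // the server rejects the simulated user. A browser does all of this
  // automatically, which is exactly the gap the script has to fill.
  await fetch('https://shop.example.com/login', {
    method: 'POST',
    headers: { cookie, 'Content-Type': 'application/x-www-form-urlencoded' },
    body: `user=demo&pass=demo&csrf_token=${encodeURIComponent(csrf)}`,
  });
}
```

Multiply that by every cookie, token, and cache rule in the application, and the maintenance burden becomes clear.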

Pivoting to a browser-level approach (BLU)

To sum up the challenge here: modern web applications are increasingly difficult to simulate at the protocol level. This raises the question: Why not shift from the protocol level to the browser level—especially if the user’s experience via the browser is what you ultimately want to measure and improve in order to advance the business’ digital transformation initiatives?

When you're working at the browser level, one business action translates to perhaps two automation commands in a browser, compared with tens, if not hundreds, of requests at the protocol level. Browser-level functions such as caching, cookie management, and authentication/session handling simply work, with no intervention required.

There are several ways to simulate traffic at the browser level. Selenium is currently the most popular, but a range of cross-browser testing tools is available, some of which let you test without writing scripts.
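Here's what that ratio looks like in practice with the selenium-webdriver package for Node/TypeScript. The URL and selectors are placeholders; the point is the shape of the script: a handful of user-level commands and zero hand-managed HTTP requests.

```typescript
import { Builder, By, until } from 'selenium-webdriver';

// One business action ("search for a product") expressed as a few
// browser-level commands. The URL and selectors are placeholders.
async function searchForProduct(): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://shop.example.com');
    await driver.findElement(By.css('#search')).sendKeys('headphones');
    await driver.findElement(By.css('#search-button')).click();
    // The browser itself handles cookies, caching, JavaScript execution,
    // and the 100+ HTTP requests this click fans out into.
    await driver.wait(until.titleContains('headphones'), 10000);
  } finally {
    await driver.quit();
  }
}

searchForProduct();
```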

However, historically it just wasn't feasible to run these tools at the scale needed for load testing. In 2011, launching 50,000 browsers with Selenium would have required around 25,000 servers (roughly two browsers per server), and provisioning that infrastructure would have been prohibitively expensive and time-consuming.

Today, with the widespread availability of the cloud and containers, browser-based load testing is finally feasible. Suddenly, generating a load of 50,000 browsers is far more achievable, especially when the cloud can give you access to thousands of load generators that are up and running in minutes. Instead of waiting for an expensive performance test lab to be approved and set up, you can get going instantly at an infrastructure cost of just cents per hour. Instead of wrestling with 163 HTTP requests to test a simple user action, you simulate one browser-level click, which is obviously much easier to define and maintain. Consider the number of clicks and actions in your average user transaction, and the time and effort savings add up quickly.

Fast feedback on performance is no longer just a pipe dream.

You can use open source technology like Flood Element to capture the action in a simple, easily maintainable script. Or, if you prefer a “low-code/no-code” approach, you can capture your test scenarios as scriptless tests, then use those same tests to drive both load testing and functional testing.
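To give a sense of what such a script looks like, here is a minimal Flood Element sketch (Flood Element scripts are written in TypeScript; the target site and selectors below are hypothetical). It reads like a description of user behavior rather than a catalog of HTTP requests.

```typescript
import { step, TestSettings, By } from '@flood/element';

// Test-wide settings; the values here are illustrative.
export const settings: TestSettings = {
  loopCount: -1,   // keep looping for the duration of the load test
  waitTimeout: 30, // seconds to wait for an element before failing a step
};

export default () => {
  step('Open the home page', async browser => {
    await browser.visit('https://shop.example.com'); // placeholder URL
  });

  step('Search for a product', async browser => {
    await browser.type(By.css('#search'), 'headphones');
    await browser.click(By.css('#search-button'));
  });
};
```

Each step maps one-to-one to a user action, so when the application changes, the script changes in the same place and for the same reason.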

By reducing the complexity traditionally associated with load testing, BLU load testing gives developers and testers a fast, feasible way to get immediate feedback on how code changes affect performance. It's designed to help people who are not professional performance testers quickly create load tests that can run continuously within a CI/CD process, with minimal maintenance.

With this new “lean” approach to load testing, you can modernize your load testing—just like you’ve modernized your development processes, your application stacks, and your dial-up AOL internet access.

This article is published as part of the IDG Contributor Network.
