6/02/14

Testing out testing

The only way to know if a web site change is going to produce results is to test it. Putting a testing program in place doesn’t have to be complicated or costly, but it does require effort.

Zak Stambor, Managing Editor

You never know what changes on a web site will boost sales, says Jeff Hannan, Wayfair LLC's senior manager, site testing and customer analytics.

The retailer's A/B and multivariate tests have found that some simple tweaks, like changing the order in which information is listed on Wayfair.com's product pages, have produced big results. Other, more pronounced changes, like a complete page redesign the retailer had high hopes for, have failed to produce gains, and in some cases have even dragged down the site's conversion rate, he says.

That's why Wayfair uses technology from vendor SiteSpect Inc. to test nearly every change it makes to its site, which means it runs between 15 and 25 different A/B and multivariate tests a month. SiteSpect technology starts at $4,500 a month. "Letting the data drive our decisions helps us make better decisions," Hannan says.

That's particularly important at the home furnishings retailer because few of Wayfair's team members are its target customers, he says. But even if its staff were made up of people like the consumers who shop its sites, the retailer still couldn't be certain that the responses of one person, or even a handful of people, would reflect most consumers' opinions and behaviors, he says.

While most retailers, particularly smaller merchants, don't have the manpower or site traffic to run as many tests as often as Wayfair does, the retailer's approach can serve as a lesson in how testing enables a merchant to let data determine which changes to make. Merchants that run far fewer tests than Wayfair similarly say that testing has helped them make their sites more user-friendly and boosted their sales. That helps explain why 252 of the largest North American online retailers report using a testing vendor, according to Internet Retailer's Top500Guide.com.

For Wayfair, testing has proven lucrative. The retailer's data-driven approach has helped it grow its online sales 265% in the past five years—from $251 million in 2009 to $915 million in 2013. The reason is simple, says Hannan: "Testing ultimately makes for a better site."

Web site testing doesn't have to be complicated or costly. Retailers can try a range of tools for free, and even once they have to pay, low-traffic sites can use tools for as little as $20 to $400 a month, says Anne Marie Dono, chief operating officer and managing partner at the digital marketing consultancy Multiplica. Tools for sites with more traffic typically start at around $5,000 a month.

There are also simple tests—such as those that examine whether more shoppers buy after seeing larger images or less copy in an e-mail—that retailers can run simply by segmenting their e-mail lists, says Gary Rush, chief technology officer at streetwear e-retailer Karmaloop Inc. By splitting Karmaloop's e-mail database in two and sending each half a different promo code, the retailer, using Google Analytics, can easily track which content led shoppers to click and whether those clicks resulted in sales.
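A minimal sketch of that kind of split test, assuming a hypothetical subscriber list and promo codes (Karmaloop's actual setup isn't public): randomly halve the list, give each half its own code, and tag the links so an analytics tool like Google Analytics can attribute the clicks.

```python
import random

# Hypothetical subscriber list; in practice this comes from the
# retailer's e-mail database.
subscribers = ["a@example.com", "b@example.com", "c@example.com",
               "d@example.com", "e@example.com", "f@example.com"]

def split_in_two(emails, seed=42):
    """Shuffle the list and cut it into two equal-sized halves."""
    rng = random.Random(seed)   # fixed seed keeps the split reproducible
    shuffled = list(emails)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

group_a, group_b = split_in_two(subscribers)

# Each half gets its own promo code, so any order that redeems a code
# can be traced back to the e-mail variant that carried it. Campaign
# parameters on the links let Google Analytics attribute the clicks.
variants = [("BIGIMG10", group_a), ("SHORTCOPY10", group_b)]
for code, group in variants:
    url = f"https://example.com/sale?utm_source=email&utm_campaign={code}"
    for address in group:
        print(f"send variant {code} to {address}: {url}")
```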

Celebrity Cruises Inc. recently took a similar approach by setting up two different customer service numbers to see whether a change in its booking process would lead to fewer consumers calling its contact center. The approach was simple, yet effective, says Tina Alexander, the company's associate vice president of digital and web marketing.

The test examined what information was necessary on its product detail pages, which list a cruise's itinerary. By tracking the call volume of both numbers, Celebrity Cruises determined that more prominently displaying information about shore and land excursions on the pages reduced its call volume and increased the number of travelers booking trips online. Celebrity Cruises declined to share the specific percentage changes.
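A sketch of how such a tally might work, with made-up numbers since the company didn't share its data: count the calls to each tracking number, divide by the visitors who saw each page variant, and compare the rates.

```python
from collections import Counter

# Hypothetical call log: each entry is the tracking number a caller dialed.
calls = ["800-555-0101", "800-555-0102", "800-555-0101",
         "800-555-0101", "800-555-0102"]

# Visitors shown each variant's number (made-up traffic counts).
visitors = {
    "800-555-0101": 10_000,   # control itinerary page
    "800-555-0102": 10_000,   # page with prominent excursion info
}

counts = Counter(calls)
for number, seen in visitors.items():
    print(f"{number}: {counts[number] / seen:.3%} of visitors called")
```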

But before a retailer begins a test—with the help of a vendor or on its own—it first has to get a firm grasp of what it wants to test, Alexander says. That's more complicated than it may sound, she says, because a web site like CelebrityCruises.com has so many elements that there are nearly endless changes that it could test.

"There are so many opportunities it can be challenging to figure out where to start," she says.

Karmaloop decides what to test by holding a weekly meeting during which five or six staff members from across the company's divisions—marketing, product management and information technology—sort through their options. The team expedites those tests that are likely to have the greatest impact on sales, and it determines the best way to craft the tests so that they can produce meaningful results.

The retailer began having the meetings a little more than a year ago to help it clean up testing processes that lacked any real organization. "We weren't always aware of what was going on," Rush says of Karmaloop's former methods. "We had too many moving parts. Marketing might have been testing one thing, product management was testing another and neither knew about the other group's test."

That led to confusion and results that weren't as clear as they appeared to be. "When you're testing you want to keep the number of moving parts to a minimum so you know that the element that you're testing is actually producing the results you're seeing," Rush says.
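One common way testing tools keep those moving parts apart is deterministic hash bucketing: each visitor ID always hashes to the same bucket, and mutually exclusive tests claim disjoint bucket ranges, so no shopper ever lands in two overlapping experiments. The sketch below shows the general technique with hypothetical test names; it isn't necessarily how Karmaloop's vendors implement it.

```python
import hashlib

def bucket(visitor_id: str, layer: str, buckets: int = 100) -> int:
    """Deterministically map a visitor to a bucket in [0, buckets)."""
    digest = hashlib.sha256(f"{layer}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

# Tests that must not overlap share one "layer" and claim disjoint
# bucket ranges (test names are hypothetical).
EXCLUSIVE_TESTS = {
    "marketing_banner_test": range(0, 50),    # buckets 0-49
    "product_page_test":     range(50, 100),  # buckets 50-99
}

def assign(visitor_id: str):
    """Return the single test (if any) this visitor participates in."""
    b = bucket(visitor_id, layer="layer-1")
    for test_name, bucket_range in EXCLUSIVE_TESTS.items():
        if b in bucket_range:
            return test_name
    return None

# The same visitor lands in the same test on every page view.
print(assign("visitor-123"))
```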

The Karmaloop team also uses the meetings to determine which of its two testing tools to use. It tends to use Optimizely, which offers testing plans that start at $17 a month, for more technical tests, and Monetate, which Rush says is two to three times more expensive than Optimizely, for marketing-related tests. (Monetate did not respond to requests about its pricing.) Monetate builds the tests for the retailer, while Optimizely is more of a do-it-yourself tool that lets the retailer's information technology team handle changes.

Testing is an ongoing process that requires merchants to invest time to understand the results, even when the actual software isn't necessarily costly, says Alexander of Celebrity Cruises.

"It requires stewardship," she says, which is why she recently hired someone to coordinate the company's various tests and interpret results. "A lot of companies don't want to make that type of investment. But after you start to test and see the quick wins that putting a program in place can produce, it makes the costs easier to digest."

For example, when Celebrity Cruises launched its testing program about a year and a half ago, it focused on improving the pages on its site with the highest abandonment rates. Simple A/B tests found that consumers were more likely to click and buy on a page with larger images, while another test found that a larger, clearer Book Now button led to significant gains. Those proof points helped Alexander convince the cruise line to invest in testing.
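A simple way to judge whether a lift like the Book Now button's is a real win rather than noise is a two-proportion z-test; the sketch below uses hypothetical traffic and conversion numbers.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 10,000 visitors shown each variant.
z = two_proportion_z(conv_a=300, n_a=10_000,   # control: 3.00% converted
                     conv_b=345, n_b=10_000)   # bigger button: 3.45%
print(f"z = {z:.2f}")   # ~1.80 here; |z| > 1.96 is significant at 95%
```

Note that even this 15% relative lift on 10,000 visitors per variant falls just short of the conventional 95% confidence threshold, which hints at why so many tests come back neutral.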

Now that Celebrity Cruises has someone on board to help oversee its testing program, it can continue chipping away at finding better solutions to the various hiccups consumers encounter on their path to purchase, she says.

Wayfair's aggressive testing program similarly tries to find potential solutions for the areas of its sites that cause friction. Because the retailer has Hannan overseeing its testing program, it can run a number of tests on various areas of its sites—it has international sites and other properties, like upscale home goods brand Joss & Main—without its executives losing focus on their other responsibilities.

"So much of testing is examining why something wins and something else loses," Hannan says. Because Hannan examines all the retailer's tests he sees patterns in shoppers' behaviors, which helps him zero in on what changes the retailer might want to emphasize as it continues to revamp and reorganize its sites.

Part of Hannan's job also involves putting testing in perspective. One of the biggest challenges to testing site changes is realizing that most tests aren't winners, he says. "The nature of site changes is that most tests have neutral or losing results," he says. "When we don't have a winner that means we have to regroup and push forward."

Wayfair typically runs a few small "lower effort" tests, such as changes to a page's wording or how an item's list price looks, along with a handful of larger redesign-type efforts. The mix helps increase the odds that the retailer will see gains from its efforts, Hannan says.

But even when a merchant's winning percentage is lower than usual, the losing results are valuable, Alexander says. "Losses can reinforce the reason you initially did something," she says.

And testing helps a retailer make better decisions, says Karmaloop's Rush, by providing justification for a change that might otherwise seem counterintuitive. For instance, Karmaloop recently ran a test that removed the Checkout button from its navigation. That meant that rather than clicking a single Checkout button on a product page, a shopper had to click to the shopping bag before clicking to checkout. Requiring shoppers to take a longer path to checkout isn't something most merchants would put in place, Rush says. But when Karmaloop put the change to the test, it produced an 8.5% jump in the site's checkout rate.
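For reference, a jump like that is just relative lift; a quick sketch with hypothetical rates, since Karmaloop's underlying numbers weren't disclosed:

```python
# Hypothetical checkout rates before and after removing the nav button.
control_rate = 0.0400   # 4.00% of sessions reached checkout
variant_rate = 0.0434   # 4.34% with the Checkout button removed

lift = (variant_rate - control_rate) / control_rate
print(f"relative lift: {lift:.1%}")   # -> 8.5%
```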

"You never know what you'll find," he says.

That's the value of testing. While retailers can use best practices or rely on hunches to make decisions, they won't actually know how shoppers will respond without letting shoppers' actions speak for themselves.

zak@verticalwebmedia.com

@ZakStamborIR

 


How to get started with testing

E-retail executives recommend taking these five steps to get an effective testing program off the ground.

1. Find a vendor or vendors. Retailers have to figure out whether they want an inexpensive do-it-yourself option or a full-service vendor that helps them build tests and interpret the results.

2. Start small. Find some "quick wins" that demonstrate the program's value to upper management.

3. Get together. Bring together the various interested parties to determine what elements different divisions want to test and to prioritize which tests should come first.

4. Designate a testing czar. Having one person or a small group oversee the testing program—either as a full-time job or as part of their jobs—helps keep the program organized.

5. Be prepared to lose. Most tests produce neutral or losing results. Retailers should be ready to learn from their losing tests.


