In the September issue of Internet Retailer, we ran Part 1 of a two-part series on e-retail web site development. Part 1 focused on the importance of establishing reliable, precise metrics for a web site, so that you have the information needed to prioritize enhancements and improvements in the first quarter after holiday shopping ends.
Once you’ve established a baseline and benchmarked your performance, you’re ready to implement the changes that will make your web site great.
Now you need a plan of action: one that begins with a strong business case for prioritized improvements that will drive your online sales, proceeds through making changes while carefully monitoring the results from the customers’ perspective, and concludes with measuring the success of your improvements and your web site management team over time.
Part 2: How should you do it?
Impulse: Defend your plans
Action: Make a strong business case for site improvements
If the metrics you have in place are scientific, accurate, and actionable, you should be able to make a compelling business case for the changes you want to make. Although expert opinions and user surveys can be misleading or present an incomplete picture (as was discussed in Part 1), the mix of transactional, clickstream, and customer satisfaction data at your disposal should give you clear insight.
You’ve measured your baseline, you’ve benchmarked against competitors and industry leaders, and you’ve used a scientific method to listen to what your customers really want. Customer-centric metrics give you the ammunition you need to persuade web site stakeholders in your organization to proceed.
For example, Danskin was able to use the information gained from its customer satisfaction metrics to justify a site redesign. It had already planned to upgrade merchandise images but had to project the ROI for this expensive enhancement to get authorization to invest in new photography and design. Danskin was able to use its customer satisfaction metrics to quantify the impact that improved design and product information would have on overall customer satisfaction and likelihood to purchase online.
Satisfaction scores for product browsing, the usefulness of product images and the thoroughness of product descriptions were quite low, while these elements had a high degree of impact on overall satisfaction and the likelihood to purchase online, to recommend the site and to return. This cause-and-effect correlation suggested the company could expect gains in online purchases, return visits and referrals by improving these areas with a new web site design and product photography. Once Danskin made the changes, customer satisfaction scores rose considerably, followed closely by a surge in online sales.
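The Danskin prioritization logic above can be sketched as a simple weighted-gap calculation: elements with low satisfaction scores but high impact on overall satisfaction rise to the top of the improvement list. The element names, scores, and weights below are illustrative assumptions, not Danskin’s actual data.

```python
elements = [
    # (element, satisfaction score 0-100, impact on overall satisfaction 0-1)
    ("product browsing",     58, 0.80),
    ("product images",       55, 0.85),
    ("product descriptions", 60, 0.75),
    ("checkout speed",       82, 0.60),
    ("site search",          78, 0.40),
]

def priority(sat, impact, target=90):
    """Rank by the satisfaction gap, weighted by impact on overall satisfaction."""
    return (target - sat) * impact

# Low-scoring, high-impact elements (images, browsing, descriptions) rank first.
ranked = sorted(elements, key=lambda e: priority(e[1], e[2]), reverse=True)
for name, sat, impact in ranked:
    print(f"{name}: priority {priority(sat, impact):.1f}")
```

The specific weighting is a design choice; the point is that the ranking combines how far an element falls short with how much it moves overall satisfaction, rather than fixing whatever scores lowest in isolation.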
Impulse: Fix the web site
Action: Start with changes that matter most to customers and influence their behavior
After you’ve identified the high-priority improvements, from your customers’ perspective, you can be more effective in taking next steps:
- Select the best internal or external partner based on customer needs and expectations.
- Manage scope creep.
- Execute the changes more efficiently.
Your metrics may indicate that you need a complete redesign. But even minor, incremental changes can have a great impact on improving the customer experience on your web site. The improvements may be as simple as addressing your inventory mix, improving vendor management, or placing highly sought-after information in a more intuitive, easy-to-find place. Don’t let partners or consultants talk you into reinventing the wheel or making unnecessary changes that aren’t backed up by reliable, scientific voice-of-customer input.
Action: Use a combination of tools to assess how effective your changes will be before you go live
There are several good ways to test, depending on the dynamics of your web site and the extent of your resources. Ideally you’d have a few things in place:
- Usability studies: Be sure to measure users who represent key audience segments on your web site. All users are not alike in their needs, expectations and perceptions, so you’ll want to represent key audiences in usability testing.
- A/B Testing: A critical success factor is using appropriate, consistent metrics to measure the success of the test site for a reliable apples-to-apples comparison. Use all your metrics so that you can compare your A site, your B site, and your baseline measurement in context. Most purchases are the result of multiple site visits, so you need to focus not only on short-term metrics (transactions, conversion, etc.) but also on long-term metrics (customer satisfaction) to truly understand the impact of site changes and get the whole picture of how customers are experiencing the updated site.
- Be flexible: Tweak your changes according to the results of your testing.
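The apples-to-apples comparison described in the A/B testing point can be sketched as follows: score the A variant, the B variant, and the pre-test baseline on the same metric set, short-term and long-term alike. All numbers here are hypothetical.

```python
# Same metric set for every variant: a short-term metric (conversion rate)
# and a long-term one (customer satisfaction score).
baseline = {"conversion": 0.021, "satisfaction": 71}
site_a   = {"conversion": 0.023, "satisfaction": 74}
site_b   = {"conversion": 0.026, "satisfaction": 69}

def lift(variant, base):
    """Percent change vs. baseline for each shared metric."""
    return {m: 100.0 * (variant[m] - base[m]) / base[m] for m in base}

for name, variant in [("A", site_a), ("B", site_b)]:
    changes = lift(variant, baseline)
    print(name, {m: round(v, 1) for m, v in changes.items()})
```

In this illustrative data, the B variant wins on short-term conversion but loses on satisfaction relative to baseline, which is exactly the situation where looking only at transactional metrics would mislead you about the long-term impact of the change.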
One of our clients, a national apparel retailer, found out through careful metrics monitoring that it needed to fix navigation. Although metrics can diagnose the problem, they don’t tell you how to fix it. This retailer addressed the navigation issue with site architecture changes, but made it worse instead of better, as was reflected in declining satisfaction scores for navigation and in overall satisfaction. Because they kept the same metrics in place throughout the process, managers were able to see immediately that they had not solved the problem, and were able to design new ways to improve navigation before going live with changes.
Impulse: Roll it out
Action: Monitor customer feedback continuously
This will be a nerve-wracking experience, depending on how extensive your changes are, but you should feel confident that you have the best information and have made decisions that will improve the success of your web site. It’s important to keep all your existing metrics in place as you go live so that you can immediately identify and correct any problem areas. It’s imperative that you continuously measure not only from a transactional and internal IT perspective, but also from a customer satisfaction perspective, to provide insight into why improvements may or may not be working.