Easy-to-understand metrics are appealing, but they may not help e-retailers maximize profits.
Our businesses are complex. Our web sites are complex. Our customers are complex. The combination of the three is incredibly complex. So why do we seem to be on an endless quest to measure e-commerce in the simplest possible manner?
Don’t get me wrong. I understand the appeal of simplicity, especially when you have to communicate up the corporate ladder. But while the allure of a simple metric is strong, I fear overly simplified metrics are not useful. If we create a metric that’s easy to calculate but not reliable, we run the risk of endless analysis spent managing to a metric that has no real cause-and-effect relationship with our financial success.
To be clear, I’m not proposing more complicated metrics, but better quality metrics. By taking the time to find meaningful metrics, we can reduce the volume of data we regularly manage in favor of focusing on fewer metrics that have more value. Such metrics will likely require more complicated analyses, but accurate, actionable information is worth a bit of complexity. And quality metrics based on complex analyses can still be expressed simply.
Let’s consider two popular retail metrics that may not be all that they seem.
1. Site Conversion Rate
Conversion is a metric that has been used in physical retail for a long time, and it makes good sense in stores where the overwhelming purpose is to sell products to customers on their current visit. But e-commerce sites have always been about more than the Buy button, and they’re becoming more all-purpose every day. Retail web sites are marketing and merchandising vehicles, brand builders, customer research tools, and sales drivers, both in-store and online.
Given the multiple purposes of our retail sites, focusing on a metric that covers only one purpose not only understates the total value of our sites, it also causes us to churn unnecessarily when features or marketing programs that drive higher traffic, and accomplish important goals such as engaging customers and driving them to physical stores, don’t convert to an online purchase on that particular day.
We still need to track the sales-generating capabilities of our web sites, but we want a metric that actually measures our ability, or inability, to convert the portion of our sites’ traffic that came to buy.
When I was at Borders Group Inc. as head of e-commerce, we used our web site for many purposes, and not surprisingly we found that changes in site conversion rate didn’t have much to do with changes in sales. If we wanted a metric that tracked our selling success, we needed to focus on the type of traffic that likely came with intentions to buy (or at least to eliminate those visitors who came for other reasons).
We knew through our ForeSee Results customer satisfaction analyses that customers who came to buy on that visit were only a percentage of our total visitors; the rest came for other reasons, such as researching products, finding stores and checking store inventory.
To focus on the visitors with purchase intentions, we came up with a metric we called “true conversion.” By trial and error we arrived at the following formula, which correlated well with changes in order volume: (adds-to-cart divided by product page views) multiplied by (orders divided by checkout process starts).
For example, (250 adds-to-cart divided by 1,000 product page views) times (50 orders divided by 100 checkout starts), or 0.25 times 0.50, would give a true conversion rate of 12.5% that was far more correlated with sales than a basic site conversion rate. In other words, 12.5% of shoppers with a strong intent to buy completed a purchase.
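The true conversion formula above is simple enough to sketch directly. Here is a minimal illustration in Python; the function name and the zero-traffic guard are my own, not part of the original metric definition:

```python
def true_conversion(adds_to_cart, product_page_views, orders, checkout_starts):
    """True conversion per the article's formula:
    (adds-to-cart / product page views) * (orders / checkout starts).
    """
    # Guard against division by zero on days with no qualifying traffic
    # (an assumption of this sketch, not part of the article's definition).
    if product_page_views == 0 or checkout_starts == 0:
        return 0.0
    return (adds_to_cart / product_page_views) * (orders / checkout_starts)

# The article's worked example: (250 / 1,000) * (50 / 100) = 0.25 * 0.50
rate = true_conversion(250, 1000, 50, 100)
print(f"{rate:.1%}")  # → 12.5%
```

Note that because the numerator of one ratio and the denominator of the other come from different funnel stages, the result is a composite score rather than a literal share of visitors, which is why it has to be validated against order volume as the article describes.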
We still needed to do more work matching the survey data to online shopping path analysis, but this measure of true conversion gave us an actionable conversion metric and focused us on specific areas of the site to improve for increased sales. We realized that by prioritizing improvements on these specifically measured areas of the site above other site areas, we would most directly affect our ability to improve sales.
For example, we prioritized improving product descriptions on the product page and improved our promotional capabilities in checkout over certain home page improvements, and the result was increased sales.
As shown in the graph above, which is based on actual metrics at Borders, our measure of true conversion rate (in yellow) correlated with sales (in blue) much more closely than did site-wide conversion rate (in red). (Actual figures from this graph were not available for publication.)
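The comparison the graph makes is straightforward to reproduce for any retailer’s own data by correlating each conversion series against sales. The sketch below uses entirely hypothetical weekly figures (the actual Borders numbers were not available for publication) and a standard-library Pearson correlation:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, standard library only."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly figures, for illustration only.
sales     = [100, 120, 90, 140, 110, 160]                    # orders per week
true_conv = [0.11, 0.13, 0.10, 0.15, 0.12, 0.17]             # moves with sales
site_conv = [0.030, 0.029, 0.031, 0.030, 0.028, 0.031]       # mostly flat

print(f"true conversion vs. sales: r = {pearson(sales, true_conv):.2f}")
print(f"site conversion vs. sales: r = {pearson(sales, site_conv):.2f}")
```

With data shaped like the article describes, the true conversion series yields a much higher correlation with sales than the site-wide rate, which is the visual point the graph was making.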
2. Task Completion
I love the concept of measuring whether a consumer was able to accomplish what she wanted to do when she visited a retail site. But it’s critically important that we also gather the kind of data that will help us understand why a task was or was not completed.
I worry that we sometimes ask about task completion, such as completing checkout or finding a local store address, in too simple a manner for us to gain actionable insight regarding why the task was or was not completed. Simply knowing whether or not a customer completed her intended task is useful, but if she failed to complete the task we’re left guessing about what stopped her.
We have to ask carefully selected follow-up questions to understand the obstacles she may have experienced, and we have to perform detailed analysis of actual site usage in order to identify areas for improvement on our sites. Such analysis, especially when it includes understanding customer mindset, requires complicated methodologies and can’t be made actionable by simply asking if customers completed their tasks when they visited the site.
Because our sites and customers are complicated, figuring out how to solve for the gap between intention and action requires the analysis of millions of variables, which can include a broad range of possibilities like how fast page content loads and the size and location of Buy buttons. For example, when our analysis at Borders highlighted issues with search, we followed up with a question about what would make our site search more useful. We found that using words rather than icons for some search results display options, like “cover view” or “list view,” made a significant difference in customers’ successful use of our search results.