Conversion Rate Optimisation (CRO) is the process of understanding the pain points along a conversion path (e.g. from click to sale, lead or other important business metric) and forming test hypotheses that drive incremental growth.
(Originally written in 2015 but still applicable today).
If you analyse incoming traffic sources, device usage and perhaps other segments like country of visit, recency, bounce and exit rates via your analytics tool, you’ll quickly discover some interesting data points.
Perhaps your bounce rate on the homepage is higher than the oft-cited 30% benchmark? Maybe it is much lower on desktop devices and much higher on mobile? Or perhaps visitors from certain countries don’t progress beyond the homepage, compared to others?
Perhaps those coming from organic search are more likely to view products than from other traffic sources?
Whatever the insights and the conclusions you draw, your ultimate goal concerning conversion rates is improvement. This could be a conversion from homepage to product page, or from homepage to purchase completion. The point is, your single-minded focus will likely be on CRO metrics.
For those more experienced, you’ll add layers of user testing, focus groups, customer calls, heat maps, session replays and many others to give you deeper and more meaningful insights, if not the answers to creating better conversion rates.
Let me explain my story and how, despite being experienced in CRO, my purchase of a hotel room through very good CRO and site optimisation practices actually left me quite upset with the resulting stay…
Setting the scene
I recently booked a hotel room online using a popular booking website. The site will go unnamed as it isn’t important to know which for the sake of this autopsy. It is a booking website which has implemented the widest array of conversion rate techniques I’ve seen. It is very impressive.
It was the first time I’d purchased from the site, clicking through via a paid search ad to a landing page with hotels in San Francisco.
The tools available and nicely laid out design enabled me to find decent looking hotels fairly quickly. I picked up on a particular hotel with a good rating (4 out of 5 I believe from their own ratings, 3 out 5 on TripAdvisor). There were useful little prompts around the page, lots of information and in particular the hotel photography looked great.
After reviewing other hotels around the same price point and star rating I decided to go ahead with the hotel I initially looked at. Why? It was close to the sea and accessible to the main town, had good transport links, pretty good reviews and although a little more than what I was prepared to pay, the photography made it look so very nice.
It was a short business trip and I arrived at the hotel which admittedly already felt like it wasn’t in the nicest of locations. Nevertheless the hotel looked okay from the outside. The foyer area was nicely decorated however this is when the reality sank in.
There was a pungent smell of Chinese food in the corridor, and dated decor beyond the foyer. It turned out there was a Chinese restaurant situated on the ground floor of the hotel; indeed, breakfast and dinner (included in the stay) were served there. Everyone loves Chinese food, but it’s not the smell you want to wake up to, especially since the rooms surrounded an open area from which the smell would waft in.
There was a communal room, unused with dated furniture and a kind of ‘old’ smell about it. The elevator was small, noisy and actually felt like it was ready for retirement. The outside of the rooms made it feel more like a motel than a hotel. Inside the room the smell felt quite clinical, with a strong chemical whiff, like a freshly disinfected hospital bed.
The bedding was quite worn, almost as much as the carpets and furniture. The decor was outdated, with no AC (we’re talking the sunny US West Coast here), just a ceiling fan. You get the picture.
So what went wrong in my expectations against what I booked?
The CRO process
In CRO, it’s important to break your metrics down into macro- and micro-level goals, which might include clicking an ad (i.e. click-through rate, CTR, from view to click) or purchasing an item (involving many more clicks and thus conversion points).
Analysing a typical view of the customer journey, there are a few ways to squeeze more sales further down the funnel.
A popular method is to increase the width of the funnel through acquiring more traffic. Another option is to reduce people exiting from the lower portion of the funnel, likely your checkout or sign-up page. Both are aimed at pushing more customers down toward your end conversion goal of more leads or sales.
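To make the funnel maths concrete, here is a minimal sketch of step-to-step conversion analysis. The stage names and user counts are entirely hypothetical, invented for illustration:

```javascript
// Sketch of funnel drop-off analysis using hypothetical stage counts.
// The stages and numbers below are illustrative, not from any real dataset.
const funnel = [
  { stage: "ad view", users: 100000 },
  { stage: "landing page", users: 4000 },
  { stage: "product page", users: 1800 },
  { stage: "checkout", users: 400 },
  { stage: "purchase", users: 120 },
];

// Step-to-step conversion rate shows where the funnel leaks most.
function stepRates(stages) {
  return stages.slice(1).map((s, i) => ({
    from: stages[i].stage,
    to: s.stage,
    rate: s.users / stages[i].users,
  }));
}

const rates = stepRates(funnel);
// The weakest step is the prime candidate for a CRO hypothesis.
const weakest = rates.reduce((a, b) => (a.rate < b.rate ? a : b));
```

Widening the top of the funnel (more traffic) and plugging the leakiest step (a better checkout, say) are just two levers on the same set of numbers.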
So it would stand to reason that at every customer touchpoint, from seeing an ad through to the landing page and each subsequent click through your site, the funnel should be optimised to enable you to glide through your journey as a bird soars the skies.
Each area of the funnel has its own methods of uncovering insights and building hypotheses and test plans to help create more conversions. You might run some face to face usability tests to gather insights, record and replay user sessions on your site, dive into analytics, load some heat maps or remote user testing.
CRO as a framework and set of processes can create the environment for good insights, hypotheses and tests. Yet that’s not all there is to it.
Avoid the CRO turbulence
Picture this – you want to buy a car and have a budget in mind. You search online, find the make and model that interests you then visit your local dealer to find out more about the car.
The dealer is an experienced salesman with monthly sales targets. What’s going through his mind as you approach? Possibly hitting his end-of-month revenue target and what he’ll do with the extra cash that month.
He’s probably not thinking about your financial situation and preferences as much as you are.
And this for me is the CRO sucker punch: placing over-emphasis on the quick sale or sign-up to the detriment of customer satisfaction. That is, satisfaction after the sale is complete and the purchased item has been consumed, received or otherwise utilised.
CRO has become a game of science, insights and intellect. There are studies on how the brain reacts to colours and images, of how the right persuasive copy can lead the user anywhere, insights on design languages and Z or F layouts, button placements, photography, font sizes and styles. But if you’re working tirelessly on improving your site’s conversion rate at every opportunity, where do you draw the moral line on whether you’re misleading the consumer?
Digital marketing (under which I’m placing CRO for convenience) is often far removed from the product/merchandising and ops teams. ‘We’ bring the traffic and convert it, they ensure it’s in stock and deliver it. Who owns the customer? That’s covered by the Customer Services (CS) team, right? Not entirely.
If you believe CS own the customer, then you’re missing the fact that customers contacting CS, either pre- or post-purchase, most likely have questions on things you couldn’t answer online.
Measure customer satisfaction
By collecting feedback on the customer’s experience throughout the process, you can rein in your CRO mastery to ensure your scientific experimentation doesn’t inadvertently create brand damage. There are a few options here:
- Run an exit survey to track and monitor satisfaction with aspects of the experience such as design, usability and information
- Survey customers on the order confirmation page to measure overall satisfaction with their experience
- Send a Net Promoter Score (NPS) survey a week after the item has been delivered or, in the case of a holiday, after they have arrived back
- Measure customer service enquiries for volume and topic
The intent is to create a benchmark to track CRO outcomes against. You will want to maintain a positive level of feedback through the experience which CRO analytics will not be aware of, especially if it affects the customer after purchasing. The impact of your CRO work does not end with a purchase.
The following techniques are useful to benchmark on a regular basis to look at the wider impact of your CRO programme on site accessibility, usability, design, functionality and, most importantly, its direct impact on the user’s experience of your brand or business.
1. Exit surveys
Exit surveys have taken a back seat on many insights programmes in favour of the quick single question formats introduced by the likes of Qualaroo (formerly Kiss Insights) and WebEngage. However they still have a part to play.
The mechanic is to pop up a notification while the user is on your website, asking if they would like to participate in a survey after they have finished browsing. The survey is then presented either in a new tab/window for them to respond to later, or, using ‘exit intent’, it can be displayed once the user has moved their cursor toward the browser’s toolbar (this doesn’t work on touch-only devices such as mobiles and tablets).
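The exit-intent trigger described above can be sketched in a few lines. The pixel threshold and the hypothetical `showSurveyPrompt` function are assumptions for illustration, not a standard implementation:

```javascript
// Minimal exit-intent sketch. The 10px threshold is an illustrative
// assumption; real tools tune this value and add cooldowns.
const EXIT_THRESHOLD_PX = 10; // cursor within 10px of the viewport top

// Pure check so the logic is testable outside a browser: fires only when
// the cursor is near the top edge AND moving upward (toward the toolbar).
function isExitIntent(clientY, movementY) {
  return clientY <= EXIT_THRESHOLD_PX && movementY < 0;
}

// Browser wiring (no effect on touch-only devices, which have no cursor).
if (typeof document !== "undefined") {
  let shown = false;
  document.addEventListener("mousemove", (e) => {
    if (!shown && isExitIntent(e.clientY, e.movementY)) {
      shown = true; // show the survey invitation once per page view
      // showSurveyPrompt(); // hypothetical function that opens the survey
    }
  });
}
```

Keeping the decision in a pure function makes the trigger easy to unit test and to tune without touching the event wiring.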
The exit survey is useful not only to screen users but also to create an ongoing gauge of satisfaction with aspects of the user journey, such as design, functionality and site speed, as well as understanding why they left if they didn’t purchase or sign up.
2. Confirmation page surveys
These are similar to the exit survey, except you’re targeting your customers – those who have purchased or signed up. The exit survey will primarily cover those who have not converted (likely the majority of your users), but there is also merit in gaining similar feedback from customers: how did they find the checkout/sign-up page? Were there any barriers? What persuaded them to buy/sign up?
You can segment these responses against those from the exit survey, if running both in parallel, to understand what impact your existing CRO tests are having when measured over time.
3. Net Promoter Score survey
Net Promoter Score (NPS) is a unified measure of customer satisfaction based on whether a customer would recommend you, or not, to friends and family.
The idea is that you can measure your brand advocacy over time, and there is a good relationship between business success and NPS. NPS ranges from -100 to +100 (a swing of 200 points) and is calculated by asking customers to score you from 0 to 10, then subtracting the percentage of Detractors (those scoring 0 to 6) from the percentage of Promoters (those scoring 9 or 10). In this model a 7 or 8 is on the fence (a Passive).
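The calculation is simple enough to sketch, using the standard NPS thresholds (Promoters score 9–10, Detractors 0–6). The sample scores are made up for illustration:

```javascript
// NPS from raw 0-10 survey scores. The scores below are made-up sample data.
function netPromoterScore(scores) {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  // Result is a whole number between -100 and +100.
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// 5 promoters, 2 passives (8, 7), 3 detractors → (5 - 3) / 10 = +20
const sample = [10, 9, 9, 8, 7, 6, 3, 10, 2, 9];
const nps = netPromoterScore(sample); // → 20
```

Tracking this number per segment (new vs returning, country, channel) is where it earns its keep.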
NPS is best measured once the user has received the goods or started using your service, so they know what they’ve bought into. The NPS score is your gauge of overall satisfaction.
If a CRO test led to many people being mis-sold a product or service, you can bet your NPS will plummet. As with everything, segmenting your NPS will give you the most insight (new vs returning customers, by country, sales channel etc).
4. Measure enquiries to customer services
Another important measure, and arguably one you want to reduce, is the number of enquiries to your CS teams, whether calls or emails.
A good website experience caters to most of the user’s needs throughout the journey. It’s important to create a feedback loop with your CS team and keep them involved in CRO test planning, as they’re likely on the front line and understand customer pain points better than anyone.
It may not form part of your CRO goal to reduce CS enquiries (unless you’re optimising your FAQs to support conversion), however the CS team are often hit with enquiries directly related to your CRO tests. Perhaps you’re A/B testing the payment page and a customer is seeing something different to the CS advisor?
Make sure the team are fully aware of tests and how to replicate the same versions customers are seeing.
An interesting measure is to monitor not only the impact on CS volumes but also the type of enquiries coming in. Perhaps you’re generating more support requests due to a new process you’re testing on variant C? Maybe it only affects returning customers/visitors? Keep CS close to your CRO programme.
And so I wondered, looking from the outside at a company investing heavily in CRO, whether they were aware of this one customer that had a bad experience because the site optimisation was so good?
I’m yet to receive a ‘review this’ email to give them my feedback (it may have gone to the junk folder, in which case they have other problems to sort out). How many of your customers may be churning after one bad experience where they felt mis-sold, but weren’t compelled enough to let you know?
In my case the hotel actually reimbursed the stay on the grounds of being mis-sold and said they’d discuss with the booking website which was listing them.
If you don’t have the mechanisms in place to measure the end-to-end customer experience then you may never know why customers don’t come back and ultimately pay the price in future growth. Are you taming your CRO beast?
This post originally appeared on econsultancy.com in 2015 after a business trip to San Francisco made a nice case study on good and bad CRO.