Builder’s Warehouse – a perplexing purchase flow

I needed to buy something from Builder’s Warehouse over the weekend, and I found their purchasing flow so perplexing that I had to document it.


Step 1 of the process was that I wanted to buy an online-only product:

Screenshot 2019-10-26 at 10.12.16

A couple of comments:

  • Why would you offer the option to “select a store” for an online-only product? It only confuses people: they wonder why store availability is shown for an online-only product and start questioning whether they should be shopping with you at all.
  • “Sign in to buy” is a tough call-to-action. Someone has to want that product a great deal to sign up just for the privilege of giving you money. Most visitors will abandon at this stage unless they really love or need the product.


Step 2 – signup

I am a masochist (and I was intrigued by what would come next in the process), so I clicked through.

Screenshot 2019-10-26 at 10.12.02

More comments:

  • Adding unnecessary fields is a sure-fire way to decrease completion of your form (especially when people have to sign in to buy). Why do you need my birthdate? Are you going to steal my identity with my passport/ID number? These are not things I want to share with a brand…
  • The chutzpah of the news form. There is no option to receive no news at all, only the opportunity to opt into both email and phone spam. I know this form certainly doesn’t adhere to GDPR, and although I’m not an expert I think it also falls foul of the Protection of Personal Information Act (and, as a consumer, it’s a big conversion killer).

Step 3 – registration complete

Screenshot 2019-10-26 at 10.14.16

Now I’ve gone through all the hoops to try and purchase my product, but once I’m registered the site spits me out on a registration confirmation page and invites me back to the homepage (where I have to find my product all over again). It would have been better to save the customer a couple of clicks and return them to the product they were looking at.

All in all, a very confusing process. If anyone at Builder’s Warehouse reads this, give me a call, I can help 🙂


Rules for split testing for startups


In a startup you have limited resources and you want to get as much bang for them as possible. With that in mind, here are some rules that have worked for me:

  • Err on the side of bold testing

It’s nice to know exactly what the underlying cause of an uplift was, but if I have the opportunity to run three tests I want them to give me the biggest uplift, not the most knowledge. This means your chance of failure will be higher, but when you do win, the win will be substantial. Meek testing (e.g. button colours, CTA tweaks) might work if you’re Google or Booking.com, but with limited traffic and resources you want to focus on things that are going to move the needle for you.

  • Make falsifiable hypotheses that help you learn even if the test doesn’t win

As a corollary to the above, you want to learn as much as possible from every test. I like the hypothesis format:

As a result of [evidence]

We believe that [these changes] will result in [this effect]

We will measure this through [this metric]

If your hypothesis yields a learning either way, it becomes easier to create better future tests and iterate on your testing.

  • Each test should run for a minimum of two weeks

If you run your test for less than two weeks you won’t get enough traffic, and your results may be skewed by day-parting effects (i.e. they may reflect how people behave on a weekend or a weekday rather than across a full cycle). This also depends on the amount of traffic you have; you may need to run a test for months on low-traffic pages (which raises the question of why you are bothering to test that page at all, see the “test the most important things” point below).
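As a rough sketch of why duration matters: the standard normal-approximation sample-size formula ties the traffic you need to your baseline rate and the uplift you want to detect. The numbers below are hypothetical, and real tools apply their own variations of this:

```python
import math

def sample_size_per_variant(base_rate, rel_mde, alpha_z=1.96, power_z=0.84):
    """Approximate visitors needed per variant for a two-proportion test.

    base_rate: control conversion rate (e.g. 0.03 for 3%)
    rel_mde:   relative uplift you want to detect (e.g. 0.20 for +20%)
    Defaults give roughly 95% confidence (two-sided) and 80% power.
    """
    p1 = base_rate
    p2 = base_rate * (1 + rel_mde)
    pooled = (p1 + p2) / 2
    numerator = (alpha_z * math.sqrt(2 * pooled * (1 - pooled))
                 + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 3% baseline and a +20% target needs roughly 14,000 visitors per
# variant -- so a page sending 1,000 visitors a day to each variant
# needs about two weeks just to reach a defensible sample.
print(sample_size_per_variant(0.03, 0.20))
```

Plugging your own traffic numbers into this makes the trade-off concrete: on a low-traffic page the required run length quickly stretches to months.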

  • If after a month the variant has a significance level of less than 65%, kill the test

You need to keep a decent testing velocity. If the variant is only marginally better than the control, it doesn’t look like it’s going to be a winner, and it’s not worth continuing the test and wasting the slot.

  • Use a significance level of at least 75%

All the tools tell you to test to 95% confidence. If this is your payment page or another crucial page then I’d agree it’s worth testing to a high significance level. On less critical pages, however, you have the challenge of getting enough traffic while keeping a pipeline of things to test. In that case using a lower significance level is justified, especially if it’s backed up by:

– performance over time

It’s also interesting to look at how the test has performed over time. If the variation has been winning consistently throughout, you can reasonably assume it will continue winning.

– secondary metrics/micro-conversions

  • Aim for micro-conversions (i.e. next step in process) as opposed to sales/far away goals

Conversions and revenue are your obvious goals, but often you don’t have enough traffic or enough conversions to reach statistical significance on them. Micro-conversions can help here: clicks on “Add to basket”, reaching the basket, or the next step after the element you’re testing add further colour to the picture and are easier to reach significance on than metrics further down the funnel. They can serve as proxies for conversion, especially if you have a few of them to review alongside your main metrics.

  • Test the most important things

Don’t test your about-us page or other pages that have limited impact on the purchase journey. Elements like sitewide navigation, signup and checkout affect everyone on the site, and so offer a much larger opportunity to move the needle when it comes to achieving meaningful increases.

  • Have a roadmap

Having a number of tests planned for each slot helps; then you don’t have to go back and puzzle out a new test every time you conclude one. If you invest the time in having one or two tests scheduled for each slot, your process runs much more smoothly.

These are some elements that have helped me keep a healthy test trajectory with limited resources.

Quick customer insights hack


Retention is a big initiative at Now Novel at the moment and I’ve been looking at speaking to paying users to understand motivation and how to improve our onboarding process.

I have been sending the below message through Intercom to paying customers of our non-coached programs to try and speak to them. They get the opportunity to schedule a meeting with me on Calendly and then we chat.

Screen Shot 2019-06-07 at 08.44.15

  • It’s been very effective for a set-it-and-forget-it process (and more effective as an email than as an in-app message). My agenda is very loose for these calls:
    • I introduce myself
    • understand their challenges/progress
    • explain the product (while trying not to upsell too much 🙂 )
    • let them know they can always get in touch


    It’s been very useful and I’m going to continue doing it. It really connects with people that you’re making the effort to speak in person (which isn’t particularly scalable, but it’s nice to make the offer anyway). You understand where the gaps are in your onboarding process (we have a couple of features which aren’t as obvious to find, and it seems people aren’t finding them), and you get feedback from people who are motivated but are finding inconsistencies in the process.

    It’s also fun speaking to a wide range of people with different backgrounds and motivations. We had a lawyer who purchased and was based in Cape Town, up the road from me, so I went in and spent some time with him in his office rather than speaking on the phone.

    As a continuous feedback loop, this is an easy process and one I recommend.

The importance of high velocity testing


I’m a big proponent of high-velocity testing, and believe the more tests you run the more effective you are. This doesn’t mean throw-it-at-the-wall tests; they still need to be grounded in research, with a decent hypothesis that you validate. But the more you run, the more effective your testing programme will be:

1. the more you test, the more you learn

For every test you want a falsifiable hypothesis. This gives you the ability to achieve a learning whether your test wins or not. More learning increases your chance of your test being a winner the next time around.

2. the more tests you run, the more winners you will get

It sounds obvious, but increasing volume means more tests, and more tests mean more winners overall (though perhaps not a higher win rate). There is no point in running a testing programme where you run one test a month and get a winner every two months. That’s dispiriting for the team and doesn’t add value to your bottom line.

3. wins are compounded

Four winning tests with a 5% uplift each don’t add up to a 20% uplift; the wins compound, and work out to roughly a 21.6% uplift.
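The arithmetic is easy to check, since the uplifts multiply rather than add:

```python
# Four sequential winners at +5% each multiply rather than add.
uplift = 1.0
for _ in range(4):
    uplift *= 1.05

print(f"{(uplift - 1) * 100:.1f}%")  # 21.6%, not 20%
```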

Your processes should stay consistent in order to achieve this:

  • Look at the slots you have on your site to fill (e.g. for SaaS you’ll have the homepage, signup page, first user experience funnel, payment page etc.)
  • Ensure that you have a wealth of data around visitor behaviour and challenges; you need to be able to generate a lot of hypotheses
  • Generate these hypotheses, wireframe, and get tests ready for each slot (you need one or two ready to go in order to be able to switch out quickly)
  • Be ruthless with your testing; don’t be afraid to kill things that don’t look like winners (the other perspective is that your hypothesis may be right but the execution wrong, in which case further iteration may be required)
  • It can be useful to try smaller, developmental test ideas here: not red-button/green-button tests, but tests focussed on value proposition or microcopy that don’t require a lot of development

Although it takes a lot more resources to increase your testing velocity, the results are worth it in terms of wins and learnings.

Amazon’s dark patterns (and a light one)

As I’m based in South Africa I don’t shop on Amazon that frequently, so it’s always interesting to look at how they manage their site. I know they do a lot of testing; in 2011 they were already running 7,000 tests a year, and the richest man in the world said:

“Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day…” – Jeff Bezos, CEO at Amazon

I’m always surprised at how aggressively they try and upgrade you to Prime. On my most recent purchase you can see the interstitial they used to try and upsell me. In order to not take the Prime offer I have to use an unobtrusive text link rather than the obvious yellow button.

screen shot 2019-01-20 at 10.04.57

I would worry about the effect of all this dark-pattern Prime upselling on the customer experience. I think it’s a little sleazy and underhand, but at the same time it doesn’t dissuade me from buying from Amazon (and if you have Prime you stick around forever, so it’s great for retention). Looking back at the last time I purchased from Amazon and somehow ended up signed up for Prime, they had some strange ways of trying to make me stay.


doavk12w0aermvl

Things I find notable about this:

  • making people click “I do not want my benefits” to cancel
  • making cancellation the least intuitive of four buttons on the page
  • the phrase “Unlimited One-day delivery: Direct to your door” packing in so many benefits
  • the red vs green type (and the fact that it’s £0.00, not £0)

Finally, here is a nice piece of UI from Amazon for products I’ve previously purchased. It tells me that I last purchased the item and when. It’s easy, and it lets me shortcut choosing my product, which (for me at least) reduced choice friction and hastened my conversion.

screen shot 2019-01-20 at 09.54.42

Signup page case study – +14% conversion

As part of the Now Novel first user experience we are constantly testing the signup process. You’ll see our thinking from a few years ago here.

We arrived at the last version of our signup page through testing, but I didn’t think it was great. There is a lot of explanatory content on the page, but it’s mostly superfluous and distracting. Rather than funnelling people towards conversion, it is doing the job that a page further up the funnel should be doing; it’s more like a landing page than a pure signup page. Don’t get me started on the FAQs, which I think cause more doubt than they counter.


control


Looking at heatmaps we could see that people weren’t scrolling the page; GA showed a large amount of drop-off; and single-question survey feedback surfaced a lot of concerns about privacy and about whether people would have to pay.

In terms of evidence to change the page we used the following:

  • Most people were coming from the homepage, so they already had an understanding of the service (which means all of this additional information was superfluous).
  • We have a lot of information about the benefits from paying users using long form surveys that we could incorporate on the page.
  • We had jobs-to-be-done research and understood what job we were being hired for.
  • Single question surveys and customer interviews gave us an idea about the concerns people had (concerns were mostly around their ability, cost and whether it would work).

This allowed us to put together a short page that countered objections, with a headline that set an expectation for the future.

screencapture-localhost-3000-users-register-2019-01-28-10_15_05

We ran the above version of the page for a month with inconclusive results. The feedback from visitors, gathered through single-question surveys on the page, was that they were worried about:

  • whether they had to pay
  • if they could trust the service

With our next incarnation of the page we iterated a little further.

We cleaned up the expectation around “What do I get?” a little, but the major change was that we added a large compelling testimonial to help give people reassurance that this was a trustworthy service.

test

This test was a wholehearted success, with a 14% increase in conversion at 93% significance.

The major learning it gave us was that a strategic testimonial can give a lot more trust and increase conversion. We’ll be testing this in other areas of the site.

3 product development tips for 2019

Last year I had some good results with some of the (many) product development tactics I tried, and thought I would share them, as I did with the CRO tips I shared.

  1. With Now Novel we consistently try to understand and test the optimum first user experience, and we spent some time this year trying to perfect our onboarding flow. One of the first things we looked at was understanding the product through a jobs-to-be-done lens. Intercom have written a lot about this and their book is fairly useful. We undertook more surveys and spoke to customers to understand their motivations and intentions a little better. What we found was that there were three distinct groups:
  • people who wanted to start writing
  • people who had something and wanted to progress, and
  • people who wanted to learn about the craft of writing

Armed with this information we could construct an onboarding flow that catered to our prospective clients’ jobs-to-be-done, improving their satisfaction by exposing them to the relevant part of the product and ultimately increasing conversion. Very useful in this was Samuel Hulick’s onboarding book (you may recognise him from his zany first-time-onboarding teardowns).

The above image is a powerful distillation of that book (and the idea of onboarding)

2. Actually asking for the sale is something I haven’t done enough of. You assume it’s implicit that your product is for sale and that you’re guiding people to purchase, but with freemium there are a lot of opportunities to reiterate and improve your selling. Reading this article was illuminating in that it showed all the options for cross-selling and a methodical process for achieving it. We’ve started integrating nudges (step 1 is a simple header encouraging unpaid members to purchase), which has accounted for 13% of all sales since we launched it. We have a few more planned for very soon; the one I am most interested in is the freemium pop-up: as you reach the border of the freemium offering, you get a pop-up that outlines the benefits of paid membership. The line to walk is how many of these you can use to encourage conversion without pissing your audience off too much (great copy and a clearly communicated value proposition are key here).

3. Pricing is often seen as more art than science, so we undertook some research to add some science to ours. We asked about the benefits of our product and what price would be:

  • too expensive
  • getting expensive
  • a good deal
  • too cheap

and then plotted all of the responses on a graph.

screen shot 2019-01-23 at 05.45.29

The area between the lines is the range within which you can price (in the example above, between $27 and $43). This gives you a much more informed perspective on where to price your product.
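This is the Van Westendorp price sensitivity approach, and a simplified version of the analysis can be sketched in code. The survey responses below are invented, and the full method looks at the crossing points of four cumulative curves rather than the rough 50% cut-offs used here:

```python
def frac_too_cheap(price, too_cheap):
    """Share of respondents who would find this price suspiciously cheap."""
    return sum(t >= price for t in too_cheap) / len(too_cheap)

def frac_too_expensive(price, too_expensive):
    """Share of respondents who would find this price too expensive."""
    return sum(t <= price for t in too_expensive) / len(too_expensive)

# Hypothetical "too cheap" / "too expensive" answers from five respondents.
too_cheap = [15, 18, 20, 22, 25]
too_expensive = [43, 45, 48, 50, 55]

# Acceptable range: prices fewer than half the respondents consider
# suspiciously cheap and fewer than half consider too expensive.
acceptable = [p for p in range(10, 60)
              if frac_too_cheap(p, too_cheap) < 0.5
              and frac_too_expensive(p, too_expensive) < 0.5]

print(acceptable[0], acceptable[-1])  # the rough pricing range
```

Even this rough version turns a pile of survey answers into a concrete floor and ceiling to price within.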