Feature request: Intercom analytics

We’ve been using Intercom for Now Novel since 2012 and have watched it turn into the behemoth it currently is. It is a great tool that helps us communicate personally at scale.

Intercom doesn’t leave a lot of surplus value on the table for customers. They know the value of their service and people’s propensity to pay, and they aren’t afraid to map those two quite closely to each other (i.e. it’s not cheap :)). User-level analytics could be an additional service that adds real value to their offering.

If I look at the tools that we use to get insights from our customers (mostly Hotjar and Google Analytics), we don’t have a tool that focuses on users. Google Analytics is a great tool, but it’s a little hamstrung by being session-based rather than user-based. Sessions are interesting, but at a user level you see much more of the user journey: which portions of the site an individual interacted with, and when.

We’ve already set up Intercom with a lot of information that would be great for understanding our users further. Our integration has specific messages set up at key points in the customer journey (e.g. at the end of onboarding, or at payment junctures), which means the tagging part of the equation is already sorted out. Being able to interrogate these events would be very helpful. We have tried to use this data before, but it isn’t accessible in any easy way: you have to go into each user and manually look at their progress, and you can’t, for example, export a CSV of all of a user’s activities.

I’d ideally like to be able to look at a tool that would allow me to:

  • look at users over a specified time period
  • who have achieved a specific aim or goal (which would be one of my triggers) and
  • examine the data around these criteria

The data around these criteria could be other goals, visits, length of time, communication acted upon, etc. This gets even more interesting when you can import and export this information to and from other tools (e.g. importing my Google Optimize test variation into Intercom to segment on how a specific test performs, or viewing more information in GA about a segment I have created in Intercom).
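As a sketch of the kind of query I’m after, assuming a hypothetical per-event CSV export (the `user_id`/`event`/`timestamp` columns are invented for illustration, since Intercom doesn’t offer this export today):

```python
import csv
import io
from datetime import datetime

def users_with_goal(csv_text, goal, start, end):
    """Return the ids of users who hit `goal` between `start` and `end`."""
    hits = set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.fromisoformat(row["timestamp"])
        if row["event"] == goal and start <= ts <= end:
            hits.add(row["user_id"])
    return hits

# Invented export data: one row per user event
sample = """user_id,event,timestamp
u1,completed_onboarding,2020-01-05T10:00:00
u2,completed_onboarding,2020-02-20T09:30:00
u1,payment,2020-01-06T12:00:00
"""

january = users_with_goal(sample, "completed_onboarding",
                          datetime(2020, 1, 1), datetime(2020, 1, 31))
print(sorted(january))  # → ['u1']
```

From a set of user ids like this, joining in the “data around the criteria” (other goals, visits, messages acted on) would just be more columns in the export.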

Intercom must have thought of this already, so when they need an early beta-tester, count me in.

Books read 2020

  1. Lean Customer Development by Cindy Alvarez

My first completed book of the year was very useful. I’ve read a fair amount in this area (the book references The Lean Startup and Steve Blank), but this was a useful addition to the canon. The best thing about this book was its practical, tactical approach. It teaches you the theory, but then shows you exact methods for implementing your customer learning process and for quantifying and using your learnings. There is immediate practical value I am taking from it to use in my process.

Builders Warehouse: a perplexing purchase flow

I needed to buy something from Builder’s Warehouse on the weekend and I found their purchasing flow so perplexing that I had to document it.


Step 1 of the process was that I wanted to buy an online-only product:

Screenshot 2019-10-26 at 10.12.16

A couple of comments:

  • Why would you have the option to “select a store” if it is an online-only product? It only confuses people, who wonder why there is store availability for an online-only product and start to question whether they should be shopping with you at all.
  • “Sign in to buy” is a tough call-to-action. Someone has to have a lot of motivation to sign up just for the pleasure of giving you money. Most visitors would disappear at this stage if they didn’t really love/need that product.


Step 2- signup

I am a masochist (and I was intrigued by what would come next in the process), so I clicked through.

Screenshot 2019-10-26 at 10.12.02

More comments:

  • Adding unnecessary fields is a sure-fire way to decrease completion of your form (especially if you have to sign in to buy). Why do you need my birthdate? Are you going to steal my identity with my passport/ID number? These are not things I want to share with a brand…
  • The chutzpah of the news form. No option to receive no news at all, but an opportunity to choose both email and phone spam. This form certainly doesn’t adhere to GDPR, and although I’m not an expert I think it also falls foul of the Protection of Personal Information Act (and as a consumer it’s a big conversion killer).

Step 3- registration complete

Screenshot 2019-10-26 at 10.14.16

Now I’ve jumped through all the hoops to try and purchase my product, but once I’m registered it spits me out on a registration confirmation page and invites me back to the homepage (where I have to find my product again). It would have been easier to save the consumer a couple of clicks and put them back on the product page they were looking at.

All in all, a very confusing process. If anyone at Builders Warehouse reads this, give me a call, I can help 🙂


Rules for split testing for startups


In a startup you have limited resources and you want to get as much bang as possible for them. As a result, I wanted to recommend some rules that have worked for me:

  • Err on the side of bold testing

It’s nice to know exactly what the underlying cause of an uplift was, but if I have the opportunity to run three tests I want them to give me the biggest uplift, not the most knowledge. This means your opportunity for failure will be higher, but when you do win, the win will be a big one. Meek testing (e.g. button colours, CTA buttons) might work if you’re Google or booking.com, but with limited traffic and resources you want to focus on things that are going to move the needle for you.

  • Make falsifiable hypotheses which will help you learn even if the test doesn’t win

As a corollary to the above you want to learn as much as possible from every test. I like the hypothesis format:

As a result of [this evidence]

We believe that [these changes] will result in [this effect]

We will measure this through [this metric]

If your hypothesis gives you a learning, it will be easier to create better future tests and iterate on your testing.

  • Each test should run for a minimum of two weeks.

If you run your test for less than two weeks you won’t get enough traffic, and you won’t capture day-parting effects (i.e. your results may skew towards how people behave on a weekend or a weekday). This also depends on the amount of traffic you have: on low-traffic pages you may need to run a test for months (which raises the question of why you are bothering to test that page at all; see the “test the most important things” point below).
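A quick way to sanity-check whether two weeks is even feasible is the common rule of thumb that a 50/50 test needs roughly 16·p(1−p)/δ² visitors per variant to detect an absolute lift δ (at about 95% significance and 80% power). A minimal sketch, with invented traffic numbers:

```python
def days_needed(daily_visitors, base_rate, relative_uplift):
    """Rough test duration for a 50/50 split, using the
    ~16*p*(1-p)/delta^2 visitors-per-variant rule of thumb
    (approx. 95% significance, 80% power)."""
    delta = base_rate * relative_uplift        # absolute lift to detect
    per_variant = 16 * base_rate * (1 - base_rate) / delta ** 2
    return 2 * per_variant / daily_visitors

# 500 visitors/day, 3% baseline conversion, hoping for a 20% relative lift
print(round(days_needed(500, 0.03, 0.20)))  # → 52 (nearly two months)
```

Which is exactly the low-traffic trap: at 500 visitors a day you’d be waiting close to two months for one answer, so the page probably isn’t worth a slot.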

  • If after a month the variant has a significance level of less than 65%, kill the test

You need to keep a decent testing velocity. If the variant is only slightly better than the control, it doesn’t look like it’s going to be a winner, and it’s not worth continuing the test and wasting the slot.

  • Significance level of at least 75%

All the tools tell you to test to 95% confidence. If this is your payment page or another crucial page, then I’d agree it’s worth testing to a high significance; on smaller pages, however, you have the challenge of getting enough traffic while having a number of other things to test in your pipeline. In that case a lower significance level is justified, especially if it’s backed up by:

– performance over time

It’s also interesting to look at how the test has performed over time. If the variation has been winning consistently throughout the test, you can reasonably assume it will continue to win.

– secondary metrics/micro conversions
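As a sketch of where those significance numbers come from, the confidence most tools report can be approximated with a two-proportion z-test; the visitor and conversion counts below are invented:

```python
from math import erf, sqrt

def significance(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that variant B beats control A,
    via a two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))        # normal CDF of z

# 1,000 visitors per arm; control converts 30, variant converts 38
print(round(significance(30, 1000, 38, 1000), 2))  # → 0.84
```

At 84% this variant clears the 75% bar above even though it’s nowhere near the textbook 95%; by the kill rule, a variant stuck below 65% after a month would lose its slot.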

  • Aim for micro-conversions (i.e. next step in process) as opposed to sales/far away goals

Conversions and revenue are your obvious goals, but often you don’t have enough traffic or enough conversions to reach statistical significance on them. Micro-conversions can help here: clicks on “Add to basket”, reaching the basket, or the next step after the element you’re testing add further colour to the picture and are easier to reach significance on than metrics further down the funnel. These can serve as proxies for conversion, especially if you have a few of them to review in addition to your main metrics.

  • Test the most important things

Don’t test your about-us page or other pages that have a limited impact on the purchase journey. Elements like sitewide navigation, signup and checkout impact everyone on the site, and so offer a much larger opportunity to move the needle when it comes to achieving meaningful increases.

  • Have a roadmap

Having a number of tests planned for each slot helps: you don’t have to go back and puzzle out a new test every time you conclude one. If you invest the time in having one or two tests scheduled for each slot, your process works much more smoothly.

These are some elements that have helped me keep a healthy test trajectory with limited resources.

Quick customer insights hack


Retention is a big initiative at Now Novel at the moment and I’ve been looking at speaking to paying users to understand motivation and how to improve our onboarding process.

I have been sending the below message through Intercom to paying customers of our non-coached programs to try and speak to them. They get the opportunity to schedule a meeting with me on Calendly and then we chat.

Screen Shot 2019-06-07 at 08.44.15

It’s been very effective for a set-it-and-forget-it process (and more effective as an email than as an in-app message). My agenda is very loose for these calls:

  • I introduce myself
  • understand their challenges/progress
  • explain the product (while trying not to upsell too much 🙂 )
  • let them know they can always get in touch

It’s been very useful and I’m going to continue doing it. People really connect with the fact that you’re making the effort to speak to them in person (which isn’t particularly scalable, but it’s nice to make the offer anyway). You understand where the gaps are in your onboarding process (we have a couple of features which aren’t obvious to find, and it seems people aren’t finding them), and you get feedback from people who are motivated but are hitting inconsistencies in the process.

It’s also fun speaking to a wide range of people with different backgrounds and motivations. One purchaser was a lawyer in Cape Town, up the road from me, so I went and spent some time with him in his office rather than speaking on the phone.

In terms of a continuous feedback loop, this is an easy and recommended process.

The importance of high velocity testing


I’m a big proponent of high-velocity testing, and believe the more tests you run the more effective you are. This doesn’t mean they are throw-it-at-the-wall tests; they still need to be grounded in research, with a decent hypothesis that you validate. But the more you run, the more effective your testing programme will be:

  1. the more you test the more you learn

For every test you want a falsifiable hypothesis. This gives you the ability to take away a learning whether your test wins or not, and more learning increases the chance that your next test is a winner.

  2. the more tests you run, the more winners you will get

I know it’s obvious to say, but increasing volume means you will run more tests, and more tests mean more winners overall (though maybe not as a percentage of tests run). There is no point in running a testing programme where you run a test a month and get a winner every two months. That’s dispiriting for the team and doesn’t add value to your bottom line.

  3. wins are compounded

Four winning tests with a 5% uplift each don’t work out to a 20% uplift; because the wins are compounded, they work out to roughly a 22% uplift.
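The compounding arithmetic is easy to check:

```python
uplifts = [0.05, 0.05, 0.05, 0.05]   # four winning tests, +5% each
compound = 1.0
for u in uplifts:
    compound *= 1 + u                # each win multiplies the baseline
print(f"{compound - 1:.1%}")         # → 21.6%
```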

Your processes should remain similar in order to achieve this:

  • look at the slots you have on your site to fill (e.g. for SaaS you’ll have the homepage, signup page, first-user-experience funnel, payment page, etc.)
  • ensure that you have a wealth of data around visitor behaviour and challenges; you need to be able to generate a lot of hypotheses
  • generate these hypotheses, wireframe them, and get tests ready for each slot (you need one or two ready to go in order to switch out quickly)
  • be ruthless with your testing: don’t be afraid to kill things that don’t look like winners (the other perspective is that your hypothesis may be right but the execution wrong, in which case further iteration may be required)
  • it can be useful to try smaller developmental test ideas here: not red-button/green-button tests, but tests that are more value-proposition/microcopy focused and don’t require a lot of development

Although increasing your testing velocity takes a lot more resources, the results are worth it in terms of wins and learnings.

Amazon’s dark patterns (and a light one)

As I’m based in South Africa I don’t shop on Amazon that frequently, so it’s always interesting to look at how they manage their site. I know they do a lot of testing (in 2011 they were running 7,000 tests a year), and the richest man in the world said:

“Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day…” – Jeff Bezos, CEO at Amazon

I’m always surprised at how aggressively they try to upgrade you to Prime. On my most recent purchase you can see the interstitial they used to try to upsell me: to decline the Prime offer I had to use an unobtrusive text link rather than the obvious yellow button.

screen shot 2019-01-20 at 10.04.57

I would be worried about the effect of all this dark-pattern Prime upselling on the customer experience. I think it’s a little sleazy and underhand, but at the same time it doesn’t dissuade me from buying from Amazon (and if you have Prime you stick around forever, so it’s great for retention). The last time I purchased from Amazon and somehow signed up for Prime, they had some strange ways of trying to make me stay.

 

doavk12w0aermvl

Things I find notable about this:

  • making people click “I do not want my benefits” to cancel
  • making cancellation the least intuitive of the four buttons on the page
  • packing so many benefits into the phrase “Unlimited One-day delivery: Direct to your door”
  • the red vs green type (and the fact that it’s £0.00, not £0)

Finally, here is a nice piece of UI from Amazon for products I’ve previously purchased. It tells me that I bought the item before, and when. It’s easy and lets me shortcut choosing my product, which at least for me reduced choice friction and hastened my conversion.

screen shot 2019-01-20 at 09.54.42