On low-traffic sites, you'll often wait a long time for a split test to show meaningful results. And by the time you've collected a decent number of visitors, will the trend have changed?
So it’s time to stop looking at just visitors and conversions and have a look at the other metrics available to you.
So, if one of your experiments is looking a little like this:
It's time to start looking at this:
So, have both open, set Analytics to the date range that matches the start and end of the experiment, and start comparing performance.
Average time on page
Which page are users staying on the longest? If a variation's time on page is significantly higher, is its content better structured, better written or cleaner? A higher time on page is a positive sign of engagement.
Bounce rate
How many people came to your site, didn't like the first thing they saw and ran off crying? This is a clear-cut way of seeing which variation people Marmited (loved it or hated it). A lower bounce rate means people liked what they saw and stuck around because of it. A lower bounce rate is a positive sign of engagement.
Exit rate
Depending on the purpose of your site, a high exit rate is either a good or a bad thing.
If the experiment page is meant to be the last page people see on your site, like a page that encourages the user to click on external links, then a high exit rate might be exactly what you want.
If the experiment page needs to keep the visitor on your site, then a lower exit rate might be what you want to see.
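The difference between bounce rate and exit rate trips a lot of people up, so here's a minimal sketch of how the two are calculated. The session data and page paths are made up purely for illustration; real analytics tools compute these from their own session models.

```python
# Hypothetical session data: each session is the ordered list of pages viewed.
# (Page names and numbers are illustrative, not from any real analytics export.)
sessions = [
    ["/landing"],                       # bounced on /landing
    ["/landing", "/pricing"],           # exited on /pricing
    ["/home", "/landing"],              # exited on /landing
    ["/landing", "/pricing", "/home"],  # exited on /home
]

def bounce_rate(sessions, page):
    """Single-page sessions starting on `page`, over all sessions starting there."""
    entrances = [s for s in sessions if s and s[0] == page]
    bounces = [s for s in entrances if len(s) == 1]
    return len(bounces) / len(entrances) if entrances else 0.0

def exit_rate(sessions, page):
    """Sessions that ended on `page`, over all pageviews of `page`."""
    pageviews = sum(s.count(page) for s in sessions)
    exits = sum(1 for s in sessions if s and s[-1] == page)
    return exits / pageviews if pageviews else 0.0

print(bounce_rate(sessions, "/landing"))  # 1 bounce / 3 entrances
print(exit_rate(sessions, "/landing"))    # 2 exits / 4 pageviews
```

Note that the denominators differ: bounce rate only counts sessions that started on the page, while exit rate counts every view of the page. That's why a page can have a low bounce rate but a high exit rate.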
Segment Cross Contamination
Sometimes you will need to dig deeper and further segment the visitors who took part in the experiment. Have a look and see how the following segments compare:
- New vs returning visitors
- Paid vs non-paid visitors
- Direct, referral and search traffic
Avinash Kaushik has a great article on applying statistical significance in A/B testing. I’d advise you to read it!
Any questions or comments? Put them below!
Thank you for reading.