UX optimization & A/B testing


Sometimes we receive requests for help with improving key performance indicators (KPIs) of various websites - in short, to deliver UX optimizations leading to increased user engagement, which correlates with increased time on site and generated revenue.

Many website owners think that a general truth is hidden in the experience of companies like GameArter and that we can immediately tell them what to change on their web to boost their business. Sure, there are general best practices and things that follow from common sense; however, any more advanced optimization, especially one which can affect revenue, requires A/B testing to deliver a UX optimization that really works. The purpose of this article is to provide instructions for basic self-made A/B testing.

What is UX optimization?

UX is short for “user experience”, the experience a visitor gets on a website. UX optimization is the process of making a website and all its components clearer, more trustworthy and friendlier to use for visitors. The optimization is carried out by adjusting the look, structure, content and the technologies used behind the scenes.

What is A/B testing?

A/B testing is a method of running 2 versions of a targeted page or entire website for different groups of visitors at the same time and comparing their performance, expressed as the percentage of visitors of each version who fulfil the KPI(s). The collected data are used for decisions about the next steps in the business.
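As a minimal illustration of such a comparison, the sketch below computes conversion rates and the relative lift of one variant over the other; the visitor and conversion numbers are made up for the example:

```javascript
// Minimal sketch: comparing two variants by conversion rate.
// The variant data below are illustrative numbers, not real measurements.
function conversionRate(conversions, visitors) {
  return conversions / visitors;
}

function relativeLift(rateB, rateA) {
  return (rateB - rateA) / rateA;
}

const variantA = { visitors: 5000, conversions: 150 }; // default version
const variantB = { visitors: 5000, conversions: 180 }; // modified version

const rateA = conversionRate(variantA.conversions, variantA.visitors);
const rateB = conversionRate(variantB.conversions, variantB.visitors);
console.log(`A: ${(rateA * 100).toFixed(1)}%  B: ${(rateB * 100).toFixed(1)}%`);
console.log(`Relative lift of B over A: ${(relativeLift(rateB, rateA) * 100).toFixed(1)}%`);
```

A raw lift like this says nothing about statistical significance on its own; that is what the testing tools discussed below add on top.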

UX and A/B testing may sound complicated, right? Actually, doing basic A/B tests is very simple and, moreover, cheap - there is no need for any paid tool. This article explains why and how to do basic A/B testing on your website with the use of Google Analytics, backed either by Google Optimize or directly by a few lines of in-page code.

Before you start…

How to choose the right key performance indicators (KPIs) to track?

Individual website elements may be more intertwined than they look at first sight. Improving one metric may deteriorate the results of many others - for that reason, a proper main KPI needs to be set. If useful, it is also possible to set less important, smaller KPIs which indicate progress towards the main KPI. Then, after deployment of the versions within the A/B/.../X test, you track and evaluate changes of the set KPIs between the individual web versions.

Example:

You get an idea to increase the number of pages visited per session on your website. Using common sense, you realize that you could try to achieve that by replacing an ad unit displayed in an exclusive space of your web with a section offering recommended articles. With this change, the number of pages per session will probably increase and the KPI will be fulfilled, however, with a side effect of reduced web revenue, which is usually a very important parameter. Thus, generated revenue should be another KPI (maybe even the main one) for a correct evaluation of the test.

The KPI(s) are set - how to run the tests and evaluate them?

Firstly, there is a need to set the number of versions you are going to test and compare. The number of versions may depend on the number of ideas and on traffic: tools for A/B testing based on frequentist statistics may require a minimum amount of traffic. On the other side, even with tools that have no minimum traffic requirement (Bayesian statistics), there is a question of whether tests on small traffic are worth it. Optimal traffic per version starts at a minimum of hundreds of visits.
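For a rough idea of how much traffic a frequentist test needs, the sketch below estimates the required sample size per variant using the standard two-proportion formula with fixed z-scores (two-sided 95% confidence, 80% power); treat it as an order-of-magnitude estimate, not a substitute for a proper statistics tool:

```javascript
// Rough per-variant sample-size estimate for a two-proportion test
// (frequentist, two-sided alpha = 0.05, power = 0.80). A sketch with
// fixed z-scores, not a replacement for a statistics library.
function sampleSizePerVariant(baselineRate, minDetectableLift) {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift); // rate we hope to reach
  const pBar = (p1 + p2) / 2;
  const delta = p2 - p1;
  return Math.ceil((2 * pBar * (1 - pBar) * Math.pow(zAlpha + zBeta, 2)) / (delta * delta));
}

// e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift
console.log(sampleSizePerVariant(0.03, 0.2));
```

Note how quickly the requirement grows for small baseline rates and small lifts - this is why testing on a few hundred visits per version is usually the practical minimum.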

Every version to be tested is denoted with one letter - an A/B test for 2 versions, an A/B/C test for 3 versions, etc., while version A always stays the default version of the web.

The length of the test may vary with time constraints and traffic volume; however, it is recommended to keep the test active for at least 30 days to eliminate random influences.

Tests themselves can be run and analysed either manually or with the help of a dedicated tool. Either way, it is good to know at least the technological theory behind A/B tests, so we will cover both cases here - the manual way to explain the theory, and the practical way with the use of Google Optimize.

Manual A/B tests

A/B...X tests can be made manually with the use of a custom script and any analytics tool such as Google Analytics. The point of A/B...X tests is to randomly and evenly distribute visitors into a number of groups equal to the number of web versions intended for the test. The individual groups of web versions (and their visitors) must be filterable in the analytics, allowing us to compare various metrics and KPI(s) across the versions tested on the website / webpage.

Technically:

  • Random and even distribution of visitors between the versions of the website may be done with a function generating a random number between 0 and X.
  • Based on the version selected for the user, the user must be attached to the right group in the analytics (for comparison of metrics between individual versions of the website). In the case of Google Analytics, it may be done with Custom dimensions.
  • The same version of the website must be displayed during every visit of the user for the entire period of the test. The proper version of the website to render for the user can be marked with a first-party browser cookie.
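The three steps above can be sketched in a few lines of in-page JavaScript; the cookie name `ab_variant`, the 60-day lifetime and the `dimension1` index are assumptions to adapt to your own setup:

```javascript
// Sketch of the three steps above: even random assignment, persistence
// in a first-party cookie, and reporting the group to the analytics.
function assignVariant(numVariants, randomValue) {
  // Evenly map a random number from [0, 1) to a letter A, B, C, ...
  const index = Math.floor(randomValue * numVariants);
  return String.fromCharCode('A'.charCodeAt(0) + index);
}

function getOrCreateVariant(numVariants) {
  // Persist the choice so the visitor keeps seeing the same version
  // for the whole test period (60 days here).
  const match = document.cookie.match(/(?:^|; )ab_variant=([A-Z])/);
  if (match) return match[1];
  const variant = assignVariant(numVariants, Math.random());
  document.cookie = `ab_variant=${variant}; max-age=${60 * 60 * 24 * 60}; path=/`;
  return variant;
}

// Attach the group to Google Analytics via a custom dimension
// (the dimension index is an assumption):
// gtag('set', { dimension1: getOrCreateVariant(2) });
```

With the dimension in place, each version's visitors can be isolated as a segment in the analytics and their KPIs compared.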

These steps allow checking the statistics of individual groups (filtered with segments) as well as of individual users. We have used this method, thanks to its code efficiency (nearly no external JavaScript required), to learn how the number of ads affects user retention on the gaming website PacoGames.com.

This method allows website modifications to be made in 2 ways:

  1. On the server side - the server renders the final web version based on the cookie information
  2. On the client side - the server always renders the default version of the website, which is then modified by in-page JavaScript. This method is less efficient, but it is also usable for cached websites.
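A minimal sketch of the client-side way (2): the server sends the default page and a small script rewrites selected elements for the chosen variant; the selectors and texts here are hypothetical examples:

```javascript
// Client-side modification sketch: the server always sends the default
// page, in-page JavaScript applies the changes for the chosen variant.
// The selectors and texts below are hypothetical.
const modifications = {
  A: [], // default version: no changes
  B: [
    { selector: '#hero-button', text: 'Play now for free' },
    { selector: '#sidebar-ad', text: 'Recommended articles' },
  ],
};

function modificationsFor(variant) {
  return modifications[variant] || [];
}

function applyVariant(variant) {
  // Guarded so the sketch also loads outside a browser.
  if (typeof document === 'undefined') return;
  for (const mod of modificationsFor(variant)) {
    const el = document.querySelector(mod.selector);
    if (el) el.textContent = mod.text;
  }
}

applyVariant('B');
```

Because the rewrite happens after the default page loads, keep the modifications small; bigger layout changes are better handled on the server side.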

A/B testing with Google Optimize

Google Optimize is an optimization tool allowing you to run experiments on the web and identify the leading variant. Basic Google Optimize is a free-to-use A/B testing and web personalization tool; Optimize 360, with advanced features, is a paid tool with custom pricing. Google Optimize uses Bayesian statistics and does not require any minimum traffic.

Google Optimize allows in-page, client-side modification of a single page or even a group of them via a custom editor available as a browser extension, allowing customization to be set right on the page. After connecting Google Optimize with other Google tools - Google Analytics and Google Ads - and setting up the experiments, Google Optimize does everything automatically: you can check the performance of individual experiments (website versions) right in Google Analytics, or simply adopt the variant marked as leading in the tool after the test.

Although Google Optimize has many limitations stemming from the way it works (client-side modification means lower speed performance and is a bit problematic for bigger adjustments), it is definitely a great tool for all marketers and non-programmers, allowing them to simply optimize the conversion rate of their website.

A few more things you should know before doing A/B tests

  • A correctly selected KPI is the basis of success. If you do not know what to track, do not waste time with A/B tests. When selecting KPIs to track and improve, and a way to do that, always do it with an understanding of your audience. Do not follow the metrics of your competitors available, for example, on SimilarWeb.com, and if you do, be sure you compare metrics generated by a very similar audience - source of visits, age, interests…
  • Always test only pages relevant to the actual KPI(s); do not mix multiple possibly related tests which could correlate with and affect each other's final results, unless you are 100% sure about it.
  • Always optimize first the pages with the biggest impact on your business.
  • If you decide to improve engagement KPIs (time on page, pages per session, etc.) with recommended content provided by a neural network trained to offer content with the best engagement metrics, while you offer user-generated content on the web, take into account that it may not be possible to set guardrails within which the AI should recommend content, which may result in the formation of various extremist groups fuelled by recommended extremist content.
  • If your website uses a simple design, running even tests of bigger customizations is very cost-effective, usually possible by placing only a few lines of code into the page.
  • You should check heat maps and other analytics data before setting up any new experiment.

If you need help with A/B…X tests, GameArter can help you within its custom services.