What is A/B testing?
Test different solutions against each other in your product’s production environment to see which performs the best.
👥 1,000+ users | ⏰ weeks | 💪🏼 medium effort
Although A/B testing is most widely practiced for small, incremental improvements to a product, multivariate testing and cohort analysis can also play a significant role in validating whether a larger-scale prototype gains traction with a team’s users in production.
While some teams may initially be uncomfortable with the idea of rapidly developing and deploying small prototypes to production users, these methods are often the fastest and most efficient way of validating ideas or solutions with real, large-sample results. The findings from A/B testing are quantitative, giving the team confidence that a winning variant will actually deliver value and improve the current experience.
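Serving variants to production users typically starts with a stable way to split traffic. As a minimal sketch (the function and experiment names here are illustrative, not a specific tool's API), a deterministic hash of the user ID and experiment name buckets each user consistently across sessions without storing any per-user state:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "variant-b")) -> str:
    """Deterministically assign a user to one variant of an experiment.

    Hashing user_id together with the experiment name keeps each user's
    assignment stable across sessions, and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
assert assign_variant("user-42", "checkout-test") == \
       assign_variant("user-42", "checkout-test")
```

Because the split is derived from a hash rather than stored, any service that sees the same user ID computes the same assignment, which keeps the variants coexisting cleanly in one production environment.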
Who is involved in A/B testing?
A/B testing requires three things: an analytics or reporting tool already running in your production environment; an analytics team to pull data from the tool and, in some cases, sanitize the results; and a team to sit down and review the results of the multivariate testing, including the possible reasons why a certain variant performed better than the others.
How to do A/B testing
- Ensure that there is a way to deploy the variants of an A/B test to your production environment; if not, discuss options with your technology manager or CIG.
- Fully develop and QA each variant for deployment; make sure that the variants can coexist peacefully with the other features in your product.
- After deployment, discuss how long the A/B test needs to run, based on how confident you’d like to be in the results; greater statistical confidence generally requires a longer run with more users.
- After the established length of runtime, ask the analytics team to determine which variant (or the control) performed best against your team’s main KPI, and discuss the reasons behind the result.
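The last step, deciding whether a variant genuinely beat the control, usually comes down to a statistical test on the two conversion rates. As a sketch under assumed example numbers (the counts below are made up for illustration), a two-proportion z-test using only the standard library:

```python
import math

def z_test_two_proportions(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> tuple:
    """Two-sided z-test comparing the conversion rates of A and B.

    conv_a / n_a: conversions and sample size for the control (A).
    conv_b / n_b: conversions and sample size for the variant (B).
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: control converted 200/1000, variant 250/1000.
z, p = z_test_two_proportions(200, 1000, 250, 1000)
if p < 0.05:
    print(f"Variant wins (z={z:.2f}, p={p:.4f})")
```

A p-value below the conventional 0.05 threshold suggests the difference in the KPI is unlikely to be noise, though the team should still discuss *why* the winning variant performed better before rolling it out.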