A/B Testing

A/B Testing allows you to experiment with different user experiences and optimize your Jebbit campaigns over time! With A/B Testing (sometimes called split testing), you can test and compare different variations of a campaign to determine which copy yields the strongest performance based on your goals.


Step by Step Guide

  1. Right-click the '+' node at the beginning of the builder map and select 'Split Traffic' from the menu

  2. Select 'A/B Test', adjust the percentages, and label each pathway

  3. If you want to test more than two options, select 'Add another branch'

  4. Click 'Save' and connect the A/B paths to their respective screens
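Jebbit handles the randomization for you once the split is saved, but if it helps to see the underlying idea, here is a minimal Python sketch of a weighted traffic split. The branch labels and percentages are purely illustrative, not part of Jebbit's product:

```python
import random

def assign_branch(weights):
    """Pick a branch label according to its traffic percentage.

    `weights` is an illustrative mapping of branch label -> percentage,
    e.g. {"Pathway A": 50, "Pathway B": 50}. Percentages should sum to 100.
    """
    labels = list(weights)
    return random.choices(labels, weights=[weights[label] for label in labels], k=1)[0]

# Example: the 50/50 split described in step 2 above
branch = assign_branch({"Pathway A": 50, "Pathway B": 50})
print(f"This visitor is routed to: {branch}")
```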

Frequently Asked Questions

Q: How do I review the performance of an A/B Test to determine which copy performed best?

A: After launching, you can analyze your results in the reporting dashboards. A simple hack for tracking the performance of one pathway over another is the use of attribute mapping. Check out this video to see how it's done! If you are unsure of how to interpret your A/B Test data, reach out to our Support team and we can help you out.
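If you prefer to work with exported response data rather than the dashboards, a rough Python sketch of comparing completion rates by pathway is shown below. The file name and the column names (`pathway`, `completed`) are assumptions for illustration only, not Jebbit's actual export format:

```python
import csv
from collections import defaultdict

# Hypothetical export: each response row carries a "pathway" label (e.g. set via
# attribute mapping) and a "completed" flag. These column names are assumptions.
totals = defaultdict(lambda: {"sessions": 0, "completions": 0})

with open("experience_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        stats = totals[row["pathway"]]
        stats["sessions"] += 1
        if row["completed"].strip().lower() == "true":
            stats["completions"] += 1

for pathway, stats in totals.items():
    rate = stats["completions"] / stats["sessions"]
    print(f"{pathway}: {rate:.1%} completion rate over {stats['sessions']} sessions")
```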

Q: What can I accomplish with an A/B Test?
A: A great way to use an A/B Test is to figure out whether engagement with an Intro screen leads to higher completion rates. You can create two paths off your A/B Split, sending 50% of traffic to an Intro screen and the other 50% directly to the first question screen. This is just one example of how an A/B Test can be used; you can place an A/B Split anywhere you'd like within your Jebbit experience.

Q: I set the traffic split on my A/B Test to send 50% of traffic to one pathway and 50% to the other, but traffic seems to be skewing toward one pathway. Why?

A: The A/B Split works the same way a coin flip does. Each time you flip a coin, there is a 50% chance of heads and a 50% chance of tails, and each flip is independent of the flips before it. In the same way, each time a user reaches the A/B Split, their chance of being sent down either pathway is 50/50, regardless of where prior sessions were sent. As your data set grows to a significant volume, the split between the two pathways will trend closer to 50/50, but it may never be perfectly even.
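If you want to see this for yourself, the short Python simulation below mimics independent 50/50 assignments and shows the observed split drifting toward, but rarely landing exactly on, 50/50:

```python
import random

# Each visitor is an independent 50/50 "coin flip"; larger samples drift toward
# an even split without ever guaranteeing one.
for visitors in (10, 100, 10_000, 1_000_000):
    pathway_a = sum(random.random() < 0.5 for _ in range(visitors))
    print(f"{visitors:>9,} visitors: {pathway_a / visitors:.2%} went down Pathway A")
```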

Keywords: ab test, split testing, randomization