So you’ve heard about A/B testing through the grapevine.
Maybe your competitor casually mentioned it on a podcast or your marketing team thinks it’s a great idea. Or the growth agency you’re in talks with wants to run A/B tests.
Growing a business isn’t easy. You know that. Having made it this far, there must have been several occasions when you’ve struggled to understand if a business decision is the right one.
Wouldn’t it be nice if instead of taking a gamble on what you think your audience will want, you had a way of knowing for sure?
That’s what A/B testing does.
Sure, the benefits of A/B testing go beyond making data-driven decisions, but at its core, it is the secret sauce to unlocking growth at scale.
So whether you attempt to DIY or outsource your needs, here’s everything a Shopify entrepreneur like you should know about A/B testing.
What Is E-commerce A/B Testing?
A/B testing is a process by which you can understand what your audience is looking for before they become a customer.
Usually, A/B tests are thought of in terms of making minor tweaks like changing the color of the call-to-action (CTA) button or adding a new headline but it’s deeper than that.
It allows you to determine what copy, design, and functionality (UX) resonate with your visitors by pitting one version of your page, or an element on a page, against a variation to see what works.
And we’re just getting started!
You can take the concept of A/B testing and apply it to every channel you use and interaction you have with your audience.
But it’s important to understand that e-commerce A/B testing differs from other verticals like B2B SaaS.
- Time to realize revenue is shorter
A/B testing can reveal the impact on revenue much faster than in traditional B2B. In B2B, deals are multi-threaded and involve multiple decision-makers, making sales cycles span months, if not quarters.
Although you should ideally use A/B testing for research and risk mitigation and not just revenue augmentation, the lifeblood of any business is revenue so there’s a good reason to introduce A/B testing into the growth mix.
- Checkout processes are complex (so more room for testing)
While e-commerce purchase funnels are not as complicated as B2B funnels, the checkout process is not one-dimensional.
Ruben De Boer, the author of Psychology of Buying, explains that paying literally hurts. In a 2007 study to investigate how people weigh factors to make purchasing decisions, participants were shown product images and then the price. Their brains were analyzed by fMRI machines to see which neural pathways would light up.
As expected, seeing product images lit up the reward center in their brain.
But the price? The part of the brain associated with physical and social pain lit up like a Christmas tree, helping researchers conclude that the trade-off between gain and pain must make sense for consumers to open up their wallets.
That doesn’t mean you have to lower your prices because pricing also signals the quality of the product. You can try a smaller font, offer prepayment, show discounts in a bigger font, or avoid money language in your copy.
So lowering the pain of purchase means you have to understand the medley of human motivations, desires, and frustrations which is impossible without A/B testing. You can test your messaging, UI elements, or overhaul the checkout process—all in real-time.
In some cases, you may not be able to make sweeping changes to the cart and checkout flow because of limitations that your e-commerce platform imposes but that is no reason to abandon ambitious tests. You can always test smaller changes that give you a sense of the potential larger adjustments could have.
Jonny Longden, Conversion Director at Journey Further, recommends asking yourself one question:
What is the smallest/simplest thing we can test to start to prove this and to learn about it?
Don’t fall into the trap of “we’ll only test small changes” or, if you’ve already made larger changes, “the money’s already sunk, so we won’t test it.”
- Review mining can be turned into a science
Conversion research based on qualitative data is a staple in any kind of A/B testing but, in e-commerce, qualitative data like review mining can be turned into a science that helps you understand:
- Product USPs to highlight
- Benefits that you can test in your copy
- How customers perceive competitors
- Copy angles for product stories
- Pain points you have addressed
- Unaddressed pain points that cause cart abandonment

Lorenzo Carreri, CRO & Experimentation Consultant, recommends thinking like a detective. Just like a detective has to uncover the story behind a crime, you can use reviews to unveil a lot of stories.
People already made a decision about buying and now without us bugging them with an exit poll or a widget, they’re actually organically sharing their experience.
In fact, Carreri’s pulse analysis for different industries reveals a common theme: people don’t tend to share insights about their on-site experience, no matter what question you ask or how you ask it.
But with review mining, especially on Amazon, people tend to share their insights. The more insights you gather, the more meaningful your data becomes, which helps you form a better hypothesis for testing.
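If you want to systematize this, a first pass at review mining can be as simple as counting recurring words across exported reviews to surface candidate themes. Here is a minimal Python sketch; the review snippets and stopword list are illustrative placeholders, not real data:

```python
from collections import Counter
import re

# Hypothetical review snippets -- substitute your own exported reviews.
reviews = [
    "Love the fit, but shipping took two weeks.",
    "Great quality fabric. Shipping was slow though.",
    "Perfect fit and the fabric feels premium.",
]

STOPWORDS = {"the", "and", "but", "was", "a", "of", "to", "is", "though"}

def top_themes(texts, n=5):
    """Count recurring words across reviews to surface candidate themes."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return Counter(words).most_common(n)

print(top_themes(reviews))  # "fit", "shipping", and "fabric" each recur
```

In practice you would layer sentiment and phrase extraction on top of raw counts, but even a frequency table like this points you toward the USPs, objections, and pain points worth testing.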
- No dearth of traffic for e-commerce
A significant hurdle with A/B testing is not having enough traffic which means results can be biased.
But this isn’t a problem for e-commerce stores. A 7-figure Shopify store easily gets hundreds of thousands of visitors, while a Series D B2B company would probably get a quarter of that traffic.
Why Should Shopify Entrepreneurs (Seriously) Consider A/B Testing?
E-commerce is ripe for A/B testing. The potential to see results quickly with a large pool of visitors and a lot of room to play around with is reason enough to adopt a culture of A/B testing.
But perhaps you’re not there yet. An increase in your traffic right now yields a revenue boost.
The question is, how long can you keep that up?
More traffic ≠ more revenue beyond a point. That path requires you to spend more on ads while simultaneously eating into your profit margins with discounts.
And when you look at e-commerce giants like Amazon, eBay, or Etsy, you’ll notice they have A/B testing baked into their DNA. It’s the very reason they thrive. Not to mention, it’s the common thread all successful Shopify stores share.
It’s easy to understand why A/B testing propels growth. Look at how granular the tests Amazon runs are:
But when it comes down to it, A/B testing isn’t just a way to stay competitive—it’s a good business decision.
Why? Because your current strategies are probably not working in your favor.
- Your ROAS is plummeting thanks to iOS 14
You’re probably spending more money than before trying to get eyeballs on your product, but the post-ATT world has messed with the way pixel-based conversions work. And retargeting and lookalike audiences? They’re not as effective anymore. On the off chance you get some conversions, be prepared to deal with discrepancies between the Ad Manager and your Shopify backend.
- Your open rates are skewed
Email numbers are no longer accurate. Mail Privacy Protection (MPP) has made sure of that. And your engagement-based lists may have questionable targeting and lower conversions.
- Your equation is missing retention
Chasing cold traffic is a bad business move. 40% of your revenue comes from loyal customers. Traffic gets buyers into your funnel but retention increases the lifetime value (LTV) of these buyers.
- Your marketing attribution sucks
Tools can’t give you any usable data, and your team can’t attribute revenue to specific changes. You can’t push all the buttons hoping to see growth. You need to get specific, or building an 8-figure business is out of the question.
A/B testing flips the old playbook on its head and gives you the chance to use a scientifically valid approach that is repeatable, reliable, and profitable.
Here’s why OLIPOP, a DTC soda alternative company, stands by A/B testing:
A/B testing improves content engagement, reduces bounce rates, increases your conversion rate, and minimizes risk, all while providing data that’s easy to analyze. By running an A/B test, you’re able to figure out which content resonates with your target audience. You can then use this data to influence your marketing strategy. These tests also help you identify irrelevant data and areas where your users are experiencing difficulties on your website, thereby reducing your bounce rate once you make the necessary changes.
Once you can identify the variation that improves your customer experience, you’ll see an uptick in the time users spend on your site, leading to a higher conversion rate. Lastly, A/B testing minimizes risk because you make decisions based on accurate data instead of educated guesses. It allows you to make minimal changes without compromising your entire website. Your ROI will increase with A/B testing.
Steven Vigilante, Head of New Business Development of OLIPOP
Make small (or large) changes easily
Optimization, the science of making things better, is easy with A/B testing. You can introduce changes to find the version that creates a better purchase experience and converts some of your PPC traffic.
Reduce the cost of failing
The cost of failing is sometimes far too great and unsurprisingly inhibits innovation. But with A/B testing, you can test your ideas in a controlled environment without having to build or implement anything.
Peek into the future
Nothing can guarantee success. Not your gut instinct, agency suggestions, or even solid competitor research. But if you want to make data-driven decisions, A/B testing is your friend. Winning variations are chosen based on statistical validity, allowing you to get a glimpse of your revenue potential.
Leave little room for misinterpretation
A/B testing allows you to truly listen to your audience by collecting data on how changes affect conversion rate, cart abandonment, average order value (AOV), revenue, and profit.
Instead of guesstimating the effects of your changes, the results are transparent and leave little room for misinterpretation.
Issues with A/B Testing on Shopify (+ Solutions)
While you’re contemplating making A/B testing central to your strategy, it’s important to address the potential issues you may face when running A/B tests on Shopify.
Problem #1: Shopify’s anti-clickjacking can interfere with your mobile QA
Clickjacking tricks users into clicking on actionable content on a decoy site. To prevent this from happening, Shopify uses anti-clickjacking tech. But it can keep A/B testing tools from performing optimally.
Problem #2: Testing isn’t an issue, but implementation is
Implementing the results of a test isn’t something an app or plugin can do—it requires customization. Even if you do find plugins that work for you, too many of them can slow your site down which effectively nullifies the potential gain.
Problem #3: You have a standard Shopify store limiting what you can test
Standard Shopify stores cannot access most Shopify Plus features, which means you can’t run tests like split-testing themes. Lower-complexity tests result in a smaller impact on your revenue.
Solution: Spring for Shopify Plus.
A Quick Guide to Basics of A/B Testing
Now that you’ve wrapped your brain around A/B testing, it’s time to get into the nitty-gritty.
Pause for a moment and answer yes or no to these questions before you scroll down to see the answers.
- A/B testing is the same as split testing
- A/B testing and multi-variate testing are different
- You can only make minor tweaks with A/B testing
- You don’t need to learn statistics to run A/B tests
- You cannot run A/B tests on other channels
- You should stop A/B tests once you see results
A/B Testing vs. Split Testing
With A/B testing, you can test one or more elements on a page. You basically create a similar version of the original page to see the impact on the conversion rate.
Split URL testing is different from A/B testing. Traffic is split down the middle and sent to two completely different versions to see which webpage helps you achieve your specific goals.
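Under the hood, most testing tools assign variants deterministically so a returning visitor always sees the same version without any state being stored. A rough Python sketch of that idea, where the visitor IDs and experiment name are made up:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str) -> str:
    """Deterministically bucket a visitor into A or B (50/50 split).

    Hashing the visitor ID together with the experiment name keeps
    assignment stable across page loads and independent between tests.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A" if bucket < 50 else "B"

# The same visitor always lands in the same variant:
print(assign_variant("visitor-42", "checkout-redesign"))
```

Changing the `< 50` threshold lets you run unequal splits (say, 90/10) when you want to limit exposure to a risky variation.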
When to Run Split Tests vs. A/B tests: Theme Testing
A great example of when to choose split testing over A/B testing is when you want to test Shopify themes. Your theme can impact the CX and ultimately revenue, so it’s essential you test it using a tool like Convert’s split URL option.
Convert employs Frequentist Inference to understand which theme outperforms the other. We recommend running this kind of test for at least two weeks unless you have unusually high traffic coming to your site.
P.S. You can only test themes if you’re a Shopify Plus user.
A/B Testing vs. Multivariate Testing
In A/B tests, you’re pitting nearly identical pages against the original.
Instead of changing one element at a time, like in A/B tests, multivariate testing is a process wherein you test multiple changes in a single test. The goal of multivariate testing is to find out which combination of changes yields better results.
Examples of A/B Tests To Run on Shopify Stores
Ask the internet what you should A/B test, and you’ll often be told to try a different CTA or a button color or change a headline.
Not that it’s unimportant, but the world is your playground, and you’re only playing in your own little sandbox if you limit yourself. Thinking outside the box is crucial to the spirit of experimentation.
We reached out to 8 Shopify entrepreneurs and asked them this:
What A/B tests have you run, why did you choose to conduct this experiment, and what were the results?
#1. Boosted AOV, orders slightly down
We use Shopify across all of our online stores and have been testing bundling, or grouping, our products to increase AOV. The test is a cart that has upsells, or bundles, vs a cart that only contains the initial product. The results aren’t fully in yet but so far it looks like AOV has increased while the total number of orders has dipped slightly. We’ll run this for a few more weeks before doing a full analysis and could test other configurations to try and generate improvements in both AOV and conversions.
Sylvia Kang, Mira
#2. Optimized Every Site Element for CX
As a Shopify business, we’ve run a multitude of A/B tests, for features like live chat, CTAs, product images, upselling placement, landing pages, navigation menus, and more. For example, our A/B testing helped us find the balance of cross-selling/upselling without irritating consumers or adding friction to their experience.
Through numerous tests, we discovered our audience valued highly-relevant suggestions directly on product pages rather than offered during checkout, and in doing so, we upped average purchase value. A/B testing is crucial because it allows you to pinpoint exactly what features perform best and offer the highest returns without wasting time and energy implementing any elements that aren’t optimal. These tests provide you with accurate data regarding what design choices best suit your audience, and a stronger user experience is how businesses achieve growth and longevity.
Stephen Light, Nolah Mattress
#3. Used Session Replays to Include Videos for Better Results
One of the most important aspects that can make or break a conversion is how easy it is for a user to navigate your store and make a purchase. With A/B testing on session replays, we managed to see how real users with the intent to buy navigated through our store, where the problems were, what frustrated them, and what made them stop during the process and prevented them from making a purchase. We realized that listings that included a video of the process yielded better results, while poor-quality images or too few images led to hesitation.
Michael Nemeroff, Rush Order Tees
#4. Lifted Conversions by 2% with Design Changes
In this A/B test, I wanted to see how a new layout could affect the conversion rate of my Shopify store. The original site had been running for six months and was converting at 3% so it seemed like time to try something different. My design change included moving product recommendations below the fold on mobile devices instead of in-line with products as well as removing banners from top navigation since they weren’t being clicked anyway. This resulted in an instant lift in conversions by 2%.
Jar Kuznecov, Water Softeners Hub
#5. Increased Relative Clicks by 14% by Changing the CTA Button Color
Though we’ve run a multitude of A/B tests over the years, one of the most effective tests we’ve performed was also the simplest: changing the color of our CTA button. That’s it. I had heard from a friend that by switching the color of his on-page buttons he had increased his response rates by 16% (relative to the number of clicks he was getting previously). This got me thinking, and I decided to run our own A/B test. In fact, it was actually an A/B/C test, as we tried 3 different colors – our original green color, as well as orange and red. The result? The red button yielded an 8% higher response rate, while the orange button gave us 14% better results in terms of relative clicks. It’s amazing that a change as simple as making a green button orange can have that profound of an effect. Thus, my best advice is when you’re trying to get someone to add a product to their cart, don’t just breeze by the color of the CTA button. Give it some serious thought – and testing.
John Ross, Test Prep Insight
#6. Increased CVR and AOV with Sticky Add-to-cart & Post-sale Upsells
A/B testing is a double-edged sword. It sounds nice to optimize your Shopify store and increase the conversion rate. But you need to know that each A/B test adds a layer of complexity and uses your resources. What to test is as important as how you test.
I’ve tested different ordering of product photos. Each time, I’ve found that the simplest image always converts the best. On product pages, your customer needs to understand exactly what your product is without having to think.
A sticky add-to-cart is a known winner. Having the button also on screen, within reach, was an easy 8% boost to my CVR.
Don’t forget about post-sale upsells. It was easy to increase my average order value from $24 to $40. You’d be surprised how easy it is to sell more to people that are already buying.
Matt Phelps, CRO Specialist and Founder of STEEL.
Feeling Inspired? Here are 20+ elements A/B testing beginners can play around with on their e-commerce website:
- Offer free shipping
- Hero images vs. carousels
- CTA size
- CTA color
- CTA placement
- CTA copy
- Human images vs. no images
- Headline copy
- Font size
- Line height
- Personalization vs. none
- Back-in-stock notification
- Benefit-driven product descriptions
- Expert tip on the product page
- Highlighting discounts and offers
- Single vs. multi-page checkout
- Support during checkout
- Simple navigation menus
- Quick product view
- Product videos
- Upselling vs. cross-selling
- Tags on preview images
- User generated content
From the list of elements you can A/B test, it’s evident that product pages are the best place to get started. But in case you want to go down the rabbit hole of e-commerce test ideation, we have just the conversation for you with leading CRO expert Johann Von Tonder of AWA Digital.
But other pages on your site are also perfectly viable candidates for A/B testing.
Let’s look at which pages you can put to the test with some real-life examples from brands:
- Home Page
- Salty Captain changed the color of the announcement bar on their home page and got 234.54% more clicks and boosted CVR by 13.39%
- Legendary Wall Art experimented with the hero section and the CTA copy and increased their engagement by 325.39% and revenue by 30.07%
- byBiehl added a slider to showcase their important products resulting in increased category page visits (5.87%), revenue per user (3.25%) and CVR (19.73%)
- Category Page
- Copycat Fragrances added their version of Instagram’s Stories on their category pages increasing engagement by 4% and revenue per user by 18%
- Iceshaker switched up their category page to include their product story addressing common objections and got a 15.95% lift in conversions.
- Oliver Cabell focused on their user’s mobile experience modifying the layout and improving the design which resulted in a 14.86% lift in traffic and increased checkout page traffic by 5.49%
- Checkout Page
- Oflara recommended other items to shoppers when they were checking out with an Add to Cart button resulting in a significant improvement in overall revenue.
- Conscious Items removed friction from the checkout process with a sticky cart resulting in a 10% increase in revenue per user and a 10% increase in CVR.
- Homeware noted that users only bought one item on their Shopify store. So they simplified the checkout process to redirect users to the checkout page directly resulting in a 47.7% increase in CVR and a 71.4% increase in revenue per visitor on mobile.
Expert Tip: Focus on large changes
My best advice for first-time entrepreneurs conducting A/B testing for the first time is to focus on large changes. For example, a complete redesign of a product page. Small changes like changing button colors are unlikely to move the needle in a significant way.
By doing a complete page redesign and adding product explanation gifs to our product pages, we were able to increase the conversion rate by 40%.
Philip Pages, Founder at PostPurchaseSurvey.com and a mid 7 figure e-commerce Shopify brand.
Stats Concepts To Be Familiar With When You Run A/B Tests
Although A/B testing is used to compare two versions of your website, only looking at the numbers isn’t useful since that fails to take into account the statistical significance of the data. You’ll end up misinterpreting the results and hurting your sales.
So whether your in-house team is running point on the project or you hire a CRO agency, it’s important you familiarize yourself with A/B testing stat concepts you’ll hear a lot of.
Sample and Population
All the visitors landing on your site are deemed the population, while a sample is the subset of visitors that participate in an A/B test.
Mean, Median, and Mode
Mean = average
Median = value in the middle
Mode = repeated value
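As a quick illustration, Python’s standard statistics module computes all three. The order values below are invented for the example:

```python
import statistics

order_values = [24, 40, 24, 60, 32]  # example order values in dollars

print(statistics.mean(order_values))    # average: 36
print(statistics.median(order_values))  # middle value when sorted: 32
print(statistics.mode(order_values))    # most frequent value: 24
```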
Variance and Standard Deviation
Variance is the average variability of the data. The higher the variability, the less precise the mean is as a predictor of an individual data point.
Standard Deviation is the square root of Variance and is expressed in the same units as the original values making it intuitively easier to understand. On the other hand, Variance is expressed in the square of the original unit but is still important to the outcomes of your A/B tests.
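Continuing in code, here is how variance and standard deviation relate; the daily conversion counts are illustrative:

```python
import statistics

daily_conversions = [12, 15, 9, 14, 10, 18, 13]  # example daily counts

variance = statistics.pvariance(daily_conversions)  # in squared units
std_dev = statistics.pstdev(daily_conversions)      # same units as the data

print(f"variance={variance}, std dev={std_dev:.2f}")
```

Note that `pvariance`/`pstdev` treat the data as the whole population; use `variance`/`stdev` when the data is a sample.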
Statistical Significance
When an A/B testing dashboard says there is a “95% chance of beating original” or “90% probability of statistical significance,” it’s asking the following question: Assuming there is no underlying difference between A and B, how often will we see a difference like we do in the data just by chance?
Evan Miller, Statistical Software Developer (Source)
The significance level needs to be as small as possible. 1% is ideal, as it is equivalent to a confidence level of 99%. Insignificant results might mean what you’re seeing is actually a false positive, so it is important to wait for statistical significance, but that alone isn’t enough. If you don’t calculate a sample size that matches a minimum lift of your choosing (MDE, or Minimum Detectable Effect), you’ll have an increased chance of a false positive.
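To make the sample-size point concrete, here is a rough calculator based on the standard two-proportion formula at 95% confidence and 80% power. The baseline conversion rate and MDE are example figures, and a dedicated significance calculator is the safer choice in practice:

```python
import math

def sample_size_per_variant(baseline_cr: float, mde: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per variant for a two-proportion test.

    baseline_cr: current conversion rate, e.g. 0.03 for 3%
    mde: smallest absolute lift worth detecting, e.g. 0.006 (3% -> 3.6%)
    Defaults correspond to 95% confidence and 80% power.
    """
    p1, p2 = baseline_cr, baseline_cr + mde
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / mde ** 2)

# roughly 13,900 visitors per variant for a 3% -> 3.6% lift:
print(sample_size_per_variant(0.03, 0.006))
```

Notice how quickly the requirement shrinks as the MDE grows: chasing tiny lifts is what makes tests expensive.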
P-value
The p-value is the probability of obtaining results at least as extreme as the observed results of a statistical hypothesis test, assuming that the null hypothesis is correct.
But what you really need to know about p-value is this: “How surprising is this result?”
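If you want to see where that number comes from, a two-sided p-value for the difference between two conversion rates can be computed with nothing but Python’s math module. The traffic and conversion counts below are invented:

```python
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF expressed via the error function
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

# 300/10,000 conversions on A vs 360/10,000 on B:
p = two_proportion_p_value(300, 10_000, 360, 10_000)
print(f"p-value = {p:.4f}")
```

A small p-value says the observed difference would be surprising under the null hypothesis, which is exactly the "How surprising is this result?" framing above.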
How Long Should You Run an A/B Test on a Shopify Store?
There are two common fallacies you’ll run into often:
- End the A/B test when you reach statistical significance
- Monitor the p-values and declare the winner as soon as you hit the target.
Stopping a test should be based on sample size. And while you shouldn’t end your experiment early, it shouldn’t run forever either. If after 3 months you still haven’t reached significance, it’s best to try other changes on your site, preferably bolder ones.
Convert and Shopify recommend letting your tests run for at least two business cycles or 14 days.
Avid Faruz, CEO of Faruzo agrees:
New entrepreneurs need to know that in A/B testing, the timeframe matters a lot. The longer you run your A/B tests, the more accurate your results will be. This is because your tests will use more data points to derive results. Experienced marketers run their tests for as long as two weeks. I would advise all marketers and entrepreneurs to set a timeframe according to the level of traffic their websites get.
That is the reason our platform offers a 14-day free trial so you can test your hypothesis.
4 Step Process to Run A/B Tests on a Shopify Store
Ready to run tests?
Use this 4 step A/B testing process to build better tests and understand their impact.
#1. Conduct Qualitative and Quantitative Research
Conversion research is the first and most important step. This allows you to build hypotheses that you can A/B test. Also known as the discovery phase, this is when you put your operating assumptions to rest and let data guide you.
You’ll end up with two kinds of data: quantitative and qualitative.
Start with gathering quantitative data. These are the cold, hard facts you can’t argue with, which analytics engines like Google Analytics, Amplitude, or Mixpanel can spit out.
For instance, you may want to look at bounce rates, the total number of conversions, or pages viewed/session.
Once you’ve racked up the quantitative data, fetch qualitative data. Since this is subjective, there is potential for subconscious biases to creep in but interpreting your findings is the only way you can answer the “Why.”
Use Hotjar to generate heatmaps and record visitor sessions. The answers you may find aren’t definitive, but it introduces new possibilities contributing to a better hypothesis overall.
But before you jump into that, it’s important to look at both qualitative and quantitative data in tandem to have a holistic understanding. Analysis equals data querying plus critical thinking.
#2. Create Credible Hypotheses
Following the scientific method means you have to create a credible hypothesis—a proposed solution whose validity requires evaluation.
Matt Beischel, Founder of CorvusCRO, shares the 3 main components of a hypothesis: Comprehension, Response, and Outcome.
Here’s an example of what that would look like:
- Comprehension: We have observed a reduction in multi-item purchases by comparing the last 6 months of purchase data.
- Response: We want to promote paired products with an inline upsell at the cart page on mobile phones for returning users with an item already in their cart.
- Outcome: This should lead to single item purchasers more easily finding and purchasing complementary products, which will be measured by average order value (AOV) and backed up by average order size, multi-item purchase count, order conversion, and revenue.
To help you simplify and standardize the creation of hypotheses, we’ve got an A/B testing hypothesis generator.
At this stage, you also want to understand your sample size and calculate a stopping point for the test based on that. Use our A/B testing significance calculator for that.
Expert Tip:
Once you know your sample size and how long you should run your test, you need to set your testing priorities. You can choose to test different parts of the process like a single page, a whole website, popups, or paid ads. It’s best to focus on one part of the process at a time, so you can get clear answers about which changes are leading to improved customer experience and conversion rates.
Allan Borch, Founder of DotcomDollar.com
Prioritize Your Hypothesis
Experimentation has a ton of upsides which is why you’ll often see experts who advocate for testing everything. However, you have to prioritize which tests you need to run now and which experiments can wait because resources are limited no matter how small or large your company is.
So experimenters fall back on prioritization models like RICE, PIE, ICE, or PXL. But David Mannheim, Personalization Consultant, suggests that these models are flawed:
They lack alignment to the wider context of the business. Prioritization should be top-down, focusing on the business mission first, business objectives second, and so forth. Most prioritization models focus on the ‘execution,’ i.e., the very last thing within a triangle-y-hierarchy-diagram-thing with execution at the base, then concept, user problem, product objectives, business objectives, and mission at the top.
These models also use “effort” as a scoring factor which means you’re really holding back from building features that potentially have the most impact because they’re complex. Ultimately, these models lack objectivity.
Andrea Saez, Senior Product Marketing Manager at Product School, says,
There is no way you can know the reach, impact, or effort on most things without properly having vetted if you’re even working on the right things, even less if you haven’t spoken to anyone about it. So how could you possibly have any confidence?
The answer here is to build your own prioritization model.
Step 1: Get inspired by examples
Step 2: Account for factors like alignment with business objectives, iteration potential, company-specific learning, and resource investment.
Step 3: Assign weightage to tests you want to run
Step 4: Rinse and repeat until you find an acronym that works for you.
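As a rough illustration of what a homegrown model could look like, here is a minimal weighted-scoring sketch in Python. The factor names, weights, and candidate tests are all hypothetical, not a prescribed methodology:

```python
# Hypothetical weighting: tune these to reflect your own business priorities.
WEIGHTS = {
    "business_alignment": 0.40,
    "iteration_potential": 0.25,
    "company_learning": 0.20,
    "resource_investment": 0.15,  # higher score = cheaper/easier to run
}

def priority_score(test):
    """Weighted sum of 1-10 factor scores; a higher score means run it sooner."""
    return sum(WEIGHTS[factor] * test[factor] for factor in WEIGHTS)

# Example candidate tests (illustrative scores)
candidates = [
    {"name": "Free-shipping threshold banner", "business_alignment": 9,
     "iteration_potential": 6, "company_learning": 7, "resource_investment": 8},
    {"name": "New checkout flow", "business_alignment": 8,
     "iteration_potential": 9, "company_learning": 9, "resource_investment": 4},
]

ranked = sorted(candidates, key=priority_score, reverse=True)
```

Because "resource_investment" is scored so that cheap tests rank higher rather than being used as a divisor, a complex but high-impact test can still rise to the top of the queue, which addresses the "effort penalizes impact" flaw mentioned above.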
#3. Deploy the Test
You’ve got your research in place and built a credible hypothesis. Now it’s time to go to bat.
Successful deployment requires 3 things—the right A/B testing platform, the right team to code the tests, and QA and debugging.
Let’s start with the first one.
What makes for a good A/B testing platform for Shopify?
Ideally, you want a single tool that lets you test themes, pricing, menus, product collections, and search pages; run multivariate tests; and track revenue.
Many plugins can help you achieve one or more of these things, but we already know plugins cause code bloat, which isn’t good news for your SEO or conversions.
A dedicated testing platform like Convert Experiences seamlessly integrates with your Shopify store, lets you run all kinds of tests you want, and has a custom Shopify A/B testing app you can use, eliminating possible code bloat.
Next, you want to have the right team in place to code the tests.
Note: There is a difference between coders and coders who work with A/B testing teams.
Ultimately, testing is incomplete without QA and debugging. Without QA, variation errors can crop up, causing statistical errors—a false positive or a false negative. Not to mention, you may end up collecting the wrong data, which delivers zero value.
Here are 4 best practices for QA of A/B tests:
- Develop a QA strategy
- Identify what to QA
- Focus on page experience
- Align QA with conversion goals
Pro Tip: Avoid these rookie A/B testing mistakes:
- You only test industry best practices
- You keep peeking at your “results”
- You give up after one test
- You fail to iterate and improve on wins
- You mess up revenue tracking
#4. Analyze & Learn From Your A/B Tests
Whether you have a winner or loser on your hands, analyzing what worked and learning from it to influence future A/B tests is crucial.
Because while A/B testing is a strategy to boost your revenue, you’re also effectively “buying data” on your audience.
Here’s a 6-step process to learn from A/B tests:
- Make sure your data is accurate, valid, and significant
- Check your micro, macro, and guardrail metrics
- Segment your results
- Check user behavior
- Continue to improve on winners
- Create a learning repository for future tests
The last step allows you to run tests in the future that are backed by your previous experiments’ learnings.
Expert Tip: Be prepared to fail.
It is difficult to predict the conversion rate for your website even if you think you’ve created the perfect A/B test. As a new entrepreneur, I almost succumbed to the frustrations of seeing no success in the first few months. I am not used to failure and many entrepreneurs are like this. The focus should be to give the users the best experience and leave room for the unexpected.
Leslie Radka, Founder & Hiring Manager at GreatPeopleSearch
A/B Testing in Other Realms That Can Compound Your Shopify Store Gains
Don’t stick to just your website. A/B testing can and should be applied to other channels and realms where customer engagement occurs.
A/B Testing Pop-Ups (with Privy)
Those pop-ups you have on your website? You can A/B test them too with tools like Privy. Experiment with your headline, offer, form, CTA, or images.
Privy’s Convert tool allows you to present the pop-up in different formats and target visitors based on rulesets.
A/B Testing Emails
When it comes to email marketing, 3 core areas of improvement emerge—delivery, open rates, and CTR.
You can test your emails in this order:
- First, the subject lines to improve the open rate
- Then the body copy to make sure it’s relevant
- Finally, the CTAs to get more clicks
What else can you test in your email? Check out our complete guide to A/B testing emails.
Here’s how 2 Shopify entrepreneurs used A/B testing to grow their email marketing channel:

#1. Grew Email List 3x Using Split Testing
The most effective strategy for testing content is A/B testing. A/B testing has proven, measurable, immediate results that tell us whether one or another content base is more effective at converting customers to sign up for emails, make a purchase, etc.
In retail, vanity metrics like direct traffic to your website are least effective for measuring content success, while A/B testing (i.e., tracking conversion rate, user engagement, email funnels) is the most effective. We tested our email subscription CTA with split-testing and grew our email list over three times in one campaign. The better you know your ICP, the more effective your brand strategy will convert. Use A/B tests to understand your target demographic better, and spoon-feed them the content they respond best to.
Zach Goldstein, Public Rec
#2. Increased email open rate by 25% with emojis in subject lines
After seeing a study, I wanted to test out open rates using an emoji in the subject line vs. not using one. The study implied that using an emoji would help to enhance open rates, but I felt that it could come off as unprofessional and spammy.
I use the ActiveCampaign email platform alongside Shopify, and I actually integrate the two together to maximize customer communication. ActiveCampaign allows users to run many A/B tests so they can see what jives with their target audience. When the results were in, I had to admit that I was wrong because the emails with an emoji in the subject received a 25% higher open rate. It’s safe to say that I’ve been pretty liberal with my emoji keyboard ever since, and I’ve noticed a spike in conversion rates, too.
Stephanie Venn-Watson, fatty15
A/B Testing on Social Media
Like paid ads, you can test your organic content on social to improve engagement. The heading, copy, images, and CTA can all be A/B tested.
When doing this manually, stagger the release of your posts with a reasonable gap between them so you can gather meaningful data.
Or you can use scheduling tools like Later, Buffer, or MeetEdgar to automate the publishing.
Ecommerce A/B Testing Pitfalls to Avoid
Our need for instant gratification also seeps into A/B testing. Jon Ivanco, Co-founder of Formtoro, believes most A/B testing is reactionary:
Brands want a quick fix that’s cost-effective; they hate the idea of investing in long-term outlooks and gains. The only time they look at these things is when things aren’t going well.
There are “experts” that are anything but experts, bad advice presented as best practices, and experiments designed to pick the low-hanging fruit.
Ivanco instead recommends getting the basics right:
- Run all tests on landing pages
- Run all tests against specific audiences
- Test one variable at a time
- Don’t test unless you have a clearly articulated hypothesis you can learn from, whether the test wins or fails
- Design all tests from the perspective of the customer journey
- Small things are part of a larger chain, so isolate things as much as you can, one step at a time
Give Privacy a Thought
No one wants to become a lab rat inadvertently.
The backlash to Facebook’s 2014 emotional contagion study is proof. Even Apple’s privacy updates signal that users care about their privacy and don’t want to be manipulated into buying products.
Laws around privacy—existing and upcoming—will continue to evolve. Each time a significant change is brought about, it will hurt your business unless you start thinking user-first and bake ethical A/B testing into your strategy.
So what does that mean for you?
- Take data privacy seriously when collecting data
- Rule out manipulative tactics
- Store and process data securely
- Respect user consent and allow them to opt-out of experiments
Do that and you will future-proof your A/B testing and build a better relationship with your audience.
Originally published April 12, 2022 – Updated March 28, 2023
Authors
Sneh Ratna Choudhary
Sneh is a B2B SaaS Content Marketer & Strategist for brands that want to create quality content that converts.
Editors
Carmen Apostu
In her role as Head of Content at Convert, Carmen is dedicated to delivering top-notch content that people can’t help but read through. Connect with Carmen on LinkedIn for any inquiries or requests.