SEO A/B Split Testing 101

January 5, 2021

Experimentation is at the heart of digital marketing. Whether you’re testing different ad formats or running CRO experiments on landing page designs, A/B tests allow you to validate large-scale changes and enhance your conversion funnel.

A/B tests are the way to go – by setting up a control and a variation, you can measure the estimated impact of a change before scaling it up. But when it comes to SEO, it’s not as easy as it sounds.

Why You Need SEO A/B Testing

SEO can be tricky. In order to maintain & improve its dominance in the search engine market, Google is constantly tweaking its algorithm – making the art of SEO a game of constant adjustments. 

Several core updates in the past year generated significant buzz in the industry. Still, the reality is that Google makes, on average, multiple changes to its search algorithm every day – totaling thousands each year.

The dynamic nature of search makes your pursuit of higher SERP visibility & click-through rates that much more difficult. Every search market and every domain is unique. Therefore, testing different meta descriptions, title formats, structured data, and page layouts (just to name a few) is necessary to uncover your recipe for optimization.

Another SEO challenge is that the outcome of any campaign or test you run is uncertain. While your hypothesis may seem sound, there’s no guarantee that it’s valid.

For example, sitewide changes can have extensive negative consequences: potentially dropping your visibility & CTRs, setting your business back significantly, and wasting serious time and money.

So, you have to treat SEO like scientific research – using the scientific method to formulate a hypothesis, test it, analyze the results, and make informed decisions from there.

By first running controlled experiments on desired changes, we can transform any potential failures into proven or rejected hypotheses, which we can learn from and use to modify our approach and expectations for future experiments.

The Problem – Traditional A/B Tests Don’t Work for SEO

With traditional CRO (conversion rate optimization) and UX (user experience) tests, you can simply split-test users by creating multiple versions of a page/element and randomly presenting either version to your audience. Then, you can gather the outcomes and run a simple chi-squared test to measure and validate the impact, as sketched below.
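
To make that concrete, here’s a minimal Python sketch of such a chi-squared check using SciPy. All visitor and conversion counts are invented for illustration; this is a sketch of the technique, not a production workflow.

```python
# A minimal sketch of validating a CRO split test with a chi-squared test.
# All visitor/conversion counts below are hypothetical.
from scipy.stats import chi2_contingency

control_conversions, control_visitors = 180, 5000   # version A
variant_conversions, variant_visitors = 240, 5000   # version B

# 2x2 contingency table: rows = groups, columns = converted / not converted.
table = [
    [control_conversions, control_visitors - control_conversions],
    [variant_conversions, variant_visitors - variant_conversions],
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# A small p-value (conventionally < 0.05) suggests the difference in
# conversion rates is unlikely to be explained by chance alone.
if p_value < 0.05:
    print("Statistically significant difference between A and B.")
else:
    print("No significant difference detected.")
```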

But with SEO, Google’s bots add a new layer of complexity to the equation. You cannot simply present two versions of the same webpage to Google for indexing, so we have to get a bit more creative. There are two main approaches to this: 

  1. Before & After A/B Tests – measuring traffic/clicks/CTR for a period of time, making a change to the page(s), and then measuring these KPIs over the same time interval. 

  2. Statistical A/B Testing – randomly splitting pages that share the same layouts/templates into either a control or variant group (see the sketch after this list), using JavaScript (or a tool like Google Tag Manager) to implement the tested change on the correct group of pages, and then estimating the causal impact after a set time period.
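
To make the statistical approach concrete, here’s a minimal Python sketch of the page-splitting step: it hashes each URL into a stable control or variant bucket. The URLs, the salt, and the 50/50 split are hypothetical choices for illustration, not SplitSignal’s actual mechanism.

```python
# A minimal sketch of splitting same-template pages into control and
# variant groups for a statistical SEO A/B test. URLs and salt are
# hypothetical; hashing keeps each page's assignment stable across runs.
import hashlib

def assign_group(url: str, salt: str = "title-test-001") -> str:
    """Deterministically bucket a URL so it always lands in the same group."""
    digest = hashlib.sha256(f"{salt}:{url}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

pages = [
    "/products/red-widget",
    "/products/blue-widget",
    "/products/green-widget",
    "/products/yellow-widget",
]

for page in pages:
    print(f"{page} -> {assign_group(page)}")

# A JavaScript snippet or GTM tag would then apply the tested change
# (e.g., a new title format) only to pages in the "variant" bucket.
```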

Unfortunately, there are several major challenges and shortcomings when using these methods for SEO. 

Proper experimentation requires isolating a single test variable, and before & after tests simply cannot accomplish this. It’s nearly impossible to know whether an influx of clicks was due to the applied change rather than a shift in the market, your audience, Google’s algorithm, or the SERP layout; the list goes on. On top of this, before & after tests double the timeframe needed to complete the experiment.

Statistical A/B testing is much more comprehensive and analytically sound; however, it’s also much more complicated and resource-heavy to conduct. 

The process of splitting test groups and setting up JavaScript/GTM to apply the changes can be tedious and demanding, and estimating causal impact and statistical significance afterward is even more challenging. When tests involve large numbers of pages, the process becomes far less feasible.
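
As a rough illustration of what estimating causal impact involves, here’s a simplified Python sketch: it fits the pre-test relationship between control-group and variant-group clicks, then uses it to forecast a counterfactual for the test period. Production-grade tools rely on far more sophisticated models (e.g., Bayesian structural time series), and every number below is invented.

```python
# A simplified sketch of causal impact estimation: model variant-group
# clicks from control-group clicks during the pre-test period, then
# compare the forecast (counterfactual) with what actually happened.
# All daily click counts are invented for illustration.
import numpy as np

control_pre = np.array([100, 98, 105, 110, 95, 102, 99])
variant_pre = np.array([52, 50, 55, 57, 48, 53, 51])
control_test = np.array([102, 99, 107, 111, 96, 104, 100])
variant_test = np.array([60, 58, 63, 66, 55, 61, 59])

# Fit a simple linear relationship (variant ~ a * control + b) on pre-test data.
a, b = np.polyfit(control_pre, variant_pre, deg=1)

# Forecast what the variant group *would* have done without the change.
counterfactual = a * control_test + b

lift = variant_test.sum() - counterfactual.sum()
lift_pct = 100 * lift / counterfactual.sum()
print(f"Estimated incremental clicks: {lift:.0f} ({lift_pct:.1f}% lift)")
```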

To sum it up: 

  • Before & After Tests are too simplistic to arrive at statistical significance

  • Statistical A/B Tests are too complicated and overwhelming

  • Both methods require significant time and/or resources to properly perform 

But…what if we could take the methodical & data-driven nature of A/B Split Testing and make it easy to perform for SEO? 

The Solution – SEO A/B Testing With SplitSignal

SEMrush’s SEO A/B testing tool, SplitSignal, allows you to easily design and execute SEO split tests. In-depth, automated statistical models accurately estimate the causal impact of changes, and successful tests can be scaled to your entire site with ease.

The solution is designed for efficiency and ease of use: you simply set the conditions for a test, implement our JavaScript snippet, and sit back to wait for the results of your experiment.

SplitSignal interface: Test Creation Process

You can test any on-page element: titles, meta tags, structured data, ad block placements, alt tags, and more. Our analysis uses your existing traffic trends to build a baseline model that helps determine the estimated impact of changes while eliminating external variables & noise.

This method yields confidence in the test results, whether positive or negative, allowing you to accurately estimate the impact of potential sitewide alterations.

Final Thought

Changes to Google’s ranking algorithm are inevitable, and there’s no secret recipe for increasing organic traffic. This is why experimentation is a necessary element of SEO; by running experiments before making a sitewide change, you avoid a potentially damaging drop in traffic and clicks while also getting to test more variations.

Unsuccessful tests are fine, too; in fact, they’re at the core of the scientific method. A rejected hypothesis isn’t a failure; it’s a conclusion – something we can learn from and use to construct future iterations. However, without an easy way to streamline this experimentation, a lot of time and resources can be wasted.

SplitSignal interface: Test Results

With SplitSignal, you can streamline experimentation on your site, mitigate the potential risks of large-scale changes, and uncover some major wins for your organic traffic and visibility.

How much has fear of wasted resources stopped you from taking risks with your marketing strategy? How do you justify investing in resource-heavy changes to your site when their impact is uncertain?

With SplitSignal, you can let your creativity flow without any anxiety and take your SEO to the next level. Don’t let loss aversion limit your brand’s marketing potential.
