If you’re like me, you’ve received that question more times than you can count. And if you’re like me, your answer to it has become a lot more subjective over the years.
In my last article for Search Engine Journal, I said that “it depends” is actually a responsible and appropriate answer to most marketing questions. I think the same applies to most SEO questions.
I remember the frustration I felt after running SEO experiments. I would apply the same change to multiple websites, only to get a positive result on some and a negative result on others.
It was completely deflating — did I do something wrong? Did some other change throw off the experiment?
I would spend hours trying to find an explanation for my inconclusive results, only to throw up my hands in defeat, resigning myself to the depressing belief that SEO would forever remain an enigma.
It wasn’t until much later that I realized, although SEO was much more nuanced than I had originally thought, it wasn’t impossible to figure out.
The solution, in my mind, is to treat SEO as subjective: different pages need different factors to rank for different queries.
That’s not to say there aren’t best practices we should follow (like having an up-to-date XML sitemap) or confirmed ranking factors (like links or mobile-friendliness). It’s just that how effective they are at producing your desired result will vary with factors like the size of your website, your competition, your industry, and even the time of year.
For example, Tom Capper’s presentation on the two-tiered SERP showed how volatile the page 1 SERP was for the term “Mother’s Day flowers” in the two weeks leading up to Mother’s Day.
And Botify’s data shows how crawl budget optimizations can have substantial ranking and traffic benefits on large sites while making little-to-no difference on smaller sites (disclaimer: I work for Botify).
We need to get comfortable with nuance if we want to be effective SEOs.
Rob Ousbey recently gave a great presentation at MozCon 2019 on this very topic.
He explained that he and the team at Distilled would set up SEO A/B tests (note: these are different from A/B tests for CRO; the former splits pages into groups, while the latter splits users into groups) to see, for example, whether removing content on product category pages would help or hurt their organic search traffic.
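To make the mechanics concrete, here’s a minimal sketch of how a set of similar pages might be split into buckets. It’s my own illustration, not Distilled’s actual tooling, and the function name and URLs are invented.

```python
import random

def split_pages(urls, seed=42):
    """Randomly assign similar pages to test and control buckets.

    A fixed seed keeps the assignment reproducible between runs.
    """
    rng = random.Random(seed)
    shuffled = list(urls)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "test": shuffled[:midpoint],     # pages that receive the change
        "control": shuffled[midpoint:],  # pages left as-is
    }

# Hypothetical category pages being considered for the content test
buckets = split_pages([
    "/category/running-shoes",
    "/category/hiking-boots",
    "/category/sandals",
    "/category/sneakers",
])
print(buckets["test"], buckets["control"])
```

Random assignment matters here: if you hand-pick which pages get the change, you risk baking your own bias into the result.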
What did they learn?
The same change led to improved traffic on some sites and decreased traffic on others.
Data like this tells us that no change (even one considered an “SEO best practice”) is guaranteed to have the same impact on any two sites.
So what are we supposed to do?
We know that Google wants to serve the most relevant answer to searchers’ queries. We also know that “relevance” is a subjective quality. So if you want to find what works for your unique site, you’re going to have to test.
And what better way to test our SEO theories than the scientific method?
What SEO mystery do you want to get to the bottom of?
Documenting your observation or question can help keep your SEO experiments on track. Try to keep a single focus, testing one thing at a time.
If you’re at a larger organization where you need to get executive buy-in before you can run experiments, it’s a good idea to use your own website’s data to point you in a direction where the likelihood of positive impact is high.
If your existing data indicates that your highest-ranked pages are those with a low page depth (i.e., few clicks from the homepage), you can use that to make a case to your boss.
For example: “Our data indicates that low page depth correlates with better rankings. We’d like to confirm that hypothesis by running an experiment where we reduce the depth of a set of low-ranking pages. If it’s successful on our test group, we’ll work on reducing the depth of all our key pages.”
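If you want to put a number on that relationship before making the pitch, a quick sketch like the one below can help. The CSV name and columns are assumptions about what your own rank-tracking export might look like.

```python
import pandas as pd
from scipy.stats import spearmanr

# Assumed export with one row per page: its click depth from the
# homepage and its average rank position for tracked keywords.
df = pd.read_csv("page_depth_vs_rank.csv")  # columns: url, depth, avg_position

# Spearman asks only whether deeper pages tend to rank worse,
# which is all this pitch needs; it doesn't assume a linear relationship.
rho, p_value = spearmanr(df["depth"], df["avg_position"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")

# A positive rho (greater depth goes with worse rank positions) backs up
# the "low depth correlates with better rankings" observation.
```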
When you do this, your boss is much more likely to approve your experiment.
Next, you’ll want to research your topic. Look for any existing documentation on the subject – Google Webmaster blog posts, Google patents, third-party research, etc.
Research will help you narrow down to the most realistic hypothesis.
What is your educated guess to explain your observation?
The rest of this process will seek to prove or disprove your hypothesis.
Now onto the hard (or fun, depending on how you look at it) stuff. It’s time to run an experiment.
It can be difficult to run an experiment and get clean results.
For example, how do you know if the traffic increase or decrease was caused by your test, and not an algorithm update? Or seasonality? Or some other change made to the website at the same time?
One good way to control for this is to apply your change to multiple, similar pages on your site while leaving a second group of pages unchanged – a test group and a control group.
Not only does this help give you more conclusive results, but it also ensures you’re not wasting your time rolling out a bad or neutral change site-wide. You’ll end up only spending time on changes you’re confident will work in your favor.
Now it’s time to analyze your data so that you can draw a logical conclusion. You’re essentially trying to uncover, based on the data, whether your hypothesis was right or wrong.
If you ran your experiment as an SEO A/B test, for example, improved metrics on the test group relative to the control group suggest that your hypothesis was correct.
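Here’s a minimal sketch of what that comparison might look like. The file name and columns are assumed, and the simple t-test is my own shortcut; real SEO split-testing platforms use more sophisticated forecasting models.

```python
import pandas as pd
from scipy.stats import ttest_ind

# Assumed export: per-page organic sessions before and after the change,
# labeled with the bucket each page was assigned to.
df = pd.read_csv("experiment_traffic.csv")
# columns: url, bucket ("test"/"control"), sessions_before, sessions_after

df["pct_change"] = (df["sessions_after"] - df["sessions_before"]) / df["sessions_before"]

test = df.loc[df["bucket"] == "test", "pct_change"]
control = df.loc[df["bucket"] == "control", "pct_change"]

# Comparing *changes* between buckets helps cancel out site-wide noise
# like seasonality and algorithm updates, which hit both groups alike.
t_stat, p_value = ttest_ind(test, control, equal_var=False)
print(f"test {test.mean():+.1%} vs control {control.mean():+.1%} (p = {p_value:.4f})")
```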
But what metrics should you be looking at?
Not all metrics are created equal, even in SEO where keyword rank position seems to reign supreme.
Every metric measures something different, so make sure you’re picking the ones that most directly measure the specific work that you did.
For example, if your hypothesis was “reducing redirect chains on our internal links will improve Google’s crawl of our site,” then the metric you’ll want to use is crawl ratio (how many of your pages Google is crawling vs. missing), which you’ll need your log files to measure.
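As a rough sketch of how crawl ratio could be approximated by hand (the file names are assumptions, and dedicated log analysis tools do this far more robustly), you might compare Googlebot hits in your access log against the set of pages you expect to be crawled:

```python
import re

# Assumed inputs: a file listing the URL paths you expect Google to crawl
# (e.g., extracted from your XML sitemaps) and a standard access log.
with open("all_urls.txt") as f:
    known_paths = {line.strip() for line in f if line.strip()}

log_pattern = re.compile(r'"(?:GET|HEAD) (\S+) HTTP')
googlebot_paths = set()

with open("access.log") as f:
    for line in f:
        # Naive user-agent check; real analysis should verify Googlebot
        # via reverse DNS, since user-agent strings are easily spoofed.
        if "Googlebot" in line:
            match = log_pattern.search(line)
            if match:
                googlebot_paths.add(match.group(1))

crawled = known_paths & googlebot_paths
ratio = len(crawled) / len(known_paths) if known_paths else 0.0
print(f"Crawl ratio: {ratio:.1%} ({len(crawled)} of {len(known_paths)} known pages)")
```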
This isn’t to say that each SEO activity only impacts a single metric – not at all!
Even technical changes like crawl budget optimizations can have a positive impact on your rankings and traffic. It’s just always a good idea to pick the most direct measures of your activities in order to determine whether the experiment was a success or failure.
Finally, it’s time to report on your results.
This is an important step, because the conclusions you draw can affect how other people think about SEO.
When you publish your results, be careful not to overgeneralize: what worked (or didn’t) on your site won’t necessarily hold for everyone else’s, so present your findings with that caveat.
There are plenty of “SEO best practices” and “ranking factors” lists out there – too many to count.
While these listicles might offer you the temporary relief of thinking “If I just follow this list, I’ll be successful!” they disappoint in the long run.
Following a checklist is simply not sufficient for SEOs living in the era of the modern web and a search engine that learns faster than we ever could.
We need to embrace a “test everything” spirit to see what works and doesn’t work to improve key SEO metrics (not just rankings!) on our own unique websites.
There are likely even variations in what works and doesn’t work within a single website! For example, your product pages might need much different treatment than your blog pages or your forum pages.
Using the scientific method on your website is a much more definitive way to reach educated conclusions about what works and what doesn’t.
Now go out there and be the best SEO scientists you can be.