Do Your Case Studies Double as Advertisements?

Ammon Johns πŸŽ“
Experiments, Tests, and Case Studies
We get a fair few people wanting to post 'case studies' that sadly just don't actually work as case studies, and are really just infomercials for the company involved. We'd love to see more genuine case studies that can pass our guidelines against spamming the group members with advertising, so I thought I'd offer a few helpful tips.
This also ties into the related topic of sharing experiments and 'test results' in a way that can actually help people in the community, and yourselves. Many of the same principles apply, and it is my hope that in sharing some tips and guidance, we might see more testing and sharing of experiments here.
First, take control.
One of the things lacking in so, so many tests and so-called experiments is a control group or control data. If in a test you simply change something, and you don't have a 'control group' where you didn't make the change, then you cannot tell whether any change in results is due to the changes you made, or simply to one of the continual shifts in search patterns, trends, etc.
Without a control group, any change you saw may have happened anyway, and cannot be tied specifically to what you changed.
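A minimal sketch of that point, with entirely hypothetical numbers: a difference-in-differences calculation compares the change on the pages you edited against the change on a comparable set you deliberately left alone, so the background trend drops out.

```python
# Minimal sketch of why a control group matters (all numbers hypothetical).
# Weekly organic clicks, before and after a change, for pages we edited
# and a comparable set of pages we deliberately left untouched.

def diff_in_diff(test_before, test_after, control_before, control_after):
    """Estimate the effect of a change, net of the background trend."""
    test_delta = test_after - test_before            # raw change on edited pages
    control_delta = control_after - control_before   # background shift (seasonality, trends)
    return test_delta - control_delta                # change attributable to the edit

# Edited pages went from 1000 to 1300 clicks: +300, looks great.
# But the control group also rose, from 900 to 1150, so +250 of that
# would likely have happened anyway.
effect = diff_in_diff(1000, 1300, 900, 1150)
print(effect)  # 50, not 300
```

Without the control line, the +300 would have been reported as the result of the change; with it, most of the lift is revealed as trend.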
Without Method there is only madness.
Methodology is absolutely fundamental. Spell out exactly what you did in your test, study, or experiment, in such a way that it serves as complete instructions for others to run the same experiment and get the same results.
Without a fully detailed methodology you don't have a study, you have an anecdote and nothing more.
You want people to be able to point out flaws you may have missed so that YOU get better data and find the stuff that works faster. We all have biases.
Did you know that the list of cognitive biases so many cite today was originally compiled by two researchers looking at the flaws and biases in proper scientific studies? Biases happen, even to the smartest people. You need a third-party viewpoint to spot a lot of things other than just whether your ass looks fat in those jeans.
Always include a detailed methodology that would allow a junior you were training to use it as a full set of instructions to run the same test for themselves. It's a little extra work, but not only does it increase the credibility of your study immensely, it also allows people to help you refine your tests, or spot flaws in them, before they become embarrassing.
If I see a supposed 'study' without a decent methodology explaining how they ran the experiment, what controls they had, what the sample size was, and how they accounted for other possible factors and eliminated coincidence, then I'm already 60% sure they're just making it up, or hiding how poor the 'study' actually was.
Providing Value
For a group like this, or indeed to earn links and shares online as an article, your study or test needs to have value to the reader, not just sound fancy to you the creator.
If you include the methodology, then just your thought process alone can be educational and useful to others. It can invite shares, commentary and feedback – what we call 'peer review'. Even a test with no amazing or unexpected results can become an exemplary proof of something not working as expected or predicted, and have value.
The main thing, in all cases, is that the reader should feel informed, that either they have information on something to test for themselves, or have confidence that you really did test it thoroughly and carefully. They may perhaps think of further refinements and cite your own test in the background methodology of their own.
Just saying you tested stuff, no matter what numbers you then throw around, without a methodology and control group, is just an opinion, not a study.
50 πŸ‘πŸ½17 πŸ’Ÿ67
30 πŸ’¬πŸ—¨

Steven πŸ‘‘
The issue with Search Engine Optimization (SEO) testing is the inherent battle between shadow marketing efforts that benefit the tester and any genuine attempt to embrace scientific methodology, regardless of the tester's intent. It's a hard sell, as the testing data is perceived as a form of soft sell. Just my 2 cents.

Ammon Johns ✍️ πŸŽ“ Β» Steven
Sadly, yes, most of the people who shout the loudest about 'testing' want to seem scientific, without actually following any of the scientific principles. But there are some strong exceptions, and I'll try to call out a few examples, as per below.
Steven πŸ‘‘ Β» Ammon Johns
No doubt, many tests are done with the intention of presenting the facts but most people aren't qualified to test. Most data presentations are an opinion or a consensus among select SEO users at best.
Ammon Johns ✍️ πŸŽ“ Β» Steven
People are not qualified to write until they learn literacy (and even then, many need a lot more practice). I think the same is true of testing. I think almost anyone can learn the basics of scientific method and start to create some kind of value.
Sure, not everyone will have the resources for large-scale or complex testing, no question on that, but I think almost anyone can learn to do basic testing and simple but meaningful experiments.
Steven πŸ‘‘ Β» Ammon Johns
"almost anyone can learn the basics of the scientific method and start to create some kind of value"… that is true indeed. The problem is the test data can be skewed, and marketers are extremely good at this. It always boils down to how much I can make using the test data I create in the SEO space. So, all SEO test data should be consumed with extreme caution.
Ammon Johns ✍️ πŸŽ“ Β» Steven
Exactly. And that is precisely why the methodology is vital to me. It shows you exactly where someone has positioned the smoke and the mirrors, where they have forced a particular view or perspective, and, of course, why so many of the 'testers' make sure to omit it.
Natalie Β» Ammon Johns
Yes, I have noticed this: the "we can't share our special sauce with you unless you pay xxxx" approach. That's fine if someone doesn't want to share the bread and butter of their business, but then the case study isn't a true case study, just a thinly veiled sales pitch.
Alex Β» Steven
I think it goes a step further than this, in that they intentionally skew the data. I agree with Natalie that if you charge to get the actual teaching, then it's not a lesson, it's a sales pitch. I get that there is an entire niche of people selling courses; I just wish they were vetted a bit, though I also understand that it's hard to vet that which hasn't been proven.

Ammon Johns ✍️ πŸŽ“
One classic example that I think many of you would know is Brian Dean's original 'Skyscraper Technique' article, where the page about the technique is the case study, and where he spells out the methodology and shows the test itself, which was the 200 ranking factors page.
That article didn't just tell you what he'd done and how to do it yourself; it *showed* the example, everything you needed to run the test yourself, and it launched his career and fame.
Link Building Case Study: How I Increased My Search Traffic by 110% in 14 Days
Andrew πŸŽ“
Ammon you are such a good guy. Thanks for taking the time to post this.


Roger Β» Ammon Johns
I would like to suggest that for many SEO tests it's impossible to use a control, because you cannot simultaneously test a set of URLs and not test the same URLs.
A control would have to be between two different sets of URLs and we all know that two sets of URLs (with different content) are never equal, so the test would be inherently flawed and not accurate.
There IS a way to do it, but you would need a Python package called CausalImpact. However, CausalImpact and Python are beyond my skills. Nevertheless, I would urge everyone with the skillset to investigate CausalImpact, because it's a cool technology to do just this sort of thing.
CausalImpact can infer what the before and after difference was and can tell you what the net effect of something was in the absence of a control by creating an ARTIFICIAL CONTROL.
This is one of the coolest things I've seen in a while, and if you know how to use Python, then you're in business.
Inferring the effect of an event using CausalImpact by Kay Brodersen
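For readers curious what "creating an artificial control" means in practice, here is a heavily simplified sketch of the underlying idea, not the CausalImpact package itself and with purely hypothetical numbers: regress the target series on a reference series that was not affected by the change during the pre-period, then use that fit to predict the counterfactual post-period.

```python
# Simplified sketch of the idea behind CausalImpact (not the package itself):
# build an artificial control by regressing the target series on an unaffected
# reference series during the pre-period, then use that fit as the
# counterfactual for the post-period. All numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical daily clicks: target site vs. a reference site we didn't touch.
ref_pre,  tgt_pre  = [100, 110, 120, 130], [205, 225, 245, 265]
ref_post, tgt_post = [140, 150], [335, 365]   # the change was made before this period

a, b = fit_line(ref_pre, tgt_pre)               # pre-period: tgt ~= a + b * ref
counterfactual = [a + b * r for r in ref_post]  # expected clicks with no change
effect = [actual - expected for actual, expected in zip(tgt_post, counterfactual)]
print(effect)
```

The real package uses Bayesian structural time-series rather than a single regression line, but the logic is the same: the reference series stands in for the control group you couldn't have.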

Ammon Johns ✍️ πŸŽ“
Ah, Roger, my dear friend, you quit too easily. 😉 There are ways to get close enough, they just take more time and more work.
You're absolutely right that you often need to use multiple URLs and that that then adds in its own differences and variables. So you have to then identify all of those variables you can, and work out how to negate the differences.
In your example of needing to test a specific content thing, we'd need 2 URLs (at least; more is better, to get an adequate sample that removes pure coincidence), and those URLs introduce their own variables, such as whether they have the exact same links pointed at them (and the order of linking is something to account for), the same length, etc.
To do this, one method is to switch which one has which content, (again, more than once to remove pure coincidence), because even if switching content introduces the new element of 'content that changes', that applies to both equally – they obviously both changed just as much.
Another would be to use 2 or more different subdomains with carefully similar and neutral naming conventions, and alternate the naming structures to again 'balance' out the difference.
The trick is not to eliminate all other variables, but to account for them and neutralize them.
Funnily enough, I recently had a conversation with someone who does a lot of 'testing' (never with methodology), and I was explaining how Single Variate Testing (SVT) had become harder over the years as more and more things had become potential signals or variables to negate. He disagreed, telling me it was easy and he did it all the time. That told me a ton about the quality of his 'testing'.
Roger Β» Ammon Johns
I have to respectfully disagree about creating a reliable control for understanding how Google responds to a test.
I think the major problem is that someone who is a statistician, an actual expert, isn't crafting these tests.
I think we'd have to bring in a statistician who can give us their opinion about the feasibility of it, and take their word on it.
The only way that I see getting close to it is with the CausalImpact package, which is based on science.
Ammon Johns ✍️ πŸŽ“ Β» Roger
The old saying that "there are lies, damned lies, and statistics" has a solid reason both for coming to be, and for continuance. The fundamental and inescapable flaw of all statistics is that it measures the past to predict the future, but the future makes no promises to play along.
There's a wonderful line in a book that David Kutcher once sent me which it says is an old Arabic proverb: "That which happens once, can never happen again. But that which happens twice will always happen again". I adore that line, and it has serious application to statistics. Everything is as we know it to be, until suddenly we have an example where it isn't.
Imagine that I flip a coin ten times in a row, and it comes down 'Heads' all ten times. A statistical study would conclude that when I flip a coin it always lands 'Heads', because that is the only data that has been generated. Only the assurance of a physicist's or engineer's study that the coin has a 50:50 chance and isn't rigged changes that. Meanwhile, the average person, knowing that history, often feels the coin is much more likely to come up 'Tails' on the next flip, when in reality the coin has no idea it came up heads before, and still has the same 50:50 chance per flip that it always had.
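The coin's lack of memory is easy to demonstrate with a short seeded simulation: condition on a streak of heads and the next flip is still roughly 50:50.

```python
# Sketch of the coin-flip point above: a fair coin has no memory.
# Even after a streak of heads, the next flip is still 50:50.
import random

rng = random.Random(42)  # seeded so the run is reproducible
streak_then_heads = 0
streak_total = 0
for _ in range(200_000):
    flips = [rng.random() < 0.5 for _ in range(4)]
    if all(flips[:3]):            # the first three flips were all heads
        streak_total += 1
        streak_then_heads += flips[3]

p = streak_then_heads / streak_total
print(round(p, 2))  # close to 0.50 despite the streak
```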
In an informal study, test, or experiment, we are reliant on observational data, and simply doing as much as we can to eliminate the biases. We cannot 'prove' much, but we can gain evidence and data. We can prove that a thing is possible if we see it happen and have eliminated all other reasonable explanations. We cannot prove something is impossible the same way, because absence of proof is never proof of absence. We can only see that it *didn't* happen in a real-world scenario.
Knowing what one can and cannot test, and what significance any results really have, is part of why peer review is so important.
CausalImpact is not magic. It is simply looking for extra and alternate correlational data, and so should our studies whenever possible. Peer review draws upon a wider range of views and experience, increasing the chance of finding those other correlational data points. However, as my post is attempting to explain, without a shared, detailed methodology, that peer review is effectively impossible.

Well said! Most of the SEO case studies I see are written by marketers who do not understand the scientific method. If there is not a robust methodology at the beginning, then I stop reading.
Good post. Just one correction: not every change you make can have a control group, and in those cases we usually use a t-test statistical model.
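For readers unfamiliar with it, here is a minimal sketch of the t-test idea (the Welch variant, on hypothetical before/after click data; a real analysis would also convert the statistic to a p-value):

```python
# Minimal sketch of Welch's t-test on hypothetical data. The t statistic
# measures how large the difference in means is relative to the noise in
# the samples; a large |t| suggests the difference is not mere chance.
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2                         # squared standard error
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical daily clicks before vs. after a change:
before = [120, 135, 128, 140, 132]
after  = [150, 161, 155, 170, 158]
t, df = welch_t(before, after)
print(round(t, 2), round(df, 1))
```

For these made-up samples t comes out around -5.9 on roughly 8 degrees of freedom, i.e. the after-period mean is well outside the noise of the before-period; in practice libraries like scipy provide this as `scipy.stats.ttest_ind` with `equal_var=False`.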

Ammon Johns ✍️ πŸŽ“ Β» Oren
Absolutely. There are things we cannot absolutely 'test', and instead have to attempt to model. But for the purposes of SEO testing and case studies, I think we can *almost* always design a test or study that does have a control group.
I never claimed it would be easy, but I think it is almost always possible, and that going the extra mile, doing what is difficult, is where the value lies (and thanks to scarcity is likely to remain).

Ammon Johns ✍️ πŸŽ“
Just to note that testing doesn't have to be perfect. Sure, ideally it always would be. But what I mostly am driving at is that we all ought to be striving for far, far better than is currently common.
Glen (ViperChill) had a study on Twitter that I greatly enjoyed, even though in the end the experiment was a failure. What I loved was that he spelled out the methodology, had no smoke and mirrors about his sources, and finally, was really honest about the failings.


