SEO Mistakes: Horror Stories from Well Known SEOs

David Broderick
Last Updated: Sep. 25, 2024

Failures. Fuck-ups. Facepalms.

We’ve all done ’em.

And what better way to overcome the SEO mistakes that haunt your nightmares than to see how some of the biggest names in the industry have messed up too?

So, to celebrate the 100th edition of Rich Snippets, we asked the smartest people we know to share their biggest SEO fails with us.

Hear from some SEO and SEM legends, keynote speakers, and seriously smart marketers who’ve…

  • Deindexed entire websites
  • Rank tracked local search queries at the South Pole
  • Sent confidential documents to the wrong client by mistake

And much more!

Some of these horror stories will make you laugh. Some will make you want to die from second-hand embarrassment. But they’ll all make you feel a bit better about your own SEO fails. (Keyword cannibalization or that 4xx error might now seem like the least of your worries.)

Read on to feel seen, folks…

Some of the world’s smartest SEOs’ biggest fails

“Once in 2015 I accidentally deindexed my entire site using the URL removal tool on an http:// variation.

By the time my boss called me back, I had resubmitted all the key pages to the index, run the $ impact, and literally run 5 miles.”

Jamie Indigo, technical SEO consultant and the author of our Rich Snippets newsletter

“My biggest SEO fail has got to be writing unclear documentation for a developer who was implementing our SEO recommendations for a client. Instead of setting sitewide self-referencing canonicals, he ended up setting a sitewide canonical pointing to the homepage, resulting in an ~80% loss in website traffic before the problem was spotted and fixed 🙈”

Nick Eubanks, TTT co-founder, From the Future co-founder, and Head of Digital Asset Acquisition at Semrush
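
For anyone who hasn’t met this one in the wild, the difference between the intended setup and what shipped is a single href. A minimal sketch (example.com stands in for the client’s domain):

```html
<!-- Intended: each page declares itself as the canonical version -->
<link rel="canonical" href="https://example.com/blog/seo-tips/" />

<!-- What shipped: every page pointed at the homepage, telling Google
     the rest of the site was duplicate content it could drop -->
<link rel="canonical" href="https://example.com/" />
```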

(Want to learn more about essential SEO tools? Check out Nick’s Semrush review!)

“Hard to choose which is worse…

  1. Back in 2012, I had around a quarter of the SEO clients I was working with get demolished by the Google Penguin update and spent the next 6 months recovering all of them, or 
  2. My team at HubSpot accidentally deleted our CRM software landing page, which accounted for a LOT of our traffic and new signups (it ranked #1 for “free crm”). It took us over 24 hours to realize and get it back online (yikes).”

Matthew Howells-Barby, TTT co-founder and CMO of Decentral Games

“One fail that sticks out – I didn’t believe that comments helped. So, I rolled Site A into Site B without bringing over the comments on Site A’s articles. 

Weeks tick by, and traffic’s picking up but getting nowhere near Site A’s baseline. I dig into the GSC data, and the culprits are all these long-tail queries – phrases that aren’t even on the page. 

I check the archive.org snapshots of the Site A versions and sure enough – those long-tail phrases only existed in the comments 🤦”

Ian Howells, TTT co-founder and Head of SEO at The Grit Group

“I’ve never made a mistake, sorry. No fails, ever.”

John Mueller, Google Search Relations Team Lead

“Got a client to rank really high for the “Mayweather vs Pacquiao” fight back in 2015 (on a Saturday) without their permission. Thought they would be thrilled (high keyword difficulty and it was semi-relevant to their services)! They were not. The spike dwarfed their regular GA traffic and they fired us a few months later.

Annndddd my first Moz pull request deindexed the entire Moz Community Forum via the robots.txt file.”

Britney Muller, marketing and SEO consultant and Data Science 101 founder

“Once at Moz, a production deploy caused our blog – the biggest in the industry at the time – to canonicalize to our UGC content. It took us a week to discover and roll back the error, but the damage lasted for nearly a year as Google would not drop the canonicalization signals no matter what we tried. Ouch.”

Cyrus Shepard, Founder of Zyppy and former TTT Live Q&A guest

“Here’s a little story about a big Canadian grocery chain…

The brand had undertaken a website migration – including a brand new online grocery store – with the development center I worked at. The marketing team signed a massive deal with an important media conglomerate. The deal involved a few hundred thousand dollars and thousands of recipes meant to bolster their online visibility and SEO.

Deal was done. Syndication was agreed upon. Migration was underway.

And then, the head of SEO for the media group put the contract in peril by forcing us to no-index those recipes because they were ranking higher than their own content. He used scare tactics, claiming the deal was against Google’s guidelines.

I wasn’t about to let that slide, so I provided clear, official documentation confirming syndication was acceptable, albeit at your own risk. After a few back-and-forth exchanges handled by our project manager, he threw a fit and the grocery chain decided to play nice with the media group to preserve the relationship.

I took it hard, but the decision was out of my hands.

This is where the story takes an unusual but delightful turn. Six months later, I said “buh-bye” to that drama and quit my job. And guess what? The head of SEO took over my job. He never thought to ask who the SEO was for that account or who was in charge of the website migration, not even while interviewing for my old position.

Karma is a beautiful thing. He was now in charge of this grocery chain, with thousands of pieces of “dead stock” content sitting noindexed and very little budget left to ensure SEO success.

It gets better: the client’s team never forgot. The marketing director gave him a piece of her mind and let’s just say it wasn’t pretty. It was petty, it was glorious, it was on point. I wasn’t there to see him squirm but I know that it cemented his deep-seated dislike of me for years.”

Myriam Jessier, SEO consultant

(To learn more about Google’s content guidelines, you can check out Matthew’s post on YMYL.)

“Back in 2016 I worked as a developer for a small startup.

We built an interactive application that ran in the browser and involved a lot of cool gizmos: VR, 3D, interactive walkthroughs of real estate that also worked in your phone’s browser, server-side light pre-rendering, etc.

As some of you might expect at this point, SEO wasn’t off-the-shelf for such a site, but we did our best. Our server pre-rendered a fallback with an image and a textual description of the property that the client-side application would then render in brilliant, interactive 3D. The 3D itself was progressive, so we would show as much as possible as quickly as possible and then progressively enhance. 

We were proud that we’d thought of all that and had dotted all our i’s and crossed all our t’s. But there was one thing on the list that we didn’t tick the box for: Offline support.

Wouldn’t it be amazing if the last couple of properties you visited in your browser would also be available for another walkthrough when you’re offline? Yes, yes it would! So I went to work on that.

The way to do this is to leverage a service worker. A service worker is a piece of JavaScript code that gets triggered whenever a website tries to load something over the network. 

Imagine the service worker like a guard in a medieval village. A villager asks the guard: “May I cross the gate to fetch some potatoes from the field?” followed by the guard deciding how to deal with that request. The guard may say: “You may go outside the walls and harvest your potatoes”. Or they might say: “There are hostile knights out there, take some potatoes out of your pantry instead”. The service worker equivalent might be to check if there is a local copy of the requested resource or if it needs to be fetched from the network.

This would have been easy enough to build if it wasn’t for our API. The API worked only with POST requests, and such requests, by their very nature, cannot be cached. That means POST requests don’t go through the service worker and cannot be controlled for offline usage.

Now my developer curiosity led to a moment of developer creativity: what if I rewrite the frontend JavaScript to do GET requests to the API instead? I cannot change the server to handle GET requests, but I can absolutely use the service worker to take the GET requests, check if there’s a local copy in the cache and, if necessary, do a POST request to the API server, cache the response, and hand it back to the frontend JavaScript code. A few lines of code, a bit of testing, and a demo later, it was greenlit for release.

And so they lived happily ever after with their newly-released offline support and happy customer feedback…

Except we noticed that our intake of new users tanked. And those who still came to the platform were coming from ads or direct, but no longer via organic search.

What happened?

Well, service workers are one of the features that Google and other search engines don’t support. So, instead of the service worker taking the GET requests and transforming them into POST requests before sending them to the API server, search engine bots sent GET requests to the API server… which weren’t supported and were answered with HTTP 405 responses. 

So none of the content was available to search engines anymore and pages were falling out of the index.

Once I realized what had happened, thanks to our server logs, I rolled back the change and instead rewrote large parts of the frontend JavaScript to use POST requests and do their own caching without a service worker. Things came back to normal, but it took weeks and was a very painful reminder that technical SEO and testing should not be taken lightly.”

Martin Splitt, Developer Relations at Google
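
To make Martin’s guard metaphor concrete, here’s a rough sketch of the GET-to-POST pattern he describes (the /api/ path and cache name are our inventions, not his actual code):

```javascript
// sw.js – intercept GET requests to the API, serve from cache when
// possible, otherwise translate them into the POST the API expects.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (event.request.method !== 'GET' || !url.pathname.startsWith('/api/')) {
    return; // let everything else hit the network as usual
  }
  event.respondWith(
    caches.open('api-cache').then(async (cache) => {
      // The "guard" checks the pantry first...
      const cached = await cache.match(event.request);
      if (cached) return cached;
      // ...and otherwise fetches fresh potatoes: do the POST the API
      // actually speaks, cache the response, hand it back.
      const response = await fetch(url.pathname, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(Object.fromEntries(url.searchParams)),
      });
      await cache.put(event.request, response.clone());
      return response;
    })
  );
});
```

Note that nothing in the code itself is buggy. The failure mode lives entirely in the silent assumption that every client executes it: a crawler that doesn’t run service workers sends its GET straight to the POST-only API and gets the 405.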

“I once pushed a JSON file to production that had a missing curly bracket and broke the entire site.

At 4:30 pm.

On a Friday.”

JP Sherman, Principal Product Owner at RedHat
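
JSON doesn’t forgive a missing bracket, but it’s also trivial to lint before it ships. A tiny pre-deploy check that would catch it (a sketch with assumed file paths, not RedHat’s actual pipeline):

```javascript
// validate-json.js – fail the build, not the site.
// Usage: node validate-json.js config.json locales/en.json
const fs = require('fs');

for (const file of process.argv.slice(2)) {
  try {
    JSON.parse(fs.readFileSync(file, 'utf8')); // throws on a missing bracket
  } catch (err) {
    console.error(`Invalid JSON in ${file}: ${err.message}`);
    process.exit(1);
  }
}
console.log('All JSON files parsed cleanly.');
```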

“I do techie stuff for SEO – automation, etc. A while back, we put together a program to rank track a series of lat/long points within a region – say, every square mile within a city’s limits. So, for every city block, you’d request a cluster of keywords. My job was to get these lat/long points from within the city limits, send them off to get rank tracked, and then visualize the results.

We had a hefty request for a large southern US city. I generated the lat/long pairs (a lot of them) and sent the results off to get rank tracked. The resulting table was like 54,000 rows, I think? 

Anyway, I got the lat/long pairs screwed up. We rank-tracked all those (local intent) queries in Antarctica, across a large grid of square miles around the South Pole. If anyone needs information about the search habits of penguins or polar bears, let me know…”

Jess Peck, Machine Learning Engineer at Local SEO Guide and author of our SEO’s guide to Chrome DevTools course
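
For the curious, generating that grid looks something like the sketch below (coordinates are hypothetical, not the client’s data). The bug is painfully easy to make, because a lat/long pair is just two interchangeable-looking numbers:

```javascript
// Build a grid of lat/long points over a bounding box, one per square mile.
function gridPoints(latMin, latMax, lngMin, lngMax, stepMiles = 1) {
  const points = [];
  const latStep = stepMiles / 69; // ~69 miles per degree of latitude
  for (let lat = latMin; lat <= latMax; lat += latStep) {
    // Degrees of longitude shrink toward the poles, so scale by cos(lat)
    const lngStep = stepMiles / (69 * Math.cos((lat * Math.PI) / 180));
    for (let lng = lngMin; lng <= lngMax; lng += lngStep) {
      points.push({ lat, lng });
    }
  }
  return points;
}

// Correct order for a city around 33.7°N, 84.4°W:
const city = gridPoints(33.6, 33.9, -84.55, -84.25);
// Swap the pairs and your southern US city lands at roughly 84 degrees
// south, deep in Antarctica, which is how the penguins got rank tracked.
const oops = gridPoints(-84.55, -84.25, 33.6, 33.9);
```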

“Many years ago I was interviewing for my second in-house SEO specialist job. I tried to cover my back by asking about available resources and flexibility from a technical and content standpoint, since in my previous job a lack of these had kept me from doing all that I had planned. 

But this one seemed like the dream job, as they promised I would have a couple of developers focused solely on SEO, with the flexibility to execute technical changes. I was going to have access to editorial resources too.

The pay was good, so I ended up accepting the job offer. What I couldn’t foresee was that, despite all the technical and content resources at my disposal, there were so many constraints around what I could actually implement to keep the site competitive due to the company’s business model. 

After too many presentations with benchmarks, analysis, and forecasting – as well as pilot project requests – the company wouldn’t change the way content was published (or not published) due to how their business model worked. And they were unwilling to evolve it even though a direct competitor was already doing it while outgrowing them in revenue and organic search traffic. 

In the end, I had to give up and move on to a new job. Sadly, this was my second in-house position in which I wasn’t able to make the impact I had planned due to constraints (this time, business related). I consider this my biggest SEO fail. I should have known better at this point.

Coincidentally, just a few weeks after I had resigned to move to another job, they had to let go of a very high share of their employees as revenue declined – something that could be traced back to their business model. A couple of years later, the business ended up closing because of it. 

I got a big lesson from this experience: SEO is not only about aligning technical and content resources. It requires product and business alignment to work well too. 

After that experience, when doing discovery calls with potential clients, I don’t just validate technical and content resources. I always verify there’s alignment with how the business works and what it needs to do to maximize the chances of SEO success.”

Aleyda Solis, international SEO consultant, speaker, and author

“I’ve got the always-hilarious noindexed website, with the added fail: I told the potential client about it in our pitch meeting. Literally talked myself out of a job.

Also: I had a client who could only redeploy their entire site. If you edited a title tag, they would do a complete code deployment. They’re probably still out there somewhere, doing the anti-devops.

OK, and my favorite: I once accidentally added a zero to the number of Chrome instances in my crawler. I crashed the client’s website. They were… unamused.”

Ian Lurie, digital marketing consultant and TTT contributor

“I can’t think of anything 🤦🏼‍♀️ That’s probably the fail. Maybe they were so bad I wiped them from memory…”

Lizzi Sassman, Senior Technical Writer at Google

“Before going in-house, I worked at several agencies. One of the clients was under a lot of pressure. SEO traffic was declining, we hadn’t figured out why, and revenue was in the red. As you can imagine, the collaboration was very tense.

Before heading out for a long weekend, I sent a lengthy email to the client with a few deliverables. I made the mistake of CCing someone from another company with the same first name as one of my contacts. I immediately got a pretty angry email back and a call from my boss. 😅

Now I can laugh about it. But back then, I probably lost my body weight in sweat. Not a great way to start a long weekend!

All this goes to say that mistakes happen and you survive.”

Kevin Indig, strategic growth advisor and former TTT Community Advocate

“I warned a client that something that they were going to do was too spammy, and would risk a manual action. The rest of the team thought it would be ok, so I eventually backed down. It launched, and I was right – they got a manual action that was hard to remove, and ended up costing them millions. I should have fought harder.”

Cindy Krum, Founder of MobileMoxie and former TTT Live speaker

“My biggest ever SEO fail was giving an enthusiastic client too much encouragement during a site migration. This meant they thought it was fine to take matters into their own hands, and they imported over a hundred URLs from the help documents into the main blog. Due to the timing of other implementations, we couldn’t do a rollback, so we had to manually unpick all the duplicate but slightly differently labeled content… it took ages!”

Crystal Carter, Head of SEO Communications at Wix

“One of my biggest SEO fails wasn’t related to the website, but sending a full set of documents reporting site performance – including revenue – to people at a completely different organization. That was obviously a very fun conversation to have with both clients…”

Nick LeRoy, freelance SEO consultant and author of the #SEOForLunch newsletter

“In the days when Google’s ‘Mobile-First Indexing’ was first becoming a thing, we were managing a site migration. Everything looked fine technically, but a few weeks later the GSC errors started rolling in. Some pages were marked noindex, but they certainly didn’t appear that way! 

Turns out, a few had a noindex directive on them for mobile user-agents only due to a wonky plugin. Our QA at the time defaulted to desktop user-agents. So… yeah – don’t forget to review your QA list after major announcements!”

Tory Gray, CEO and Founder of The Gray Dot Company and former TTT Community Advocate and TTT Live Q&A guest
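
A cheap guardrail against this kind of gremlin is to fetch each key page as both a desktop and a mobile user-agent and diff the robots meta. A minimal sketch (Node 18+; the URL and user-agent strings are placeholders):

```javascript
// qa-robots.js – compare the robots meta served to desktop vs mobile.
const AGENTS = {
  desktop: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
  mobile: 'Mozilla/5.0 (Linux; Android 10; Pixel 4) AppleWebKit/537.36 Mobile',
};

async function robotsMeta(url, ua) {
  const res = await fetch(url, { headers: { 'User-Agent': ua } });
  const html = await res.text();
  // Crude match, but enough to spot a UA-conditional noindex
  const tag = html.match(/<meta[^>]+name=["']robots["'][^>]*>/i);
  return tag ? tag[0] : '(no robots meta)';
}

const url = 'https://example.com/key-page/'; // placeholder
for (const [device, ua] of Object.entries(AGENTS)) {
  robotsMeta(url, ua).then((tag) => console.log(`${device}: ${tag}`));
}
```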

“While trying to remove old content from a client’s site via FTP, I accidentally deleted the root folder (the entire website…). Thankfully, I had taken a backup right before, so it only took about four hours to restore.”

Sam Torres, Chief Digital Officer at The Gray Dot Company, former TTT Community Advocate and TTT Live Q&A guest, and author of our Javascript SEO 101 course 

“When I first started with SEO, it was across a wide range of different sites. One of my businesses was an events company and I wanted to rank in Google for a wide range of keywords associated with local events. I went on Fiverr, bought a bunch of links, and gave a Fiverr user access to write content on my site. Very quickly, my site started to get a lot of links from adult websites and was filled with ads for them. 

The biggest lesson I took from this project was that if you pay pennies for work, you will get pennies out of that work. In fact, you might actually lose a lot more money in the long run because you tried to find a shortcut to success. That was my biggest SEO fail.”

Ross Simmonds, CEO of Foundation and TTT contributor

(Don’t forget to check out Ian’s post on backlink analysis, so you can quickly identify problematic links.)

“Back in the day, I had the opportunity to build links for this cool scientific expedition experience to the Titanic. But get this – the ship was slowly getting chomped up by bacteria! I thought that was a wild story and tried to pitch it to a bunch of journalists. But my emails were super generic and I only got one response and zero links. Bummer…

Anyway, a few months later, this diving team decided to check out the Titanic and see how much it had decayed. Turns out, the media went gaga for it and the story was all over the place – NBC, BBC, you name it. But, unfortunately, I still wasn’t able to salvage any links for my client. Ugh.

So great story, sucky link-building strategy.”

Bibi Raven, freelance linkbuilder

“I once had a whole informational content plan for an e-com site set up to give Google a bit more “semantic meat” to chew on. I was not very explicit in my brief as I thought I could get away with saving myself some time. Big mistake. The site basically copied and pasted content together from around the web instead of writing something original (although in the age of AI writers is this really a problem? I kid…).”

Mordy Oberstein, Head of SEO Branding at Wix  

“Back in the early days I performed a huge link buy. I made an error in the spreadsheet’s anchor text field and accidentally put in “TBD” for all of them, then never changed them. Only found out after the links went live. The client was not a happy camper.”

Dave Ojeda, Schema markup auditor

“A dev team asked me what robots tag should be on pages and I told them “none”, meaning don’t put one on. They literally added <meta name="robots" content="none">. Which is the same as noindex, nofollow. Ooops.”

Dave Smart, technical SEO consultant and TTT Academy contributor 

“My biggest SEO fail ever: 12 years ago I took on a project to optimize a credit card processing website. The entire website was built on Flash… After 2 weeks of hitting my head against the wall, I refunded the client.” 

Blake Denman, President and Founder of RicketyRoo and TTT Community Advocate

“A client redeveloped their entire website and only included the SEO team (us) in two meetings.

The new site launched with most of a decade-plus of content and optimization gone, in favor of a more “dynamic” product category structure and a homepage (domain.com) that redirects to .com/en/.

The primary keyword that we got all the way up to #2, right behind Amazon, dropped to the ninth circle of hell. It’s currently sitting in an average position of 46.9. Many, many others have never been seen again.

Listening to your SEO team is a superpower.”

James Svoboda, Partner at WebRanking 

“My biggest SEO fail was building our tiny (four-person) company blog to 1M monthly pageviews with tons of irrelevant skyscraper content, auto-generating PDF versions of every article, collecting thousands of emails and absolutely drowning our solo sales guy in terrible, terrible leads without a hope of ever converting.

Learned the hard way that high traffic does not a business make.”

Ryan Law, VP of Content at Animalz

Want to learn practical ways to make money from your website? Check out our article on types of blogs that make money.

“I was recommended to a large international e-commerce group to help look after SEO across their major brands. They had partnered with a dev agency to manage a site migration, and this agency had fixed ideas about the way they wanted to build and migrate the site to a new platform. 

From analyzing the staging site, I immediately found that the dev agency had built the site in React, with 95% of the entire site client-side rendered. That meant it would take search engine crawlers nine times longer to crawl, parse, index, and rank their pages than their HTML competitors. 

I knew immediately that if they didn’t find a solution to server-side render at least the critical parts of the site, the client would nuke their traffic and organic revenue, which was their second largest channel. 

The dev agency ignored my warnings and fix recommendations and even became quite aggressive as I persisted in getting them to see reason knowing the risk to the client. Their argument was that they’d have to redo the build entirely, and my solution wasn’t practical to their bottom line. 

Unfortunately, the migration went ahead, as the client was backed into a corner with their legacy platform crumbling because of spam breaches they couldn’t patch. They gave the team as much time as they could, throttling the site to avoid serious security vulnerabilities. But that also meant real customers would get booted off the site for periods of 30 minutes up to nine hours, during which they couldn’t purchase.

By the time the dust had cleared after launch, the client had lost $12M in sales, traffic had diminished to 40% of its original size, and even conservative projections gave them no route to recovery, as they couldn’t get their new pages to index. They decided in the end it was best to rebuild and migrate the site again, losing further millions. The total loss is unknown, but safe to say, that’s a failure that really hurt. 

I often think back to that time and wonder what more I could have done. That being said, they’ve learned from that SEO fail and we’re back scoping a new migration with a new team and a new attitude towards the value of search for their brand. Initial tests have blown expectations out of the water and it’s only growing from there.

Site migrations are one of the riskiest things you can do as an SEO professional. Unfortunately, stories like these are common, and I’ve managed migrations with circumstances stacked against success. A failed company and a failed migration is death by a thousand paper cuts. The hardest part of the job often isn’t finding the problems – it’s convincing those whose job it is to fix them that the problems are important to get resolved.”

Nik Ranger, Senior Technical SEO Specialist at Dejan Marketing

“I’d been experimenting with my own website, trying to trigger a Knowledge Panel. During a podcast with Jason Barnard, I learned that my efforts were paying off and I was part of Google’s Knowledge Graph. I even had a kgmid! 

So, I decided to double down and enrich my Person schema more. WELL, in the copy-pasting process of different bits and bobs of code, I copied <script type="application/ld+json"> twice, which made my schema invalid. It stayed broken for six months before I figured it out, and now I’m no longer in Google’s Knowledge Graph.”

Lidia Infante, Senior SEO Manager at Sanity 
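
For reference, valid markup needs exactly one script wrapper per JSON-LD object; paste the opening tag twice and the parser gives up on the whole block. A minimal Person sketch (names and URLs are placeholders, not Lidia’s markup):

```html
<!-- One opening tag, one JSON object, one closing tag -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "url": "https://example.com",
  "sameAs": ["https://www.linkedin.com/in/janeexample"]
}
</script>
```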

“I’d say a common fail is sending myself into a panic for a short period over bad data. This happens when I have a filter applied to a tool – whether it’s Screaming Frog, GSC, or GA – and go to look something else up, forgetting about the filter that hasn’t cleared. Then I’m looking at incredibly skewed results, if any, and freak out! Thankfully that doesn’t have any external effects… well, until I actually share the data without realizing it, which has happened a few times too.”

Danielle Rohe, Sr. Data Program Manager, Content & SEO at Cox Automotive

“We were adding a lot of tech fixes to a site for a large rollout that included every wing of the team, and I had put in a ticket (including the code) to make sure the site canonicalized to https://website.com. Then the dev pushed that exact code to the entire website, thereby canonicalizing all of the pages to the homepage for about a week.

Plus, it was for a website that had slow rollouts and this was during a very busy time, so I spent all my time trying to negotiate with all the people on the project to push a fix.

I finally negotiated with the PMs to move things around so we could add it correctly, but still had a lot of cleaning up to do.”

Jess Joyce, SEO consultant and former TTT Live Q&A guest

“This wasn’t directly my fault, but I had to deal with the aftermath of it, and it definitely taught me to call things out if I thought the process wasn’t ideal: 

I was working at an agency where all of the Google Business Profile and Google Ad Accounts we had management access to were under the same Gmail account. Somehow Google created duplicate GBP listings for a bunch of the AdWords accounts. 

This caused ALL the GBP listings in the account to get suspended. That resulted in a LOT of explaining to clients and outreach to a few friends who were GBP product experts to get things cleaned up. NOT fun!”

Niki Mosier, Sr. Director of Organic Growth & Program Operations at AgentSync and former TTT Community Advocate

“This was many years ago (unreliable narrator), but we were moving a site to a dynamic rendering solution.

Passed all the QA crawls, everything looked great, all the pages rendering correctly.

Unfortunately, it no longer passed when we went live and the site came under actual ‘load’ from search engines’ spiders.

One of the services providing content to build the pages would fall down when Googlebot requested enough uncached pages fast enough. Despite the failure, the prerender solution would ‘gracefully’ still build, serve and cache an essentially empty page with a 200.

Google starts canonicalising every PLP to one PLP.

As the cache populates, Googlebot increases crawl speed, accelerating the death spiral.

I crawl less politely during testing now.”

Oliver H.G. Mason, technical SEO consultant
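
The guardrail Oliver’s prerenderer was missing can be sketched in a few lines (the byte threshold and handler shape are assumptions, not his actual stack): when the upstream content service falls over, answer with a 503 so crawlers retry later instead of indexing and canonicalising an empty shell.

```javascript
// In the prerender response path: never cache or 200 a suspiciously
// empty page.
const MIN_BYTES = 5000; // tune to the smallest legitimate page

function respond(res, html, upstreamOk) {
  if (!upstreamOk || html.length < MIN_BYTES) {
    res.statusCode = 503; // "try again later", not "this page is empty"
    res.setHeader('Retry-After', '120');
    res.end();
    return;
  }
  res.statusCode = 200;
  res.end(html); // safe to serve and to cache
}
```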

“Before I worked at Sterling Sky, I did three hours of link building for a client using their competitor’s URL! 

Back when link building was just dumping links on any directory that would take them (no prior backlink audit needed), we had a client with a .net domain. Their competitor had the .com, and guess which one I used? 

I will NEVER forget this! It still hurts my stomach. Not only did I NOT do the client’s marketing, but we gave the competitor a leg up in a very competitive market! Of course, we fixed it in the end, but I was still so embarrassed!”

Erin Jones, Local SEO Analyst and Account Manager at Sterling Sky

“There was a client who removed all their text content and website hierarchy (hiding it behind JavaScript) because they’d been sold this awesome new brilliant-for-conversions website that was going to make them more millions than ever.

I warned them organic traffic would bomb and they should test it UX-wise, because it didn’t look great for conversions either. But they’d spent too much (not involving SEOs soon enough) so launched the site.

Traffic and revenue absolutely bombed, but I think it was around a year before they reverted.”

Alex Harford, Technical SEO Consultant at Salience
