History is repeating itself: The return of Google penalties in the age of AI
Over the past year or so, I’ve seen a significant increase in the number of businesses affected by Google algorithmic and manual actions.
In most cases, these actions are the result of activities carried out to try to increase performance – either directly in search results, or indirectly, by influencing AI assistants and LLM-generated answers. In this article, I’ll attempt to explain why we’re seeing this surge, and to show that it is really just history repeating itself.
Google has waged a long-running war on spam. What we’re experiencing now follows a relatively calm period – much calmer than the old days of named core algorithm changes such as Panda, which focused on content quality, and Penguin, which targeted manipulation of link graphs. Those individually named algorithms are long gone, replaced by broader core updates that bundle many things together: improvements to the overall algorithm, but also protection of it – weeding out the things Google doesn’t want there. Manual actions often follow these core updates, because that’s when Google has run its new data sets and identified cases severe enough to warrant a manual action alongside the algorithmic change.
Those actions can operate at a page-by-page level, or they can be site-wide for the most manipulative of activities. The key driver of the current surge is, predominantly, the growth of AI-generated content – and specifically, the production of that content at scale.
I’ve been working in Google algorithmic and manual action analysis for many years, going back to the Panda and Penguin era – a period that resulted in actions against hundreds of brands and domains of all sizes, from major international businesses down to smaller affiliates. So, while AI might be new, the reason behind the enforcement cycle isn’t.
Understanding why Google acts
Before we talk about manual actions and spam specifically, it’s important to zoom out and understand why Google acts this way in the first place and why it has quality guidelines at all.
At its core, Google is not a public utility. It is a commercial platform, and its entire business is built on user retention and market share. Users generate demand, and that demand allows Google to sell advertising, which is where the vast majority of its revenue comes from. So it’s critically important that the environment where that advertising is sold – whether in traditional search or, increasingly, within AI – is one that users can trust. If users stop trusting the results (if the content isn’t credible, reliable, or correctly authored), they leave and go elsewhere. And if they do, there are fewer people to show adverts to, which means less advertising revenue. It goes full circle.
We’ve seen this play out clearly across platforms. A really good example is what happened with X following Elon Musk’s takeover of Twitter. When trust drops, people walk. And ultimately that means fewer searches, weaker advertising inventory, and less revenue.
So spam penalties and manual actions are not moral judgements. The number of people I’ve spoken to over the years who feel like Google is personally attacking them (often because they can see a competitor doing outrageous things and apparently getting away with it year after year) speaks to how widely this is misunderstood. Businesses make a lot of money from organic search, especially in a world where cost-per-click advertising is getting more expensive and more opaque. But the reality is that Google does not act simply because it wants to cause pain. It acts because manipulation exists, and when that manipulation reaches a scale that visibly degrades user trust, enforcement becomes necessary. Enforcement, like almost everything in life, lags behind the market. But it comes.
The technology changes, the pattern doesn’t
There are good parallels here with other areas, such as e-bikes and novel drugs: in each case, there’s typically a significant gap between when a technology or tactic is invented and when enforcement catches up. It only becomes urgent when it starts to scale and becomes more of a threat. That threat threshold is what triggers action.
If you’d told people ten years ago that AI would be writing pages of content specifically to rank in search, they wouldn’t have believed you. But producing content in ways designed to game the system isn’t new at all. In the old days, you didn’t even need to produce unique content, so people would just duplicate the same content again and again, and it would rank and generate revenue from traffic. Then came content spinning with tools like Best Spinner, which created just enough variation to evade detection. Today’s technology allows far more sophisticated things to happen, but the pattern is the same. What’s changed is the cost and the scale.
Looking back at Google’s spam cycles: pre-Panda, we had content farms and thin affiliate websites. With Panda, it was low-quality and duplicate content at scale. With Penguin, it was link manipulation and network tactics. Post-Penguin, there was a lull (and far fewer mass penalties). People learned the lesson, changed their tactics, and adapted to Google’s quality signals. And today, that same cycle has accelerated again, significantly, with AI.
Google doesn’t kill these tactics immediately. They aren’t always the highest priority, and the cost of addressing them at scale is significant. But when enforcement comes, it comes down like a tonne of bricks. That can mean anything from your review schema being removed from search, to individual pages being de-indexed, to weighting being applied against specific phrases and keywords or, at the most severe end, your entire site being removed from the index altogether.
Why AI content is under scrutiny
Google has been clear that it does not penalise content simply because it was generated by AI. The issue is manipulation – content that adds no value, produced purely to game the algorithm. If you’re just generating content at scale without adding anything of substance, it costs Google money to crawl and index all of those pages. And if everyone is producing low-quality content, the whole internet starts to sink. These penalties exist to stop the gamification of the algorithm.
The failure patterns we’re seeing now include topical maps generated without real expertise, content produced to satisfy algorithms rather than humans, and internal linking structures built for search engines rather than users – modern equivalents of PageRank sculpting. All of these things worked until they didn’t.
What’s particularly significant this time is that the cost of content production (which was always a meaningful barrier) has effectively collapsed. Quality content has always been expensive. Meaningful change without human intervention has always been expensive. AI changes that entirely, and where that gap opens up, the abuse grows. People are using AI to produce content right now without looking at the spikes in the data, and they’re exposing themselves to ever-greater risk. It’s also worth remembering that scale doesn’t just amplify the risk, it amplifies the identifying data points. The more you produce, the easier it becomes to detect the pattern.
There’s also a broader dimension here beyond traditional search. LLMs often take inspiration not just from historical training data but from current, live web content to retrieve up-to-date information. The quality and credibility of your web presence matters not only for how you rank in Google, but increasingly for how you appear (or don’t appear) within AI-generated responses.
Algorithmic actions vs. manual actions
Algorithmic actions have always been harder to diagnose. They’re often mislabelled as general core update volatility, when in reality something more specific is happening. Manual actions are more explicit. They follow algorithmic updates because that’s typically when Google has run its new data sets and identified cases where the pattern is severe enough to warrant human intervention and a formal action.
In the old days, you could go into Google Webmaster Tools – and particularly Bing Webmaster Tools, which was remarkably transparent – speak to people directly, and find out whether an action had been applied against you. That level of visibility is largely gone. But the actions themselves are very much alive, and we’re seeing a significant increase in them.
What businesses need to understand
In SEO, everyone wants the path of least resistance. Everyone wants to do things as cheaply and quickly as possible to see the maximum return. But doing those things always carries risk, and the important thing is to be fully conscious of that risk before taking it. Short-term gains can often mean long-term losses. What goes up can come down.
There is a lot of danger right now in businesses using AI as the solution rather than as a tool. You still need to understand the context, know why you’re using it, and be clear-eyed about the risks. Manipulation will always be the trigger, and scale always magnifies both the risk and the data points that make it identifiable.
My mindset has always been about doing things the right way. Not trying to manipulate anything, but actually working to be the best and winning on that basis for long-term growth. Chasing algorithms that work today but not tomorrow is not a strategy. It’s a gamble.
Google is not going away. Whatever the rise of AI and LLMs means for how people discover information, Google remains enormously important – and it has decades of experience in dealing with spam and curating quality at scale. Beyond search, it controls Android, Maps, and a vast ecosystem of tools through which users are constantly generating data. Whoever owns the data is responsible for keeping it clean. Low-quality data isn’t worth much to anyone.
Closing thoughts
As we move further into 2026 and the rise of AI continues, it’s really important to focus on the “why”, not on gaming the system. C-suite and board members need to be aware of these dynamics and bake them into their SEO strategy. Google’s quality guidelines exist for a reason, and enforcement cycles exist for a reason. The businesses that understand this and build accordingly are the ones that achieve durable, long-term visibility in search.
This is not new. It’s history repeating itself. The AI is new. The reason it’s happening isn’t.
If you’re unsure whether your current SEO strategy is sustainable, now is the time to review it. Get in touch to book a strategic SEO audit and understand your risk before Google does.