Is Google Favoring AI Content?

Last week, Google announced a change to its search engine algorithm. Dubbed the “Helpful Content System” update, the change, according to Google, aims to lower the ranking of content that offers little value to readers and to put more emphasis on “people-first” content rather than content written primarily to rank well in search engines.

However, many webmasters say that is not what the update has done. In an article by Hugh Langley at Business Insider, webmasters report that their human-created content is being sharply demoted, with “AI-generated crap” ranking well above it.

One travel blogger reported seeing 80% of their traffic disappear in just 48 hours and said that “very obviously AI written” content was now outranking them.

In the threads criticizing the changes, John Mueller, a search advocate at Google, unintentionally highlighted the difficult situation Google is in. Though he made it clear that AI-generated content is fine if it is useful to readers, he also chided at least one upset webmaster for relying heavily on ChatGPT.

This challenging position was highlighted in a recent post on X (formerly Twitter) by Barry Schwartz of Search Engine Roundtable. There, he showed that Google had removed the words “written by people” from its definition of “helpful content”, opening the door for AI-generated content to rank well if Google feels it is useful.

However, this shouldn’t come as a surprise to anyone. As we discussed back in May, Google’s position on AI-generated content has been softening. Where once it seemed to penalize generated content, the company later said that no such penalty would be applied.

This was likely spurred, in part, by the introduction of its own generative AI system, Bard, in February.

However, even if Google did seek to penalize AI-generated content, there’s no reason to believe that it could actually detect it. As of right now, there is no reliable way to detect AI-generated content, and even Google’s algorithms likely can’t spot the difference consistently.

All this raises the question: Is Google favoring AI? The answer is that it probably doesn’t matter.

Google’s First Battle with Generated Content

While there’s not much doubt that Google and other search engines are being flooded with low-quality AI-generated content, this isn’t the first time that Google has faced this problem.

As we discussed in July, article spinning enabled spammers and content farms to generate large amounts of “original” content very quickly. 
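For readers unfamiliar with the technique, spinning tools typically swapped words and phrases for synonyms, often driven by “spintax” templates like {fast|quick|rapid}, to churn out many superficially distinct copies of one article. The Python sketch below is purely illustrative, with a made-up synonym table and function name, but it shows how mechanical the process was:

```python
import random

# Toy synonym table. Real spinning tools used far larger thesauri and
# spintax templates such as {fast|quick|rapid}.
SYNONYMS = {
    "quick": ["fast", "rapid", "speedy"],
    "article": ["post", "write-up", "piece"],
    "create": ["produce", "generate", "craft"],
    "readers": ["visitors", "audiences", "users"],
}

def spin(text: str) -> str:
    """Return a crude 'spun' variant by swapping words for random synonyms."""
    output = []
    for word in text.split():
        key = word.lower().strip(".,")
        if key in SYNONYMS:
            replacement = random.choice(SYNONYMS[key])
            if word[-1] in ".,":  # keep trailing punctuation from the original token
                replacement += word[-1]
            output.append(replacement)
        else:
            output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    original = "A quick article can create value for readers."
    for _ in range(3):
        print(spin(original))  # each run yields a superficially "new" sentence
```

Run a loop like this over a handful of source articles and a spammer could flood a site with thousands of pages in minutes, which is exactly the economics that made the tactic attractive.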

This approach remained popular among spammers from roughly 2005 to 2011, when Google released its Panda update (also known as the Farmer update), which effectively demoted spun websites along with other low-quality content farms.

Though the technology never fully went away, its popularity waned drastically after that update, with many of the services rebranding as “automatic paraphrasing tools” aimed at students hoping to dodge plagiarism detection.

While there are obvious similarities between spinning and generative AI, there are also a lot of differences. 

The biggest is that Google was largely able to detect spun content. By analyzing the text and looking at other on-site signals, it could reliably identify this kind of spam. With AI, there’s no such guarantee.

However, even if Google could detect such content, they probably wouldn’t want to penalize it. They’ve invested heavily in their own generative AI, Bard, and they aren’t likely to create search engine guidelines that disincentivize the use of it.

This means that AI content is going to make its way into Google, and the company isn’t likely to do much about it unless it begins to significantly impact their bottom line.

But even if it did hurt their bottom line, it’s unclear what they could do about it. Their systems for determining content quality are clearly imperfect, and there’s no reliable way to detect AI writing.

In short, Google has no incentive to penalize AI writing and likely no way to do so even if they wanted to.

Google’s Precarious Position

As pointed out by Langley in his article, this puts Google in a difficult position.

Most of their identity and revenue come from providing high-quality search results. If those results begin to decline in quality, Google may start to lose market share to competitors.

Though, obviously, Google is in no immediate danger, the existential threat of AI-generated content reducing the quality of Google’s search results is very real. 

However, Google has to balance that real fear with their own investments in AI. To make matters worse, the fear of reduced search quality is a long-term threat, while the race to claim territory in the AI space is very much an immediate challenge.

Ethical questions aside, it makes perfect sense for Google to focus on encouraging the use of AI, specifically its AI. Its investment in Bard clearly indicates that it sees a lot of promise in this space, though it’s a promise the company will never realize if it doesn’t at least make a strong showing in these early days.

But this puts Google in an awkward position when it comes to rewritten content. Google makes it clear that rewritten content that offers little to no original value is not considered “helpful content” and will not rank well.

However, all AI systems, including Bard, are trained on publicly available internet content. When writing, they are essentially rewriting material that already exists. In short, AI is doing the same thing that Google tells human authors not to do.

This leaves Google defending a policy that is internally inconsistent and that seemingly ignores the realities of how AI content is generated. It doesn’t make much sense, but this is what happens when a search engine also owns a large AI system: it finds itself trying to serve two masters that, ultimately, need very different and often incompatible things.

Bottom Line

Pretty much anyone who uses search engines regularly can testify that there has been a sharp increase in the amount of clearly AI-generated content.

This has included content found in regular search results, news results and even image search results. 

On the latter point, Google recently made headlines when an AI-generated rendition of “Tank Man”, the iconic photograph of a man blocking a line of tanks during the 1989 Tiananmen Square Massacre, ranked prominently in its image search results.

In short, AI-generated content is definitely in Google’s search results and, for better or worse, it is doing well. That means it is taking traffic (and thus revenue) away from human authors.

In the end, it doesn’t really matter if Google is actively favoring AI content. AI content, by the very nature of being automatically generated, can beat human-generated content through sheer quantity. Just as with the spinners of the late 2000s, the content doesn’t have to be better (or even good); there just has to be enough of it that some of it ranks well.

As such, if Google doesn’t take a hard stand on AI-generated content, it may as well be favoring it. Even if Google thinks the playing field is level, the nature of AI means that humans are working at an extreme disadvantage.

However, not only is Google likely unable to penalize such content, it also has a strong business incentive not to do so. Because of that, anyone who was holding out hope that Google would help stop the march of AI should probably start looking elsewhere.
