Sports Illustrated: AI-Generated Articles, AI-Generated Authors

An article by Maggie Harrison at Futurism takes a look at the use of AI-generated content and AI-generated reporters at Sports Illustrated.

According to Harrison, Futurism received a tip from insiders at Sports Illustrated that the site was using a slew of fake journalists as bylines for AI-generated content.

When Futurism investigated, the tip proved entirely true: the site was running clearly AI-generated articles under fabricated bylines such as “Drew Ortiz” and “Sara Tanaka”, complete with faked bios and headshots taken from a service that sells AI-generated portraits.

To make matters worse, the quality of the content was extremely low, including lines such as, “(Volleyball) can be a little tricky to get into, especially without an actual ball to practice with.” The bios themselves were also clearly AI-generated, giving the fake reporters generic backstories.

How many such reporters were created is unclear. Harrison said Futurism’s attempt to reach the publication for comment was met with silence, but shortly after the inquiry, the profiles were removed.

The issue wasn’t limited to Sports Illustrated. The publication is owned by The Arena Group, which also owns The Street, among other sites. The Street likewise featured AI-generated articles under fake bylines with AI-created bios.

The Arena Group also owns publications such as Men’s Journal, Parade and The Hockey News. 

This also isn’t the first time Futurism has called out large publications for the misuse of AI. They previously called out CNet, which I covered here, as well as Gizmodo, A.V. Club, Buzzfeed and more.

However, of all those, the Sports Illustrated story is easily the most egregious. Not only did the site fail to disclose its use of AI, it deliberately tried to hide that use by creating fake reporters to carry the bylines.

It’s a new low for AI in journalism, and one that represents a major fall from grace for the once-respected publication.

AI in Journalism: The Good, the Bad and the Ugly

Earlier this month, we took a look at the divide in journalism when it comes to AI. Among the companies that have stated their policies on AI reporting, there are two camps: those that ban AI outright and those that severely limit its use.

Those in the latter camp typically restrict AI by requiring that it be held to the same standards as human reporting, including editing, fact checking and other quality controls.

However, that divide only covers publications that value journalistic integrity. So, while legitimate publications may be split into two camps, there’s a third camp that has made clear it does not care about the ethics of journalism and will openly flout those rules, and sacrifice quality, to use AI today.

It would be easy to write off those in that camp as simply being spammers, people using AI to create junk content for quick clicks. However, as we’ve seen, some of the biggest, and formerly most trusted, names in journalism have chosen this path. 

That camp, as we’ve seen, includes Sports Illustrated, The Street, CNet, Gizmodo and more.

What these names have in common is that they are owned by large media companies that push the use of AI upon them. Gizmodo and A.V. Club are both owned by G/O Media, CNet and Bankrate are both owned by Red Ventures Brands, and Sports Illustrated and The Street are both owned by The Arena Group.

Pretty much every time a previously respected publication has been caught using AI unethically, there’s been a “media company” or “brand company” behind them, pressuring them to do so.

That’s not to say that all owners of news sites and organizations are bad. However, it’s clear that a small group of them have decided that using AI instead of human reporters is worth setting aside journalistic integrity, concerns over quality and everything that the brands they own once stood for.

That is the ugly truth of AI.

Using AI with Integrity

What makes all of these stories particularly bad isn’t just that the sites involved used AI. It’s that they used AI poorly and unethically.

The first mistake is that, in every case highlighted, the use of AI resulted in poor-quality work. This includes factual errors, plagiarism, low-quality writing and, in at least one case, an inability to count to five.

AI, at the very least, has to be held to the same standard of writing as any human reporter. This includes fact checking, plagiarism checking, editing and proofreading. The problem is that, since AI produces lower-quality content and makes more errors, it requires extra resources on the editing side to make its output even passable.

That, however, is precisely what these companies want to avoid: spending resources. So it’s no shock that they generate AI content and publish it with few, if any, changes.

But even though the use of AI is often obvious to human readers, these outlets also rarely disclose it, instead attributing the writing to “staff reporters”, as with CNet, or to fake reporters entirely, as with Sports Illustrated.

Both are grotesque violations of journalism ethics and standards. In either case, the publication deliberately misrepresents to the reader how the article was created, making it appear to be human-written when it is not.

However, the Sports Illustrated approach takes things a step further, using fake images and fake bios to make the content appear even more human. It’s not just one direct lie to the reader, but several.

If companies are going to use AI, they should disclose it clearly, plainly and obviously. Would that cause many readers to dismiss the work out of hand? Yes. But that is the reader’s decision to make, and tricking them into believing the content was written by a human denies them that choice.

However, it’s become apparent that, for several companies, that’s a perfectly acceptable sacrifice to make. That makes it a shame that they happen to own many publications that were once widely respected and trusted.

Bottom Line

Realistically, there’s likely no ethical way to use AI in journalism, especially not right now, given the current state of AI writing. However, the companies and publications that are going to do it anyway owe it to their readers to at least disclose the use of AI and to hold AI writing to the same standards as human authors.

But doing so defeats the purpose of using AI in the first place. AI is meant to generate content on the cheap, content that encourages clicks and shares. Disclosing AI and rigorously editing and checking its output makes it no easier or cheaper than just using human reporters.

For AI to be used profitably in a journalism environment, it more or less has to be used unethically.

In the end, journalism is supposed to be about a search for truth. The rigorous reporting, fact checking and editing that journalists provide is supposed to get us closer to the truth.

That is completely undermined when once-trusted publications not only use AI systems, but then directly and deliberately lie to readers about doing so.

There can be no truth in journalism when there’s no truth in how the content was created. Authorship is the foundation of accurate and ethical journalism. 

Without it, everything else falls apart. 
