The Divide in Journalism Over AI

Yesterday, I took a look at the current, tragic state of Google News. The once-invaluable tool for finding reliable reporting has become overrun with spam and other low-quality sources, and generative AI has escalated the problem to a new level.

However, even among mainstream news organizations, there is a growing divide over the use (and misuse) of generative AI systems. News organizations, often under intense financial pressure, are trying to decide what role, if any, AI should play in their work.

To that end, news organizations have adopted a wide variety of approaches to AI, ranging from hard-line stances that broadly ban its use to efforts that integrate AI into the publishing process and include licensing deals with AI companies.

According to an article in the Press Gazette by Bron Maher, The Telegraph is an example of the former camp: the company has barred its reporters from using AI, saying that any who do will be treated as if they had plagiarized.

Citing issues with reader trust and the inaccuracy of AI reporting, the company is both barring the use of AI in reporting and discouraging its reporters from putting Telegraph content into AI systems. The company did leave open the possibility of using AI for non-public-facing work.

On the other side are companies like CNet and Gizmodo, both of which have dabbled in AI-generated articles, often with disastrous results.

So, as we approach the one-year anniversary of the public launch of ChatGPT, it’s worth taking stock of how institutions of journalism are approaching AI, at least for now.

The Three Battlefronts

For news publications, the AI question isn’t one singular question; there are actually three separate issues that news organizations are weighing.

  1. AI Reporting: The most obvious issue is whether AI should be used to produce reporting for readers or viewers. If so, how should that integration work? How should AI-generated works be edited? What limitations should be placed on the use of AI in works for the public?
  2. AI Licensing: News organizations have large libraries of human-created content that AI companies would love to use to train their systems. Though the legal question of whether training AI on copyright-protected works without permission is allowed is still being settled in the courts, several news organizations are actively licensing their content while others are attempting to deny AI systems access to it.
  3. Other Uses: Finally, there are other uses of AI, some public-facing and some not, that news organizations have to consider. These include everything from AI summaries and AI-powered chatbots to the use of AI systems in back-office roles.

With that in mind, here’s a brief look at some of the ways news organizations are (and are not) using AI in those three spaces.

Note: Big thanks to another Press Gazette article, this one by Charlotte Tobitt, whose list of policies was a major help in writing this article. 

AI Reporting 

AI reporting is easily the most controversial and divisive issue facing news organizations.

To be clear, AI reporting has been going on for several years. In 2017, the Washington Post acknowledged its use of Heliograf, an AI reporter that covered hundreds of basic, formulaic stories that were generally not worth assigning to a human.

However, the public launch of ChatGPT in November 2022 cast the question in a new light. News organizations, large and small, are slowly reaching decisions about AI and publishing statements on the issue.

For example, the Associated Press bars the use of AI to create publishable content and treats AI output as an “unvetted source”. Reuters, by contrast, takes a vaguer approach, not barring the use of AI but saying that any use must comply with its accountability measures and prioritize fairness, security and privacy.

The Guardian, much like The Telegraph, also bars the use of AI, saying “there is no room for unreliability in our journalism”. The BBC, on the other hand, doesn’t draw a hard line, saying its use of AI must prioritize talent and creativity and be in the best interest of the public while maintaining transparency.

In tech circles, Wired bucks what Gizmodo and CNet have done in the past and makes it clear that it doesn’t publish stories written or edited by AI.

In general, most publications are not as bullish on AI as Gizmodo and CNet are (or at least were). The main question is whether a publication bans AI reporting outright or keeps the door open to some AI use under heavy human oversight.

AI Licensing

The divide over AI licensing is much more lopsided. Though some larger firms, such as the Associated Press and Getty Images, have struck deals to license their content to AI companies or launched AI offerings of their own, it’s far more common for news organizations to block AI companies’ access to their content.

As of August, dozens of major media companies, including The New York Times, CNN, the Washington Post and ABC News, had blocked OpenAI’s GPTBot, the crawler that collects web content for its AI models.

The consensus at the time appeared to be that, if AI companies want to use news media content for training, they should pay for it. Whether that permission is actually required, however, is among the issues before the courts right now in various cases.

Depending on the outcomes of those cases, the next questions could be “Which companies are licensing their work and to whom?” or “Which companies are taking technological measures to block AI bots?”
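
As an aside, the “technological measures” in question are usually nothing more exotic than robots.txt directives: OpenAI publishes the user-agent string for its GPTBot crawler, and publishers simply disallow it. Here’s a minimal Python sketch, using only the standard library, that checks whether a site’s robots.txt bars GPTBot; the domains listed are illustrative examples, not a verified survey.

```python
# Check whether a site's robots.txt asks OpenAI's GPTBot crawler to
# stay out. A minimal sketch using only the Python standard library;
# the domains below are illustrative, not a verified survey.
from urllib.robotparser import RobotFileParser

# A blocking robots.txt entry looks like:
#   User-agent: GPTBot
#   Disallow: /

def blocks_gptbot(domain: str) -> bool:
    """Return True if the site's robots.txt bars GPTBot from the root."""
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    parser.read()  # fetches and parses the file over the network
    return not parser.can_fetch("GPTBot", f"https://{domain}/")

if __name__ == "__main__":
    for domain in ("nytimes.com", "cnn.com", "washingtonpost.com"):
        verdict = "blocks" if blocks_gptbot(domain) else "allows"
        print(f"{domain}: {verdict} GPTBot")
```

Because robots.txt is purely advisory, a check like this only shows what a publisher has asked crawlers to do, not what actually gets collected.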

Other Uses

As The Telegraph’s announcement pointed out, AI isn’t just an issue of generating reporting or of reporting being used to train AI systems. There are other potential uses of AI that must be considered.

Many of those uses are not public-facing: automating back-office processes, generating corporate or investor information, or streamlining content-production workflows without generating any of the content itself.

However, some of the uses are public-facing, even if they aren’t directly related to reporting. For example, Reach has launched an AI assistant that consolidates and summarizes news across its 80 different brands. Likewise, Tom’s Hardware launched an AI chatbot to help answer questions about its articles.

The biggest example is likely Reuters using AI to automate transcription and the identification of public figures in its video content, making it easier for clients to find what they are seeking.

Though news organizations are generally reluctant to use AI in their reporting, they are seemingly more willing to explore using AI to help users parse the information they put out, whether through content recommendations, summarization or automated tagging of their work.
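
To make that last use concrete: automated tagging, at its simplest, is just keyword extraction. Below is a toy, self-contained Python sketch; a real newsroom system would use an LLM or a trained classifier, and the suggest_tags helper, stopword list and length cutoff here are arbitrary assumptions for illustration.

```python
# Toy illustration of automated article tagging via keyword extraction.
# A real newsroom system would use an LLM or a trained classifier; the
# stopword list and length cutoff here are arbitrary assumptions.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
             "that", "for", "on", "with", "as", "while", "other",
             "from", "their"}

def suggest_tags(article_text: str, max_tags: int = 5) -> list[str]:
    """Suggest tags from the most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", article_text.lower())
    candidates = (w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in Counter(candidates).most_common(max_tags)]

print(suggest_tags(
    "News organizations are weighing AI licensing deals while other "
    "organizations block AI crawlers from their content outright."
))
# -> ['organizations', 'news', 'weighing', 'licensing', 'deals']
```

Crude as it is, this kind of behind-the-scenes assistance is far less contentious than AI-written copy, which is likely why publishers are more comfortable experimenting with it.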

Bottom Line

Though AI is likely to impact virtually every industry, few face questions as complicated or stakes as high as journalism does.

For the most part, news organizations seem to understand this and, while their approaches differ, those approaches are generally measured and considered. Outside a few reckless (and mostly disastrous) experiments with AI content, the news world seems to be treating the issue with the gravity it deserves.

That said, news organizations are definitely in a challenging position when it comes to AI.

On one hand, AI represents an existential threat to human-driven journalism. As we’ve seen with Google News, AI-generated content is capable of driving out human-written content through sheer quantity. The ethics and accuracy concerns keeping news organizations away from AI haven’t stopped the spammers and won’t.

On the other hand, AI could be a solution to those financial challenges, allowing news organizations to produce more content at a lower cost. In that regard, it’s very similar to what the Washington Post announced with Heliograf in 2017, which, at the time, wasn’t a source of major controversy.

In the end, AI is a divisive and explosive issue right now. Journalism, which depends heavily on public trust, needs to approach this topic carefully. 

But, while that caution seems to be the norm, it’s also clear that battle lines are already being drawn and that we may be headed for something of a civil war over the use of AI in reporting.
