The London School of Economics (LSE) recently published a survey of AI’s use by 105 news and media organisations across 46 countries.

The headlines

75% of news organisations use AI in the first step in the news cycle, the gathering of ‘raw’ news. AI is most commonly used by journalists to speed up the process, through AI-powered tools for speech-to-text transcription and automated translation such as Colibri.ai, SpeechText.ai, Otter.ai and Whisper.
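To illustrate how low the barrier to entry has become: Whisper is open source and can be run locally. Below is a minimal transcription sketch, assuming the openai-whisper package and a hypothetical audio file named interview.mp3.

```python
# Minimal sketch: transcribing and translating an interview recording with
# OpenAI's open-source Whisper model (pip install openai-whisper).
# "interview.mp3" is a hypothetical local file.
import whisper

model = whisper.load_model("base")          # a small, fast model for drafts
result = model.transcribe("interview.mp3")  # language is auto-detected
print(result["text"])                       # the full transcript

# The same model can translate non-English speech into English in one pass:
translated = model.transcribe("interview.mp3", task="translate")
print(translated["text"])
```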

A more interesting development is uncovering stories through the use of web crawlers to identify trending topics, detect news of interest, and gather data from various sources. As one respondent said:

[w]e use speech-to-text algorithms to monitor public discourse, mainly on the bigger broadcasters from the country (radio, TV, streaming). We also monitor viral social media posts to identify possible disinformation circulating on these platforms.
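A minimal sketch of this kind of automated monitoring - crawling sources and surfacing recurring topics - assuming the feedparser library and hypothetical placeholder feed URLs:

```python
# Minimal sketch of trend detection across news sources, assuming the
# feedparser library (pip install feedparser). The feed URLs are
# hypothetical placeholders for broadcasters' RSS feeds.
from collections import Counter

import feedparser

FEEDS = [
    "https://example-broadcaster.com/rss",   # hypothetical
    "https://example-radio.com/headlines",   # hypothetical
]

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "for", "on"}

def trending_terms(feeds, top_n=10):
    """Count recurring headline terms as a crude signal of trending topics."""
    counts = Counter()
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            for word in entry.title.lower().split():
                if word.isalpha() and word not in STOPWORDS:
                    counts[word] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for term, count in trending_terms(FEEDS):
        print(f"{term}: {count} mentions")
```

A production system would add spike detection against a historical baseline, but the structure - crawl, normalise, count, rank - is the same.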

90% of news organisations use AI in the next step in the news cycle, news production, mainly as a drafting tool. However, an emerging trend is to use AI as a fact checker, including on the spot during interviews.

80% of news organisations use AI in news distribution. While this usage is somewhat lower than in the earlier two stages of the news cycle, the range of use cases was the widest:

  • speech-to-text and translation AI are used to multiply content across languages and formats.

  • AI-driven search engine optimisation tools can help newsrooms boost discoverability of stories.

  • AI-powered social media distribution tools like Echobox and SocialFlow are used to optimise social media content scheduling.

  • personalisation and recommendation systems are used to match content with interested audiences more accurately and at scale. As one respondent said:

We have a multi-layered set of rules for customising our content to individual news outlets, so it meets all of their internal rules for word-use from British or American spelling, to rules regarding biassed words, opinionated words, cliches, hyphenated words, and so on.
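A minimal sketch of what such a layered rule set might look like in code - the spelling map and word lists below are illustrative assumptions, not the respondent's actual rules:

```python
# Minimal sketch of layered house-style rules of the kind the respondent
# describes. The spelling map and word lists are illustrative assumptions.
import re

BRITISH_TO_AMERICAN = {"colour": "color", "organisation": "organization"}
FLAGGED_BIASED = {"hysterical", "exotic"}        # illustrative examples
FLAGGED_CLICHES = {"at the end of the day"}      # illustrative examples

def apply_house_style(text, american_spelling=True):
    """Apply one outlet's spelling rules and flag words needing human review."""
    if american_spelling:
        for british, american in BRITISH_TO_AMERICAN.items():
            text = re.sub(rf"\b{british}\b", american, text)
    warnings = [w for w in FLAGGED_BIASED if re.search(rf"\b{w}\b", text)]
    warnings += [c for c in FLAGGED_CLICHES if c in text.lower()]
    return text, warnings

styled, warnings = apply_house_style(
    "The organisation gave a hysterical response.")
print(styled)    # "The organization gave a hysterical response."
print(warnings)  # ["hysterical"] - flagged for an editor, not auto-replaced
```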

The LSE survey more tellingly described this use of AI as “tailoring content to a specific medium or audience”, which goes to the editorial risks discussed below.

Some of the more innovative uses of AI reported in the LSE survey were:

  • ‘robot interviewer’:

If there is a specific issue that affects lots of people, a chatbot could perform rudimentary interviews to get a general feel for what people are saying and out of those basic interviews, the more interesting cases could be followed up with an interview by a journalist.

  • ‘robot newsreader’:

We have created a presenter and his programme 100% with generative artificial intelligence, the image, what he looks like, what he says, the voice... everything is AI, but supervised.

Since its 2019 survey, the LSE has “observed a broad increase in preparedness” of news organisations for AI, but institutionally they still seem to be behind the curve of actual AI use in their news cycles:

  • around 53% of respondents said they were not yet ready, or only partially ready, to deal with the challenges of AI integration in the newsroom.

  • around 40% of organisations said their approach to AI technologies in the newsroom has not changed much since LSE's 2019 survey, which, as the LSE report notes, misses the fresh challenges which generative AI presents.

  • only around a third of respondents said their organisation had an AI strategy or was currently developing one, similar to the results in LSE’s 2019 survey. Some respondents, perhaps reflecting a self-image of journalists as rugged individuals, thought the lack of an organisational AI strategy was a positive: “[o]ur organisation does not have a formal strategy for AI-related activities. We rely on the initiative and enthusiasm of some of our colleagues who are interested in AI.”

  • while 85% of respondents have at least experimented with generative AI such as ChatGPT, respondents were divided on how different managing generative AI would be from managing earlier AI: 40% viewed generative AI as presenting new challenges in the newsroom, but 52% were unsure.

Is there a digital dividend in newsrooms?

More than half of respondents cited increasing efficiency and enhancing productivity as core objectives driving their adoption of AI.

But, possibly reflecting the early stages of AI adoption, “the successful use of AI technologies varied widely among respondents”. Around 25% said the impact of AI adoption on their workflows and processes in the newsroom has been significant, with some reporting time savings for journalists of 80% on the more humdrum tasks. The other 75% said that they have not yet witnessed a noticeable impact, but expect to in the future.

As in other sectors of the economy, there is concern that AI could eliminate human journalists’ jobs. However, 60% of respondents said they had not yet seen any impact on existing newsroom roles.

Interestingly, respondents were more focused on the need to create new jobs to provide technical support for AI in the newsroom. The lack of that technical support was identified by 41% of respondents as the single biggest barrier holding back adoption of AI in the news cycle.

There was also a widely expressed view that if AI is to be effectively utilised in the news cycle, AI technical expertise needs to be embedded directly in newsrooms, not in traditional IT departments. As one respondent said:

We convinced our IT department that while prompt engineering does require a certain technological understanding, IT staff are not equipped to assess the result when it comes to journalistic production. On the other hand, designing a successful prompt, getting the machine to tell what I want it to tell, has some similarities with journalistic processes. We are already training a journalist in prompt design.
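As a sketch of what journalist-led prompt design might look like in practice - the prompt wording, model choice and editorial constraints below are illustrative assumptions, not the respondent's actual setup:

```python
# Minimal sketch of an editorial prompt owned by a journalist, using
# OpenAI's Python client (pip install openai). The prompt wording, model
# choice and constraints are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDITORIAL_PROMPT = (
    "You are a sub-editor. Summarise the article below in three sentences "
    "for a general audience. Use British spelling, attribute every claim "
    "to its source, and do not add facts that are not in the article."
)

def draft_summary(article_text: str) -> str:
    """Generate a draft summary; a journalist reviews it before publication."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": EDITORIAL_PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```

The respondent's point is that iterating on EDITORIAL_PROMPT, and judging whether the output meets newsroom standards, is editorial work rather than IT work.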

90% of respondents also thought that AI literacy is now a core competency for all journalists - with training needed to upskill current journalists, and as an employment criterion for future journalists.

Editorial impact

Unsurprisingly, more than 60% of respondents expressed concerns about the ethical implications of AI integration for editorial quality and other aspects of journalism.

A common concern was that, since AI systems mirror societal biases, reliance on AI technologies could exacerbate biased news coverage and misrepresentation of marginalised groups - biases which a human journalist would be expected to ‘balance out’.

But the LSE reports that when asked how they were managing bias, “[f]ew organisations provided solid examples”, which is concerning given the extent to which AI is already being used in the news cycle (and the lack of overarching AI policies which would be expected to address bias mitigation). Two typical responses were:

Although I understand the concept of de-biasing, I don’t even know the steps of doing so or even how to implement such a strategy.

I can’t say we’ve done that yet but debias training is being talked about. That is the aspect of AI that we’ve found is the most time consuming so I do worry that it might not be prioritised.

However, there was a widely held view that keeping ‘humans in the loop’ when using AI in newsrooms is crucial - not only to mitigate potential harms like bias and inaccuracy from AI systems, but also because AI cannot (currently) match the core skill of a journalist: understanding and conveying ‘context’. One respondent summed up this view as follows:

Context and interpretation is everything in our industry, and this is something that AI technologies will struggle to duplicate. We cannot let our audiences think that we have outsourced this critical function to technology.

Some argued that AI should be kept out of editorial work altogether. Others argued that transparency requires readers to be informed when content is produced using AI, but as the LSE report notes:

It is important to note that today it is almost impossible to perform journalistic duties without using AI technologies in some way, however minor. So it is not clear where the line is drawn between an AI-assisted production process that requires disclosure and one that does not.

North-South AI divide

The LSE study went out of its way to survey media organisations from the Global South on their use of AI. Many media organisations in the Global South are resource-constrained, often operating as public interest, not-for-profit news sources to counter state-controlled media. AI was seen as a way to boost their capability to produce news, for example by translating stories into the 200 languages of the Indian subcontinent.

However, respondents from the Global South identified the following challenges, which trace back to the fact that AI is largely trained on data from the Global North (and English-language content at that):

  • algorithmic bias is a potentially larger problem for content in languages other than English: as one respondent said, “AI generated models are built on databases that include bias especially when it comes to content in Arabic and this will be reflected in the AI generated content.”

  • voice AI tools “do not sound like Africans, [they are] not authentic at all”.

Also, the low level of trust in AI can be exacerbated in countries with authoritarian regimes, where some governments have been quick to deploy AI-powered content to reinforce their control of traditional media. One respondent said of ‘Allam’, a Saudi Government-developed chatbot similar to ChatGPT:

This is a local model, do we trust the datasets used by Arab state institutions? [One wonders] if the datasets used were balanced or representative or if the data were manipulated? Unfortunately, this is one of the issues we deal with regionally. We don’t have pan-Arab models created by independent Arab institutions whose choices when it comes to training datasets can be trusted.

Conclusion

The global CEO of News Corp has described the use of AI in journalism as potentially producing “maggot-ridden mind mould.” The LSE study takes a more nuanced view:

Journalism is a special practice. On the one hand it is around the world a sector under great commercial, political and competitive pressure. It is weak in resources compared to the giant corporations developing this technology. The potential for deep structural threats to journalism in the future must be part of our thinking now. On the other hand, news organisations have shown remarkable resilience and innovation in sustaining and sometimes thriving despite the challenges they have faced. It might even be that in a world where genAI is such a power, for ill as well as good, public interest journalism will be more important than ever.