Will journalists become obsolete in the future? Will Artificial Intelligence (AI) take over newsrooms? And what will happen to jobs in the news industry? These are some of the most pressing questions being asked since ChatGPT, a conversational AI chatbot, shot into the limelight.


OpenAI’s AI-powered chatbot ChatGPT was launched in November 2022. Built on a large language model trained on vast amounts of text, it can answer general knowledge questions, provide specific information, translate text, generate creative writing, summarise long articles, and even write software code, among other things. 


This list of capabilities can be unnerving for many journalists around the world whose day-to-day work overlaps with much of it. 
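

To give a concrete sense of how such a capability might plug into a newsroom workflow, here is a minimal, illustrative sketch in Python of asking a ChatGPT-style model to summarise an article. It is our own example, not something from OpenAI or the report: it assumes the openai Python package (its pre-1.0 interface), an API key stored in the OPENAI_API_KEY environment variable, and a model name chosen purely for illustration.

# Illustrative sketch: asking a ChatGPT-style model to summarise an article.
# Assumptions (ours): the `openai` package's pre-1.0 interface, an API key in
# the OPENAI_API_KEY environment variable, and "gpt-3.5-turbo" as the model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarise(article_text, max_words=80):
    """Return a short summary of the given article text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a sub-editor who writes concise news summaries."},
            {"role": "user",
             "content": f"Summarise the following article in under {max_words} words:\n\n{article_text}"},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

print(summarise("Full text of a long news article goes here..."))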


A report by the Reuters Institute for the Study of Journalism, a UK-based research centre and think tank, explores these questions. 


According to the report, the use of AI in journalism is not new. Outlets around the world have been experimenting with it for some time. However, the possibilities of using AI have multiplied since ChatGPT came out, and journalists themselves have been testing the capabilities of such chatbots to write and edit.


What Experts Say


Francesco Marconi, a computational journalist and co-founder of the real-time information company AppliedXL, told the Reuters Institute that there have been three categories of AI innovation over the past decade: automation, augmentation, and generation. 


In automation, “the focus was on automating data-driven news stories, such as financial reports, sports results, and economic indicators, using natural language generation techniques.” 
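

To illustrate what that first, template-driven kind of automation can look like in practice, here is a small sketch of our own (not taken from the report) that turns structured market data into a one-line news item:

# Illustrative sketch of template-based natural language generation:
# turning structured market data into a short, publishable sentence.
def market_brief(index_name, close, previous_close):
    change = close - previous_close
    pct = (change / previous_close) * 100
    if change == 0:
        return f"The {index_name} closed unchanged at {close:,.2f}."
    direction = "rose" if change > 0 else "fell"
    return (f"The {index_name} {direction} {abs(pct):.2f} per cent "
            f"to close at {close:,.2f}, a move of {abs(change):,.2f} points.")

# Example with made-up figures:
print(market_brief("Sensex", 61_002.57, 60_806.22))
# -> "The Sensex rose 0.32 per cent to close at 61,002.57, a move of 196.35 points."

Early newsroom automation systems worked on this broad principle, if with far more sophisticated templates and data pipelines, long before generative chatbots arrived.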


In augmentation, “the emphasis shifted to augmenting reporting through machine learning and natural language processing to analyse large datasets and uncover trends.”
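

A toy example of that augmentation idea, again our own and using nothing beyond Python's standard library, might compare word frequencies across an older and a newer batch of documents to flag terms that are suddenly being mentioned more often, leaving a reporter to judge whether there is a story in it:

# Illustrative sketch of "augmentation": surfacing terms whose frequency has
# jumped between two batches of documents, as a crude trend-spotting aid.
import re
from collections import Counter

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

def rising_terms(old_docs, new_docs, top_n=5):
    old, new = word_counts(old_docs), word_counts(new_docs)
    gains = {word: new[word] - old.get(word, 0) for word in new}
    return sorted(gains.items(), key=lambda item: item[1], reverse=True)[:top_n]

# Example with two tiny, made-up batches of filings or press releases:
last_month = ["quarterly results were in line with forecasts"] * 3
this_month = ["layoffs announced after weak quarterly results"] * 3
print(rising_terms(last_month, this_month))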


Now with ChatGPT, “powered by large language models capable of generating narrative text at scale,” Marconi says the focus has shifted to generating journalistic pieces. 


Professor Charlie Beckett, head of the Polis/LSE JournalismAI research project, told the Reuters Institute, “AI is not about the total automation of content production from start to finish: it is about augmentation to give professionals and creatives the tools to work faster, freeing them up to spend more time on what humans do best.” 


The Excitement Around AI in Newsrooms


Newsrooms have been using AI-powered tools to automate and augment their work. For example, some digital portals have been using AI to publish sports scores, market indicators, and other data-driven updates. Global news agencies like Reuters, AFP, and AP, as well as smaller outlets, have been doing this, according to the report. 


Another use case is augmentation. For example, the Argentinian newspaper La Nación began using AI to support its data team in 2019, the report says. 


However, with ChatGPT and other such bots, we can ask a chatbot to write a lengthy, balanced article on a subject, or an opinion piece argued from a particular standpoint. We can even ask it to do so in the style of a well-known writer or publication.


The bot, created by Microsoft-backed OpenAI, seems to agree. 



We Asked ChatGPT, "What can you do for a newsroom?"


News Outlets That Are Using AI or ChatGPT


According to the report, several well-known outlets have announced plans to use generative AI or are already incorporating it into their content.


BuzzFeed recently announced it will use AI for its personality quizzes. The New York Times used ChatGPT to create a Valentine’s Day message generator.


German publisher Axel Springer, UK publisher Reach, and Italian newspaper Il Foglio have also started exploring AI for content generation. 


The Newsroom, a company founded in 2021 by Pedro Henriques and Jenny Romano, offers AI-generated daily summaries of the main news stories. 


ABP Live has previously reported on NewsGPT, a first-of-its-kind website that offers news reports generated entirely by artificial intelligence (AI). NewsGPT CEO Alan Levy says, "For too long, news channels have been plagued by bias and subjective reporting. With NewsGPT, we are able to provide viewers with the facts and the truth, without any hidden agendas or biases."


Does This Mean Journalistic Jobs Are Under Threat?


The answer, as of now, seems to be ‘no’. 


Madhumita Murgia, the AI editor at the Financial Times, told the Reuters Institute, “Based on where it is today, it's not original. It's not breaking anything new. It’s based on existing information. And it doesn't have that analytic capability or the voice.”


There are also questions about the authenticity and originality of the content it produces. Francesco Marconi said, “These models often have difficulty generating accurate and factual information regarding current events or real-time data.” 


“The new crop of generative AI is not accurate when it comes to computing exact calculations. Unchecked algorithmic creation presents major risks as it relates to a healthy information ecosystem,” Marconi adds, pointing to how both Google’s and Microsoft’s new AI-powered tools have served up factually incorrect information. ChatGPT has also pointed a reader to a reference that doesn’t exist, the report says. 


Pair this with the demand for deeper analysis or a more developed take on a subject, which is what readers look for when they turn to news outlets, and it seems that journalists are safe for now. 


ChatGPT also seems to agree with this. 


We Asked ChatGPT, "Can ChatGPT replace journalists?"


Debate Around AI Bots Using News Reports Without Permission 


As we said before, AI bots such as ChatGPT are built on large language models. They are trained on a vast set of content and data and generate output based on what they were trained on; they are not coming up with anything entirely on their own. 


Francesco Marconi, who is also the author of “Newsmakers: Artificial Intelligence and the Future of Journalism”, said in a tweet on February 15, “ChatGPT is trained on a large amount of news data from top sources that fuel its AI. It's unclear whether OpenAI has agreements with all of these publishers. Scraping data without permission would break the publishers' terms of service.”


Later, he also tweeted about how OpenAI used 45 million outbound links from Reddit to train GPT-2. He wrote, “Here's a list published by OpenAI of top domains from WebText (scrapped data from 45M outbound links from Reddit) used to train GPT2. My main question is whether it's fair to train AI on web content without explicit permission and attribution?”


OpenAI, on the other hand, says that the use of copyrighted works for training purposes constitutes fair use. It also acknowledges that AI developers may face "substantial legal uncertainty and compliance costs." 


Here is what ChatGPT said when we asked the same question.


We Asked ChatGPT, "Which specific news sources was ChatGPT trained on? Provide a list of the top news sources in your database."


We also asked ChatGPT which specific Indian news sources it was trained on. Here is what it said. 


We Asked ChatGPT, "Which specific Indian news sources ChatGPT trained on?"


Disclaimer: The article includes the responses given by ChatGPT (AI-driven chatbot developed by OpenAI) to various questions/questionnaire and ABP Network Private Limited (‘ABP’) is in no manner liable/responsible for any of such responses. Accordingly, reader discretion is advised.