As artificial intelligence sweeps through offices, Khaleej Times presents a prognosis of a future in which a new culture is taking root
Newsrooms are on the cusp of the artificial intelligence (AI) curve, a paradigm shift unlike any other. Along with the web, mobile, and social media transformations, which are still rewriting journalism, AI represents a fundamental shift in the way we think, create and work. Journalism will never be the same again. To understand the nature of the AI revolution sweeping publishing, KT’s Vinay Kamat spoke to Dr Mario Garcia, who has tracked and analysed the evolution of media over four decades.
"AI can be of tremendous assistance to humans for a variety of tasks, but that process begins with human input, requires human supervision, and ends with human evaluation and amendment," says Dr Mario Garcia.
Dr Garcia, CEO and founder of García Media and senior adviser on news design and adjunct professor at the Columbia University Graduate School of Journalism, has designed content strategies for a host of global media brands, including Khaleej Times. When KT caught up with him, he was busy shaping his new book, AI: The Next Revolution in Content Creation, which he has been researching for more than two years. Always an optimist, he says: “AI is a fascinating development, one that some equate to the inventions of printing and the Internet. It’s a dance between humans and AI.”
Q) Dr Garcia, you have been tracking the AI-driven shifts transforming newsrooms and, in our conversations, you seem excited about what’s happening. What fuels such deep optimism?
I have always been an early adopter of new technologies, especially those that facilitate how we communicate and consume information. AI represents a major revolution in how we research information, draft stories and carry out some of the tasks now performed by humans. At the same time, my new book is not a love letter to AI. It is a guide to accepting the inevitable and a salute of respect for the human endeavour. AI holds the promise of revolutionising the way we think, create, and share knowledge. It can analyse vast amounts of data, uncover hidden patterns, and generate novel ideas with great efficiency. Yet AI is ultimately a creation of the human intellect. I am excited about what I refer to as the “dance” between humans and AI. While they are on the dance floor, they are not locking arms just yet, but the dance has definitely begun.
Q) Journalism has been in a state of flux since the advent of the Internet. While the tenets of journalism—being factual, being balanced, being complete, being ethical—will always remain intact, what are the new tenets?
Those tenets of journalism remain intact and, in fact, have never been more important. However, AI adds new layers of concern. Journalists appear particularly concerned about what some refer to as the dark side of algorithms: bias and discrimination. AI systems can be biased and discriminatory, reflecting and amplifying the biases present in the data they are trained on. This can result in unfair decisions in areas such as hiring, lending, and criminal justice.
There are privacy concerns as well: AI’s impact on surveillance and data security, the erosion of privacy rights, and the potential for misuse of personal data collected by AI systems for surveillance and targeted advertising. Not to mention the so-called “hallucinations”, when AI goes a bit crazy and takes a detour into dark corners that have no footing in reality. That is a major concern for journalists, for whom inaccuracy sits at the top of what is not acceptable. Inaccuracy is perhaps the greatest limitation associated with generative AI, and journalists often express particular concern about AI-generated content for that reason. I would say that the new tenet with AI is this: Embrace AI for how it can help you as a journalist to access data and move faster through some traditional reporting tasks, but make it a point to have a human verify everything AI creates at the end of the process.
Q) Obviously, AI is set to take over routine newsroom tasks, or data-heavy work, leaving creative work to journalists. How are newsrooms across the world preparing for this transition to a commodity-and-creativity framework?
I sense that editors in newsrooms around the planet are well aware of the tremendous importance of AI, but how they approach it and welcome it varies. I see three major trends: a) A large number of newsrooms have an informal approach to AI, allowing those journalists who are interested in AI to experiment with it, but not forcing it upon everyone; b) Some editors are embracing AI with open arms and making it mandatory that all journalists get involved with AI in some form; c) There are some newsrooms engaging in a “wait and see” mode, unsure of how AI has a place within their work strategies, with more fear than positive expectations.
Q) Much of the current debate hovers around the future structure of newsrooms. What would an AI-driven newsroom look like? Bottom-heavy, with more writers? Top-heavy, with more tech-savvy editors?
Journalists and news organisations are exploring ways to use AI responsibly. They emphasise the importance of transparency, accountability, and human oversight in AI systems. Fact-checking and verification processes remain essential, and journalists are adapting their skills to critically analyse AI-generated content. This is where creating guidelines and protocols for AI use in newsrooms is essential. Some newspapers have already created “AI exploration groups”, and The Financial Times has been the first to appoint an Artificial Intelligence editor. AI can only do about 19 per cent of what a traditional journalist can do, so I don’t anticipate that many journalists will be replaced by AI. However, if all that a person does anywhere is collect data and transcribe it into digestible bits of information, those jobs will be lost. Remember, a reporter has feelings and emotions and can make emotional connections; AI can only give you data. Indeed, AI can draw on data derived from 500 billion tokens of text. That is important to remember for anyone thinking that human reporters can be replaced. Not yet, I would say loudly.
Q) Much of AI’s promise lies in helping newsrooms understand their audiences better. That’s a given. We have seen big gaps in newsrooms in fully understanding users. Having experienced interactivity and personalisation on social media platforms, which are constantly iterating, will audiences demand similar experiences?
AI has already been used for years to gauge audience profiles: knowing who is reading which articles, with what frequency, and at what time each day. I would say that AI first came into newsrooms via marketing departments, to analyse audiences, to inform editors about which content is preferred, and to evaluate engagement time. Now AI will spill into the actual creation of content. As more members of our audience engage directly with AI, as with ChatGPT, I think they will become savvier. But I also think that a byline indicating a story was written by a human will carry great value in this AI environment. In April 2023, Chris Moran, head of editorial innovation at The Guardian, wrote: “We haven’t yet announced a new format or product built on generative AI. Instead, we’ve created a working group and small engineering team to focus on learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organisations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use.”
Q) During the last few years, we have been trying to reconcile three things: what the editor thinks is right, what the advertiser demands, and what the audience finds engaging. Will we see greater convergence between the three in an AI-driven content ecosystem?
This will probably happen, and is already happening. One can’t be a journalist today and NOT be connected to the technology that shapes how content is created and how we use the data available about audiences and engagement. AI will facilitate these processes. As a new semester begins for me as adjunct journalism professor at Columbia University’s School of Journalism, I consider it my obligation to introduce AI on the first day of class, to be transparent about how I use it, and to show how they, the students, can use it advantageously, always disclosing the use of AI in their stories and projects. I see AI as a well-informed colleague sitting by my side, but one that I must always double- or triple-check to make sure that what it offers me is accurate and not biased. If technology has been important to journalists since the start of the mobile era, today AI makes even greater demands on our understanding of the synergies between journalism and technology.
Q) What could be the pain points in a symbiotic relationship between AI and human journalists? Are we talking about creating a new people culture?
Publishers worry, too, that just as they have finally made the transition to digital, here come the chatbots: artificial intelligence tools from Google and Microsoft that answer search queries in full paragraphs rather than a list of links, making results more engaging and shareable. The world of AI represents a new culture, or frontier, to conquer. It is not a culture that everyone embraces, but journalists need to dip their toes into that world, even lightly, to get to know it. Five years from now, AI will not be a “love it or leave it” type of choice. By then, I think we will have overcome the doubts and fears about AI that we have today, and will have come to accept it the way we accept the Internet now. The main mantra of my new book on AI is this: AI can be of tremendous assistance to humans for a variety of tasks, but that process begins with human input, requires human supervision, and ends with human evaluation and amendment.
Q) We are still at the beginning of the AI curve. We do not know what kinds of biases will creep into content creation and delivery. How transparent should a news brand be with its audiences? Should publishers provide information about the AI sources and AI tools used to create content? Even with disclaimers, audiences will always set the bar high. Gatekeeping has to be watertight.
Total transparency is key, and the only way to go. All news organisations need to create guidelines now for how their teams approach AI. Of course, this is a subject in its infancy, so be prepared to update those guidelines as you go; nobody is an expert on AI yet. AI use may vary from newsroom to newsroom, so create an AI task force and develop publishable guidelines that tell your readers how AI may be utilised, or how it would never be used. It is a good idea to draft a simple, specific set of guidelines for the use of AI, including tips, pros and cons, and dos and don’ts, and to update it frequently. WIRED magazine, for example, was one of the first titles worldwide to publish its AI guidelines.
Q) AI could also be a big differentiator. So, does it make sense for publishers to develop content-specific AI in-house? Will that be the next big leap for media houses?
I think that each publishing house will find its own best way around AI, and for some that could mean using AI at first for data-generated content, such as sports scores, market statistics and the weather. This is already happening: The Associated Press produces about 3,000 stories a day using AI natural language generation. Readers know that such content is AI-generated, an important point to remember.
Q) What will an editor in the new AI paradigm look like? Will he be more of a gatekeeper than an opinion-maker? Will he eventually be a bridge between high technology and journalistic integrity? What should be his topmost priority?
The editor in the new AI paradigm will continue to be a gatekeeper, but no longer just as a sheriff directing the traffic of human interactions, biases, controversial topics, and the pros and cons of a story. The new editor will also be “dancing” with robots, with their arsenals of data and their ability to weave words from 175 billion parameters, but also with their tendencies to take hallucinatory detours or succumb to biases.
The job of an editor in the AI landscape is more challenging, but also more fun. Smart editors will use AI for what it does best: conveying data fast, summarising, and offering story ideas. They will then realign the work of humans around what humans offer that AI does not: creativity, intuition and emotional intelligence, which bring a depth of understanding and subjective interpretation to the dance. Humans possess the ability to imagine, to dream, and to express themselves through art, literature, music and other forms of creative expression. Human creativity fuels the initial spark, the inspiration that sets the stage for the dance.
On the other hand, AI brings computational power, data analysis, and pattern recognition to the partnership.
Finally, at 76, I look at artificial intelligence with the curiosity of a toddler, the amazement of a Baby Boomer, and the sceptical eye of a media professional. But I’m also a consumer of content, an academic, a grandfather. And over the course of my five-decade career in visual journalism I have always been an “early adopter” of the various tech developments that have shaken and blessed the media industry. Today, I am lucky to have an insider’s view into newsrooms across the world and access to some of the best minds experimenting with AI.
vinay@khaleejtimes.com