AI & Media
Tool to aid or hurt?
By Sabina Inderjit
Imagine having a super-smart assistant that can gather information, write drafts, generate videos, clone voices, and more. These are just some of the incredible possibilities artificial intelligence (AI) offers the media. Yes, it is transforming journalism by handling routine tasks, analysing vast data sets, and enhancing content delivery. But like any powerful tool, AI comes with its own set of ethical and legal dilemmas.
The power and peril of AI have sparked heated discussion in the media, often referred to as the Fourth Estate for wielding influence, shaping public opinion and acting as a watchdog of democracy. Will AI help safeguard that role or play mischief? At the same time, its rapid development has stoked fears among journalists that they could become obsolete.
That may not be the case, as speakers during a session, ‘Shaping the Future of Journalism in the AI Media Era: Copyright and Ethical Challenges,’ at the World Journalists Conference 2025, organised last month by the Journalists Association of Korea, would have the 70-odd participants from over 50 countries believe.
AI won’t replace journalists; it will replace journalists who don’t use AI. Good journalism relies on human traits that AI lacks: empathy, curiosity, and the ability to ask hard questions in real time. AI won’t meet informants, uncover hidden documents, or attend a press conference and challenge a prime minister. When disaster strikes, it is human journalists who head to the scene to speak with witnesses and capture the raw, emotional truth.
The pluses and minuses of the evolving media landscape shaped by AI were spelt out by speakers from China, Poland, South Korea and the US. AI can achieve remarkable things. In Kunshan, East China’s Jiangsu Province, police used AI to catch criminals who had swindled $145,000: the system traced the money in just 10 minutes and stopped half of it from being transferred. The suspects were caught, highlighting how AI can analyse data faster than any human and help resolve complex cases that might otherwise go unsolved.
On the flip side, the same power can be misused. In Kunming, Yunnan Province, a fraudster used AI face-swapping software to impersonate a victim’s friend and nearly tricked them into sending $43,500 worth of gold bars. Fortunately, police intervened in time. In another case, in Beijing, a voice actor’s voice was cloned without her consent and used in audiobooks. The court ruled this a violation of her rights, a clear example of how AI can cross ethical and legal boundaries.
While AI offers substantial benefits, it also poses serious threats to privacy, identity, and intellectual property. There is a need to strike a balance between embracing innovation and safeguarding citizens’ rights. AI use should never infringe on reputation, privacy, or image rights. And it is not just about creating regulations: education, awareness, and continuous research are vital to establishing ethical boundaries.
In early March, at China’s ‘Two Sessions’, the country’s most significant political meetings, many lawmakers and experts called for deeper research and clearer AI legislation. Globally, this conversation is intensifying, particularly around copyright issues and fair use of journalistic content.
In the US, The New York Times has filed a lawsuit against OpenAI over alleged copyright infringement, and similar concerns are growing in South Korea. The Korean Newspaper Association (KNA) determined that Naver, the dominant web portal and search engine often referred to as the “Google of Korea”, had incorporated news content from media outlets into its AI services without proper authorisation. It plans to file formal complaints against both domestic and international tech companies, including Google and OpenAI, for using news content in AI training without permission.
The KNA argues this unauthorised use violates copyright laws and constitutes an abuse of market dominance under the Monopoly Regulation and Fair Trade Act. Without clear legal frameworks, the unchecked use of news content by AI could erode journalism’s economic foundations, severely undermine its sustainability, and remain a persistent challenge.
In Poland, hundreds of editorial offices joined a nationwide media protest in July 2024 calling for changes to copyright law. They demanded a mechanism to negotiate payment for content used by global tech companies, one that would be a real tool rather than a legal fiction. Eventually a compromise was reached, albeit an unsatisfactory one. Negotiations between publishers and Google are ongoing, and if an agreement is not reached, the state administration will need to step in.
In Poland, where the political and media landscape is polarised, the ethical use of AI becomes particularly important. But the problem affects practically every country, especially given the global geopolitical situation: the war in Ukraine, the tense situation on the Korean Peninsula, and the massive changes in US politics. AI algorithms, which are based on data patterns and user behaviour, can deepen media fragmentation and create information bubbles that further intensify existing political divides.
The EU was the first to adopt a comprehensive AI Act, followed by South Korea. These regulations include provisions to label AI-generated content and outline prohibited uses. However, ambiguity remains around what qualifies as ‘creative input’ when AI helps write an article. The EU law won’t be fully applied until 2026, and AI’s capabilities may evolve dramatically by then.
Additionally, as traditional media loses influence in the advertising market, distribution models are shifting in ways that weaken competition and diversity in the media landscape. To address these concerns, the EU adopted the Digital Single Market Directive in 2019, requiring platforms like Google to sign agreements with publishers for content usage.
Under current EU copyright laws, training AI on content is allowed unless explicitly forbidden by rights holders. Yet publishers argue that if tech giants use their work, they should compensate accordingly. Google, for example, benefits from journalistic content while trying to position itself as a publisher, without paying for the work it leverages. This places publishers in a difficult spot: they face the dominance of tech giants while also needing to fund quality journalism.
Fortunately, they are doing so, using AI as an ally and not a replacement. For example, Ringier Axel Springer Polska, one of Poland’s largest media companies, uses AI to handle routine tasks, like creating localised weather forecasts during night shifts, freeing journalists to focus on meaningful work. Tools like AI-assisted article summarisation help readers quickly digest key stories when they are short on time, improving the user experience.
Another example is that of The New York Times. In an October 2024 investigation titled “Inside the Movement Behind Trump’s Election Lies,” it used AI to analyse over 500 hours of video from the Election Integrity Network. AI translated and indexed 5 million words from the recordings, allowing journalists to find recurring themes and identify key figures. But the final product was carefully verified by human reporters, and the AI usage was explained to readers.
This blend of AI efficiency and human judgment is key. Trust and credibility take years to build and seconds to lose. Transparency, verification, and ethics must remain central to journalism. And while AI is transforming journalism, it does not diminish the role of journalists; it elevates it. When used ethically and intelligently, AI enhances reporting, speeds up workflows, and allows for deeper investigative work.
To shape the future responsibly, journalists must learn how AI works, understand both its risks and rewards, and continuously update the legal, ethical, and professional frameworks that govern it. The tools are here. It is up to the media to use them wisely.—INFA
(Copyright, India News & Feature Alliance)
New Delhi
9 May 2025