
MediaMorph Edition 50 - by HANA News

Surreptitious AI use: Lack of leadership is the issue

Was this newsletter forwarded to you? Sign up here

The written-by-a-human bit

KPMG conducted a monster global survey last quarter (48,340 respondents in 47 countries) on Trust, attitudes, and the use of artificial intelligence: A global study 2025. The survey surfaces an interesting problem to solve: over half the respondents admitted to using AI surreptitiously, and we can safely assume the true number is even higher.

“While the rapid adoption of AI is delivering benefits, many employees are using AI in complacent and inappropriate ways, increasing risks for organizations and individuals and raising quality issues. For example, almost half admit to using AI in ways that contravene organizational policies and uploading sensitive company information, such as financial, sales, or customer information, to public AI tools. Three in five report they have seen or heard of other employees using AI tools in inappropriate ways. Two in three report relying on AI output without evaluating the information it provides, and over half say they have made mistakes in their work due to AI. What makes these risks even more challenging to manage is that over half of employees avoid revealing when they use AI to complete their work and present AI-generated content as their own. These findings highlight a lack of transparency and accountability in the way AI, particularly generative AI tools, are being used by employees at work.”

It would be easy to “tut-tut” such behaviour, known as secret cyborging or BYO-AI, which clearly puts company IP at risk. However, the failure lies not with employees but with leadership, who have failed to implement the right strategies, guardrails, training and upskilling. The report found only 60% of organisations have an AI strategy in place, and only 54% have a responsible-use policy. To compound the problem, the consumer tools employees use privately on weekends are far better than the subpar enterprise solutions on offer.

Leadership needs to clarify and communicate policies and offer appropriate training: not just the basics, but increasingly how to manage teams of AI agents.

For newsrooms, Thomson Reuters’ Three steps to an AI-ready newsroom: A practical guide to responsible policies is not a bad place to start.

At Mathison, we have guided media clients on this journey by crafting training, policies and processes that allow colleagues to emerge from the shadows and share their success stories. Wouldn’t you rather know?

Mark Riley, CEO, Mathison AI

Take Hana.news for a spin today and publish your own article summaries on any topic

Hi

AI will be a substantial competitive advantage for those who master it. But most are still struggling with adoption because rolling out a chatbot or copilot isn’t enough to get people using it.

At Mathison AI, we are helping CEOs like you rapidly discover and prototype high-impact AI use cases tailored to your business.

We’re currently working with global and local enterprises to identify areas where AI can drive real operational value — from automation and cost savings to entirely new product ideas.

If you’re exploring AI and want a clear, low-risk way to get started, I’d love to share how we approach it through fast education sessions and hands-on prototyping.

Book a call today

Mark Riley [email protected] 

AI and Media and Journalism

Three steps to an AI-ready newsroom: A guide to responsible policies

Trust

This practical guide helps newsrooms navigate the ethical risks of AI applications by emphasising transparency, accountability, and fairness. It encourages collaboration among journalists, technologists, and ethicists to foster ethical awareness and maintain public trust while leveraging AI technologies.

Read more at Trust (1 min)

The Impact Of Artificial Intelligence On Press Freedom And The Media

Forbes - May 4, 2025

Press freedom is under siege globally, with journalists facing violence and censorship, while AI technology exacerbates these challenges by spreading misinformation and facilitating targeted attacks, particularly against women. On World Press Freedom Day, Mr. Türk emphasised the need for international action to protect journalists and ensure transparency in AI's impact on media.

Read more at Forbes (4 mins)

“The article will die, should die, but storytelling will not”: Notes from the Nordic AI in Media Summit

Niemanlab

Last week in Copenhagen, a diverse group of media professionals gathered to discuss the future of journalism amid AI advancements, focusing on enhancing reporting, audience engagement, and ethical considerations. The event underscored the importance of collaboration within the industry to navigate the complexities of AI integration while upholding journalism's core values.

Read more at Niemanlab (1 min)

Journalism facing new threats from AI and censorship

UN News - May 2, 2025

Volker Türk underscored the vital role of independent media in fighting disinformation amid global challenges, stressing the urgent need for states to protect journalists from violence and harassment while calling for greater transparency in AI and tech platforms' impact on journalism. He announced a collaborative effort with UNESCO to guide tech companies in assessing the risks posed by their technologies.

Read more at UN News (3 mins)

Chaos and Credibility: A Snapshot of How AI Is Impacting Press Freedom and Investigative Journalism

Gijn

World Press Freedom Day underscores the critical challenges facing journalism today, as press freedom plummets globally amid rising authoritarianism and the disruptive impact of AI. While artificial intelligence offers tools to enhance investigative reporting, it also poses threats like misinformation and job displacement, prompting urgent discussions on accountability and integrity in the media landscape.

Read more at Gijn (13 mins)

Journalism industry embraces AI in next stage of news coverage

Cronkite News - May 1, 2025

The rise of AI in journalism sparks a heated debate over its benefits and risks, with experts advocating for its use as a supportive tool rather than a replacement for human journalists. While some organisations have faced challenges with AI-generated content, others demonstrate its potential to enhance journalistic practices, emphasising the need for thoughtful integration and oversight.

Read more at Cronkite News (6 mins)

Echoes & Algorithms: The Risks and Realities of AI in Journalism — Weave News

Weave News - April 30, 2025

AI is reshaping journalism, offering both efficiencies and ethical dilemmas for grassroots media, as tools like ChatGPT risk overshadowing unique voices and perspectives. The Algorithmic Literacy for Journalists initiative advocates for critical engagement with AI to maintain editorial autonomy and uphold core journalistic values, ensuring technology enhances rather than undermines independent reporting.

Read more at Weave News (4 mins)

Artificial Intelligence and the Future of Journalism: Risks and Opportunities

United Nations Western Europe - May 2, 2025

On World Press Freedom Day, we explore how AI transforms journalism by streamlining tasks and enhancing data analysis while posing risks like disinformation and deepfakes that threaten credibility and public trust. As the industry navigates economic pressures, accurate and human-centered reporting remains essential to upholding democracy.

Read more at United Nations Western Europe (4 mins)

Americans largely foresee AI having negative effects on news, journalists

Editor and Publisher - April 29, 2025

A recent Pew Research Center survey reveals that nearly half of Americans are worried about the negative effects of artificial intelligence on news production and consumption, with only 10% anticipating a positive influence. This growing concern highlights broader anxieties about the future of journalism as AI technologies become more prevalent.

Read more at Editor and Publisher (1 min)

AI and Academic Publishing

Academic publisher Wiley calls for 'ethical and legal data sourcing practices within AI'

The Bookseller

Wiley has taken a strong stance against the illegal scraping of copyrighted content for AI development, urging developers to obtain authorization and ensure proper attribution. The publisher emphasizes the importance of ethical data-sourcing practices and has released guidelines to support authors in navigating emerging technologies while fostering collaboration with ethical AI developers.

Read more at The Bookseller (2 mins)

Academic publishing today: what you need to know

THE Campus Learn, Share, Connect - April 29, 2025

The evolving landscape of academic publishing, influenced by open access and generative AI, presents both challenges and opportunities that require essential skills in writing, editing, and strategic publication. This guide highlights the importance of mastering these skills, alongside insights from experts on enhancing research impact and navigating the complexities of modern publishing.

Read more at THE Campus Learn, Share, Connect (10 mins)

Wiley issues position statement on AI content "scraping"

Research Information - April 29, 2025

Wiley has reaffirmed its dedication to intellectual property rights and ethical practices in AI, advocating for responsible data sourcing and collaboration among authors, scholarly societies, and AI developers. The publisher highlights its flexible licensing frameworks and successful partnerships with like-minded AI developers to promote transparency and proper attribution in the evolving landscape of artificial intelligence.

Read more at Research Information (2 mins)

"What I noticed raised my concerns" - why using AI in scientific peer review is fraught with danger

Startupdaily

Timothy Hugh Barker emphasises the importance of transparency in using AI for peer review, arguing that authors should be informed about AI's involvement to maintain the integrity of academic publishing. This awareness is crucial for fostering ethical discussions around accountability and bias in the review process.

Read more at Startupdaily (1 min)

The Great AI Rubbish Heap

Infotoday

William Badke's critique of generative AI highlights its potential to overwhelm the information landscape with low-quality, unreliable content, raising critical concerns about "model collapse" and students' reliance on AI for research. As AI-generated material increasingly infiltrates academia and beyond, it threatens to undermine critical thinking skills and the integrity of human knowledge.

Read more at Infotoday (9 mins)

AI In Academia: Tool Of Future Or Threat To Integrity?

New Sarawak Tribune - April 27, 2025

AI is revolutionising education, with applications like personalised learning platforms and automated grading systems enhancing student outcomes and research efficiency. However, its integration raises ethical concerns, prompting the need for AI literacy and equitable access to ensure that educators remain vital facilitators in an evolving academic landscape.

Read more at New Sarawak Tribune (6 mins)

Meditation and Critical Thinking are the ‘Key to Meaningful AI Use’

Fox2now

As AI becomes integral to our lives, experts urge the development of meditation and critical thinking skills to enhance mindfulness, manage stress, and critically evaluate information. This balanced approach empowers individuals to navigate an automated society with clarity and insight.

Read more at Fox2now (1 min)

This newsletter was partly curated and summarised by AI agents, which can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here