MediaMorph Edition 48 - by HANA News
7 AI blockers and how to navigate them
Was this newsletter forwarded to you? Sign up here
The written-by-a-human bit
During times of turmoil, it is natural to postpone investment decisions until the skies clear. Investment in AI, however, is a clear lever for growth and efficiency to pull precisely when the outlook is uncertain. So, what is holding leaders back? Here are my observations, plus some mitigations.
Lack of discernible, provable ROI - as I often remind my clients, AI is not software. It’s not plug-and-play, and it can take time to train. The first version is the worst version. This can make CFOs nervous, and it leaves many pilots in the lab, starved of the time and data to improve. De-risk by building discrete sandbox experiments to test a hypothesis, then back the winners. Give each project a champion. Test in the wild as soon as possible.
Top-down vs bottom-up are not joined up - while it’s good to have strategic insight and a PE/VC mindset for some bigger bets (top-down), it’s vital to coordinate with shop-floor tools and adoption (bottom-up). There is no point building a bespoke AI CMS if your newsroom is already using off-the-shelf solutions.
Fear of hallucinations – pertinent for media companies, and one reason why Apple is so far behind in the AI arms race. Interestingly, OpenAI’s latest release, o3, seems to be a repeat offender. Clearly, the more sophisticated a model is in terms of reasoning, the more prone it is to go off-piste. Thirty-two per cent of leaders believe trust in the accuracy and fairness of AI outputs will be the greatest society-wide challenge with AI between now and 2030, according to KPMG’s quarterly AI Pulse survey.
Mitigate with clear human-in-the-loop practices, dedicated RAG techniques (eg retrieval grounded in newsroom archives) and AI fact-checkers like Full Fact. Finally, demand source attribution, as we always do at Hana News. (A minimal sketch of archive-grounded retrieval with attribution appears after this list.)
Fear of job losses – an interesting thought experiment – what if you can now do your existing role in half the time? Do you double your productivity and ask for a pay rise? Or take the dog for a second walk each day (my ‘fit dog’ theory of AI)? Or do you keep quiet? Or do you stop hiring? The optimist answers that we will all upskill and do more interesting, human-centric tasks. The sceptics will need to be won over. Augmentation: good. Redundancy: bad.
It’s moving too fast - caught in a head spin by classical AI, Generative AI, Agentic AI? It may feel prudent to sit out the latest wave and wait and see. Tell that to Kodak.
Opaque guardrails and frameworks - if your AI usage policies are unclear, overly restrictive or out of date, paralysis will follow. (Poynter kindly shared its excellent template here.) “Proceed with caution” is a great mantra and encourages openness and sharing, which comes back to good comms.
Poor processes – good processes build trust. Establish recognised champions and comms channels to spot opportunities, share success stories and test hypotheses. Take charge, stay in charge – I don’t think there is an AI for that.
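To make the hallucination mitigations above concrete, here is a minimal sketch of archive-grounded retrieval (RAG) with enforced source attribution and a human review step. It is an illustration under assumptions, not a reference implementation: the tiny in-memory archive, the keyword-overlap retrieval, the function names and the model choice are all placeholders, and the only external dependency is the OpenAI Python client.

```python
# Minimal sketch: ground answers in a newsroom archive and demand citations.
# Assumptions (illustrative only): a small in-memory archive stands in for a
# real index, retrieval is naive keyword overlap, and the OpenAI chat API
# generates the draft. A human reviews every draft before publication.
from openai import OpenAI

ARCHIVE = [
    {"id": "story-101", "text": "Council approved the new transit levy in March 2024."},
    {"id": "story-102", "text": "The levy will fund 40 electric buses over five years."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank archive items by crude keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        ARCHIVE,
        key=lambda item: -len(words & set(item["text"].lower().split())),
    )[:k]

def draft_answer(query: str) -> str:
    """Build a source-constrained prompt and return an unreviewed draft."""
    sources = "\n".join(f"[{p['id']}] {p['text']}" for p in retrieve(query))
    prompt = (
        "Answer using ONLY the sources below and cite a source id for every claim. "
        "If the sources do not cover the question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    # Human-in-the-loop: the printed draft goes to an editor, not straight to publication.
    print(draft_answer("What does the transit levy pay for?"))
```

In practice you would swap the keyword ranking for a proper search or vector index over the newsroom archive, and keep the editorial review step in front of anything that gets published.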

Mark Riley, CEO, Mathison AI
Hi
AI will be a substantial competitive advantage for those who master it. But most are still struggling with adoption because rolling out a chatbot or copilot isn’t enough to get people using it.
At Mathison AI, we are helping CEOs like you rapidly discover and prototype high-impact AI use cases tailored to your business.
We’re currently working with global and local enterprises to identify areas where AI can drive real operational value — from automation and cost savings to entirely new product ideas.
If you’re exploring AI and want a clear, low-risk way to get started, I’d love to share how we approach it through fast education sessions and hands-on prototyping.
AI and Media and Journalism
What news audiences can teach journalists about artificial intelligence Poynter - Generative AI is revolutionising journalism by enhancing content creation and personalising audience experiences, allowing journalists to focus on deeper reporting. However, this shift also brings challenges related to accuracy and bias, necessitating responsible adoption of AI technologies to maintain trust and editorial standards.
Richard Lui to the news media: Don’t make same mistakes with AI that you did with social media The Daily Mississippian - April 21, 2025 Richard Lui, a veteran news anchor, urges the journalism industry to embrace AI responsibly to maintain ethical standards and control over content, warning that failing to do so may allow tech giants to dominate the space. Drawing from his personal journey of caregiving, Lui highlights the importance of integrating AI with journalistic values and securing exclusive content rights to shape the future of news.
Building a foundation with AI to jumpstart your journalism International Journalists' Network - Incorporating AI tools like Google's NotebookLM and Bellingcat's Smart Image Sorter can significantly enhance journalistic efficiency, enabling deeper reporting while maintaining ethical standards. However, a 2024 survey reveals that many journalists demand transparency regarding AI use in news articles to preserve audience trust.
Media Briefing: From fringe to frontline – AI’s fast-track rise in newsrooms Digiday - April 17, 2025 AI is transforming journalism by supporting reporters with tools for transcription, translation, and content creation, enhancing efficiency while maintaining journalistic integrity. Major organisations like Reuters and The Independent are leveraging AI to streamline workflows and focus on high-value investigative work, reshaping editorial roles in the process.
Central Asian Media Forum Spotlights Journalism at Crossroads: Trust, AI, and Battle for Credibility The Astana Times - April 16, 2025 The second Central Asian Media Forum in Astana brought together leading media figures to explore the evolving landscape of journalism amid AI advancements and declining trust in news. Discussions highlighted the need for integrity, responsible technology use, and the importance of fostering connections between diverse audiences.
World Press Freedom Day 2025 – South Asia Regional Conference Unesco - April 21, 2025 The World Press Freedom Day 2025 Regional Conference in Kathmandu will bring together journalists and advocates from South Asia to address the transformative impact of AI on media, particularly its effects on press freedom, gender dynamics, and the challenges faced by women journalists. With a focus on AI governance and digital safety, the conference aims to enhance journalistic practices while tackling systemic biases and disinformation in an evolving media landscape.
AI Moves to Page One China Media Project - April 18, 2025 Guangzhou's Southern Metropolis Daily has embraced AI with 36 AI-generated covers this year, reflecting a significant shift in journalism amid challenges like declining print circulation. However, concerns about the future of journalists and strict political controls remain, highlighting the tension between technological innovation and authoritarian oversight in China's media landscape.
AI Needs Your Data. That’s Where Social Media Comes In. Top AI companies are increasingly leveraging social networks to access vast user data, enhancing their algorithms and applications while raising ethical concerns about privacy and data ownership. This trend underscores the need for regulatory frameworks to ensure responsible AI development amid intensifying competition in the sector.
Publishers lead the pack in AI adoption for media campaigns EMARKETER - April 21, 2025 A recent IAB report reveals that only 30% of ad industry professionals have fully integrated AI into their campaigns, primarily using it for audience segmentation and inventory forecasting. However, 62% face hurdles such as complex setups and data security risks, and the report urges marketers to assess their AI maturity as adoption rises among agencies and publishers.
OpenAI is working on X-like social media network, the Verge reports Reuters - OpenAI is reportedly developing a social media platform akin to X (formerly Twitter), aiming to enhance user engagement and explore new tech influences. While details on features and launch timelines are still unclear, this initiative could transform how AI technologies interact with social media, potentially improving user experience and content moderation.
Why AI Is the New Social Media TheWrap - April 18, 2025 Mark Zuckerberg has highlighted a notable shift in user engagement on Meta's platforms, with only a small fraction of time spent on friends' posts, prompting the rise of generative AI tools like ChatGPT and Claude. These AI applications are transforming how people discover information online by offering curated content and interactive conversations, moving away from traditional social media feeds.
AI and Academic Publishing
Can artificial intelligence make research more open? Open information exchange, particularly through generative AI, can enhance scientific collaboration and streamline research practices, addressing challenges in data sharing and academic culture. By focusing on researcher-centric solutions, AI can transform the landscape of open science, making it more accessible and equitable while empowering various stakeholders in the research ecosystem.
Scientific journals should not charge to publish response articles Times Higher Education (THE) - April 16, 2025 A recent response article challenging a flawed study on conservation in the Western Canadian boreal forest highlights the significant financial barriers imposed by academic journals, which charge steep fees for publishing critiques. Arguing for a model that encourages free, peer-reviewed responses, the author stresses the need for rigorous discourse to correct scientific errors and enhance trust in research.
Seattle startup Potato lands $4.5M to automate science experiments using AI assistants and robots Geekwire - Seattle-based startup Potato has raised $4.5 million to revolutionize scientific research through fully automated experimental processes, aiming to enhance efficiency and accessibility in experimentation.
Women in Artificial Intelligence You Should Know About TheCollector - April 18, 2025 Explore the dynamic landscape of AI ethics through the insights of influential figures like Emily Bender, Timnit Gebru, Margaret Mitchell and Joy Buolamwini, who challenge the development of large language models and advocate for responsible practices in AI technology. Their work highlights critical issues such as bias, transparency, and the ethical implications of pursuing Artificial General Intelligence.
Canadian authors slam Meta for training AI using 'hugely problematic' program that pirates books: 'We're just the little guys.' Yahoo News - April 15, 2025 Authors K.A. Riley and Heather Grace Stewart have voiced their outrage after discovering their works were used without consent by Meta to train AI models, raising critical ethical and legal issues in the literary industry. As they argue for accountability and fair compensation, the incident underscores growing concerns over intellectual property rights amid the rise of AI-generated content.
This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy