MediaMorph Edition 71 - by HANA News
AI boom or AI bubble?
Was this newsletter forwarded to you? Sign up here
The written-by-a-human bit
There is plenty of debate right now about AI boom vs. AI bubble. The best primer on this is Azeem Azhar's Exponential View blog post; TL;DR: the dashboard is mostly flashing green and amber, and parallels with other bubbles are generally unhelpful.
However, Nvidia's $100 billion funding pledge to OpenAI looks bubbly to me. Vendor financing rarely ends well, and it is potentially anti-competitive.
Our view is that the current numbers don't stack up if AI is purely an efficiency tool: efficiency gains alone are not big enough to justify roughly $0.5 trillion of investment - you would have to fire a quarter of the white-collar workforce. The numbers only stack up if AI is also an enabler, creating new business lines.
AI needs to move out of the efficiency bucket and into the new-product bucket - enlightened companies will innovate with AI to drive incremental revenue. That's where we at Mathison are pushing our more ambitious clients (and ourselves).
For media and publishing companies, this means identifying AI opportunities that align with their current products and brand values. Some ideas that come immediately to mind:
Recipes → build a personalised recipe tool based on who is coming to dinner
Fitness → build bespoke training plans using past articles and fitness goals
Book recommendations → build a tool based on your current reading habits (see our own shelfimage.co.uk)
Travel Section → build a holiday planner in your own brand voice, leaning on previous popular articles
Cybersecurity → allow subscribers to build their own alert tools and niche newsletters based on their own industry (see our own app.hana.news)
All these should be fun, informative, innovative products that are easy to test with small samples. In-house models can then be tuned to stay consistent with your existing brand voice and personality.
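To make the first idea above concrete, here is a minimal sketch of how a publisher might prototype the "who is coming to dinner" recipe tool: filter the existing recipe archive by what each guest avoids, then hand the shortlist to whichever chat model you use, with a system prompt carrying the brand voice. Everything here (the recipe entries, tags, URLs and brand-voice text) is invented for illustration; it is not a Mathison product.

```python
from dataclasses import dataclass, field

@dataclass
class Guest:
    name: str
    avoids: set[str] = field(default_factory=set)   # e.g. {"meat", "nuts"}

@dataclass
class Recipe:
    title: str
    tags: set[str]   # ingredient/dietary tags from the archive metadata
    url: str         # link back to the original article

# Illustrative archive entries; a real build would pull these from the CMS.
ARCHIVE = [
    Recipe("Slow-roast shoulder of lamb", {"meat"}, "https://example.com/lamb"),
    Recipe("Wild mushroom risotto", {"dairy"}, "https://example.com/risotto"),
    Recipe("Charred aubergine with tahini", {"sesame"}, "https://example.com/aubergine"),
]

# Invented brand-voice instructions, for illustration only.
BRAND_VOICE = (
    "You are the food desk of a quality weekend paper: warm, practical, never "
    "snobbish. Recommend a menu only from the shortlisted recipes and always "
    "link back to the original articles."
)

def shortlist(guests: list[Guest], archive: list[Recipe]) -> list[Recipe]:
    """Keep only recipes that every guest can eat."""
    blocked = set().union(*(g.avoids for g in guests))
    return [r for r in archive if not (r.tags & blocked)]

def build_messages(guests: list[Guest], archive: list[Recipe]) -> list[dict]:
    """Assemble chat messages ready for whichever model the publisher uses."""
    guest_notes = "; ".join(
        f"{g.name} avoids {', '.join(sorted(g.avoids)) or 'nothing'}" for g in guests
    )
    recipe_notes = "\n".join(f"- {r.title} ({r.url})" for r in shortlist(guests, archive))
    request = (
        f"Dinner guests: {guest_notes}.\n"
        f"Choose only from these recipes:\n{recipe_notes}\n"
        "Suggest a three-course menu and explain the choices briefly."
    )
    return [
        {"role": "system", "content": BRAND_VOICE},
        {"role": "user", "content": request},
    ]

if __name__ == "__main__":
    # Small sample "who is coming to dinner" test.
    guests = [Guest("Asha", {"meat"}), Guest("Tom", {"sesame"})]
    for message in build_messages(guests, ARCHIVE):
        print(f"[{message['role']}]\n{message['content']}\n")
```

The same pattern - filter your own archive, then let a model write in your voice - carries over to the fitness, books, travel and cybersecurity ideas.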
Twenty years ago, newspapers were asleep at the wheel when classifieds were stolen from under their noses. This time round, the bolder and more ambitious legacy companies have a chance to win.
Mark Riley, CEO Mathison AI
Available for AI keynote speaking engagements - contact [email protected]

Recent speaking engagement: Melbourne Business School, addressing senior editors on the pilot Digital News Academy, June 2025
Hi
AI will be a substantial competitive advantage for those who master it. But most organisations are still struggling with adoption, because rolling out a chatbot or copilot isn't enough to get people using it.
At Mathison AI, we are helping CEOs like you rapidly discover and prototype high-impact AI use cases tailored to your business.
We’re currently working with global and local enterprises to identify areas where AI can drive real operational value — from automation and cost savings to entirely new product ideas.
If you’re exploring AI and want a clear, low-risk way to get started, I’d love to share how we approach it through fast education sessions and hands-on prototyping.
AI and Journalism
Is There a Middle Ground in the Tug of War Between News Publishers and AI Firms? Part 2: Framing Solutions
The conflict between publishers and AI developers over the use of copyrighted material in training models calls for balanced policy solutions. By establishing clear fair use guidelines, creating licensing frameworks, and encouraging collaboration, both parties can achieve fair compensation while fostering innovation in AI technology.

Star Tribune's AI lab transforms breaking news coverage with human-AI collaboration (MediaCopilot Substack, September 29, 2025)
The Minnesota Star Tribune's AI Lab leveraged artificial intelligence to rapidly translate over 150,000 words of a shooter's manifesto following the tragic Minneapolis church shooting, ensuring accuracy through human verification and expert consultation. Their innovative approach combined generative AI with editorial oversight, enabling journalists to draft news stories while efficiently addressing significant translation challenges.

Will artificial intelligence be the death of journalism? (Index on Censorship, September 24, 2025)
Recent discussions on AI in journalism have revealed both its potential benefits and significant risks, with critics warning of the potential for misinformation and erosion of public trust. As generative AI technology improves, experts point out the need for human oversight to ensure ethical reporting and accountability in news media.

Helping protect journalists and local news from AI crawlers with Project Galileo (The Cloudflare Blog, September 23, 2025)
Cloudflare's Project Galileo is expanding to offer free Bot Management and AI Crawl Control services to around 750 participants, including journalists and non-profits, aiming to protect vital civic voices amid challenges posed by AI-driven information sources. This initiative seeks to help news organisations monitor AI crawler interactions, safeguard their content, and negotiate fair compensation, ultimately supporting independent media and quality journalism in an increasingly digital landscape.

AI for Journalists and Content Creators: From Understanding to Application (Poynter)
AI is revolutionising the media landscape, pushing journalists and creators to quickly adapt their workflows and skills to produce high-quality content at unprecedented speeds. As professionals embrace these new technologies, they must also grapple with ethical challenges like misinformation and bias to maintain journalistic integrity.

Free online course for journalists: Use Google AI tools to improve workflow and engage audiences (LatAm Journalism Review by the Knight Center, September 24, 2025)
Join the Knight Center for Journalism in the Americas and Google News Initiative for a free bilingual online course, "Google AI Tools for Journalists," running from Oct. 20 to Nov. 16, 2025. This four-week program equips media professionals with practical strategies to integrate AI into their workflows, enhancing content creation and audience engagement while ensuring ethical practices.

OpenAI Partners with CNA for Ethical AI in Journalism Innovation (WebProNews, September 23, 2025)
OpenAI has partnered with Channel NewsAsia to enhance journalism through AI integration, demonstrating that technology can support rather than replace human oversight. This collaboration aims to address challenges in the media sector while promoting the ethical use of AI amidst concerns about misinformation and journalistic credibility.

CNA is transforming its newsroom with AI (OpenAI)
In a recent conversation, CNA Editor-in-Chief Walter Fernandez discussed the importance of journalistic integrity amid rapid technological changes and the challenge of combating misinformation. He highlighted CNA's commitment to accurate reporting, innovative storytelling, and diverse representation, which fosters informed public discourse and builds trust with audiences.

Editorial: Good journalism matters more than ever in age of AI and fake news (South China Morning Post, September 26, 2025)
World News Day unites over 900 newsrooms worldwide, including the South China Morning Post, to champion fact-based journalism and combat misinformation in an era of technological disruption. As journalists navigate these challenges, their role in delivering reliable information remains crucial for fostering well-informed communities, particularly in places like Hong Kong, where credible news is essential amid rising falsehoods and AI influence.

Why journalists have a competitive edge in the age of AI
Hiring journalists for AI roles can greatly enhance teams, as their critical thinking, research skills, and storytelling abilities enable them to uncover insights and communicate complex data effectively. Their adaptability and ethical lens ensure a well-rounded approach to developing robust AI solutions.
AI and Academic Publishing
ChatGPT is blind to bad science (LSE)
A recent study reveals that ChatGPT often fails to recognise retraction notices in research papers, raising concerns about the accuracy of information it provides. This oversight could lead to the dissemination of outdated or inaccurate scientific findings, underscoring the urgent need for AI systems to incorporate mechanisms that ensure the reliability and trustworthiness of information.

We must set the rules for AI use in scientific writing and peer review (Times Higher Education, September 29, 2025)
Recent studies reveal a troubling rise in AI usage in academic publishing, with significant instances of retractions due to undisclosed AI involvement and concerns over reviewer fatigue. Experts are calling for urgent ethical guidelines to ensure the integrity of research as AI tools increasingly permeate the peer review process.

Wiley Enhances Research Platform with AI Automation
Wiley has enhanced its Research Exchange publishing platform with AI automation to streamline the academic publishing process, featuring AI-powered transfer suggestions for rejected manuscripts and preserved peer review feedback. These innovations aim to improve efficiency while complementing human expertise in scholarly publishing, with over 1,000 journals already benefiting from the platform.

Technology licensing boosts academic productivity at Stanford (Stanford)
A study by Kate Reinmuth reveals that Stanford's technology licensing not only fuels innovation but also enhances academic publishing among inventors, debunking concerns that commercialisation detracts from research. With over 170 patented products generating significant revenue and fostering collaborations, the Office of Technology Licensing showcases the university's pivotal role in translating taxpayer-funded research into public benefit.

Weekend reads: U.S. EPA tells scientists to stop publishing; 'unreliable' Tylenol research; Alzheimer's paper retracted (Retraction Watch, September 27, 2025)
This week at Retraction Watch, we explored critical issues in academic publishing, including the challenges of peer review integrity, the retraction of a controversial study on apple cider vinegar, and ongoing concerns about publication fraud across disciplines. Additionally, discussions on scientific misconduct and the influence of AI on retracted papers highlight the pressing need for reform in research practices.

AI models are using material from retracted scientific papers (MIT Technology Review, September 23, 2025)
Recent studies highlight the troubling tendency of AI chatbots, including ChatGPT, to reference flawed research from retracted scientific papers, raising concerns about their reliability in medical advice and literature reviews. Experts urge the incorporation of real-time retraction data and contextual information to enhance accuracy, while users are encouraged to remain vigilant and conduct their own checks on AI-generated content.

The Future of Content Licensing: How RSL Bridges Publishers and AI (StupidDOPE, September 26, 2025)
The Really Simple Licensing (RSL) initiative is empowering publishers to reclaim control and compensation for their content used by AI companies, offering a structured framework for licensing that promotes fair attribution and revenue opportunities. With support from major platforms and a collective approach, RSL aims to balance the digital publishing landscape while fostering responsible innovation in AI.
This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy