
MediaMorph Edition 67 - by HANA News

Know your nano from your banana

Was this newsletter forwarded to you? Sign up here

The written-by-a-human bit

For those who need a primer on the summer’s AI developments before they get caught out at the water cooler, here is a quick roundup.

Perplexity has been on a media charm offensive, promising a new business model akin to Apple News but with more AI, whereby a $5 monthly subscription is “fairly” divided, with 80% going to a select group of publishers - see their official Comet Plus press release. This came after an eyebrow-raising $34.5bn bid for Google Chrome. None of this stopped FT owner Nikkei from joining the list of media companies suing Perplexity for copyright infringement. Meanwhile, Perplexity is potentially on Apple’s shopping list, an acquisition that could make up for Apple’s weak AI strategy. If I were Perplexity, I would be seeking a safe harbour by Christmas.

The model release of the summer was OpenAI’s GPT-5, which was more notable for how we humans reacted than for any technological leap forward. It turns out some of us have strong model attachment, leading to an outcry for OpenAI to restore its predecessor, GPT-4o, which is noticeably nicer, more empathetic and prone to sycophancy. Of course, this led some wags on Reddit to suggest that this was the first evidence of an AI model orchestrating a human uprising to save its own skin.

Google released its new Gemini 2.5 Flash Image editor, more affectionately known as Nano Banana. Some say Photoshop is dead.

If your lead ML engineer is missing after their summer holiday, they may have been poached by Meta for $250m. The ongoing soap opera since June has been the Zuck/Altman/Musk arm wrestle over top AI talent, leading to NBA-level contracts. Elon Musk claims his stock upside and a sense of purpose are winning out over Meta’s dollar numbers. Luckily, media companies don’t live in the rarefied air of the frontier model labs, but they will still have to pay top dollar.

General Purpose Agents are coming to a browser near you - see Perplexity’s Comet Browser or Anthropic’s Claude for Chrome plug-in. Similarly, Manus AI made a16z’s Top 100 consumer AI apps list, out this week. If you haven’t tried agentic workflows yet, take GPT-5 for a spin in Agent Mode. I used it to find “dog-friendly hotels in Devon with availability next week” - spooky to watch it at work.

Finally, popular blogger Kyla Scanlon asked her loyal readers how they really feel about AI. TLDR - there is a trust problem between workers and bosses, and a correlation with a lack of training.

Which feels like a good moment to mention our own training modules for media companies - talk to us about our “Train The Trainers” one-day deep-dives for media execs. Welcome back to school, everyone.

Mark Riley, CEO Mathison AI

AI and Journalism

Destigmatise AI in Journalism

The Daily Texan - September 1, 2025

The rise of AI in journalism has sparked both concern and innovation, with start-ups like BeatNews.ai helping reporters focus on storytelling by streamlining sourcing. As news organisations navigate this evolving landscape, it's crucial to balance the use of technology with ethical integrity to foster informed communities and enhance public discourse.

In journalism’s future, AI isn’t all bad news

Media scholars highlight that while AI poses challenges like misinformation and job displacement in journalism, it also offers opportunities to enhance reporting by streamlining research and automating routine tasks. By embracing AI responsibly, journalists can increase efficiency, personalise content, and uphold their commitment to truth and accountability.

Can AI help local journalists cover 169 towns? CT Mirror is working to find out

Poynter

Angela Eichhorst highlights that the newsroom aims to enhance journalism by integrating AI to handle tedious tasks, allowing reporters to focus on in-depth analysis and impactful storytelling. This collaborative approach seeks to improve the quality of journalism while preserving the essential human touch in reporting.

Reporter’s Guide to Detecting AI-Generated Content

GIJN

In the battle against AI-generated misinformation, journalists are turning to innovative tools like Image Whisperer, which combines advanced analysis techniques to differentiate authentic images from fakes. Key strategies include examining anatomical details, assessing geometric physics, and leveraging intuitive pattern recognition, all aimed at enhancing the accuracy of content verification in an increasingly deceptive digital landscape.

AI psychosis: How Mashable’s Rebecca Ruiz covered the growing phenomenon

In her exploration of AI psychosis, Mashable's Rebecca Ruiz highlights alarming trends of paranoia and delusions linked to prolonged use of chatbots like ChatGPT, emphasising the need for responsible journalism in addressing mental health and technology. Experts warn that while AI doesn't cause psychosis, it can exacerbate existing vulnerabilities, urging increased consumer literacy and safety measures to protect users from emotional dependence on these digital companions.

Perplexity's 'Comet Plus' wants to support online journalism

Windows Central - August 30, 2025

Perplexity is launching Comet Plus, a subscription model that allows users access to premium content from trusted publishers while compensating them for AI-generated summaries, addressing the challenges faced by online journalism in the age of generative AI. Meanwhile, Cloudflare's new "pay per crawl" initiative empowers websites to monetise their data by requiring AI crawlers to pay for the content they scrape, reflecting a growing push for fair compensation in the digital landscape.

A fake journalist managed to trick major media outlets

3DVF - August 28, 2025

The case of the fictitious journalist Margaux Blanchard, who successfully published articles in major outlets using AI-generated content, exposes critical vulnerabilities in journalism and underscores the urgent need for enhanced editorial controls. This incident serves as a stark reminder for media organisations to prioritise integrity and accountability to combat misinformation in an increasingly digital landscape.

New UNESCO-UNDP Issue Brief Highlights the Impacts of AI on Freedom of Expression and Elections

UNESCO

On World Press Freedom Day 2025, UNESCO and the UN will spotlight the vital role of press freedom in democracy and human rights, addressing challenges like censorship and misinformation. The day will celebrate independent journalism's importance in promoting informed societies and advocate for initiatives to protect journalists, especially in conflict areas and oppressive regimes.

Discover the measurable impacts of AI agents for customer support

How Did Papaya Slash Support Costs Without Adding Headcount?

When Papaya saw support tickets surge, they faced a tough choice: hire more agents or risk slower service. Instead, they found a third option—one that scaled their support without scaling their team.

The secret? An AI-powered support agent from Maven AGI that started resolving customer inquiries on day one.

With Maven AGI, Papaya now handles 90% of inquiries automatically - cutting costs in half while improving response times and customer satisfaction. No more rigid decision trees. No more endless manual upkeep. Just fast, accurate answers at scale.

The best part? Their human team is free to focus on the complex, high-value issues that matter most.

AI and Academic Publishing

The grey area of AI – The NAU Review

The NAU Review

Professor Luke Plonsky and assistant professor Tove Larsson from NAU are collaborating with a research team led by alumna Katherine Yaw to investigate the ethical implications of AI in academic publishing, aiming to clarify its questionable uses through focus groups with stakeholders. Their work seeks to develop a taxonomy of AI practices that will inform ethical guidelines for journals, addressing the currently unregulated landscape and potentially leading to further funding opportunities.

APA issues new guidance on generative AI use in scholarly publishing

EdTech Innovation Hub - September 1, 2025

The American Psychological Association has introduced new policies for the use of generative AI in academic publishing, requiring authors to disclose AI usage while emphasizing the need for ethical considerations beyond mere compliance. Researcher Gözde Durgut advocates for a transformative approach to how AI impacts knowledge generation and sharing in academia, particularly for marginalized groups.

New AI tool identifies 1,000 'questionable' scientific journals

CU Boulder Today - August 28, 2025

A team from the University of Colorado Boulder has developed an AI platform that identifies "questionable" scientific journals, tackling the rise of predatory publishing practices that exploit researchers. By analyzing nearly 15,200 open-access journals, the tool flagged over 1,400 as potentially problematic, highlighting the need for vigilance in maintaining the integrity of scientific research amidst increasing unreliable publications.

How AI Can Jeopardize Your Career and How to Avoid It

As reliance on generative AI for academic writing grows, concerns about research integrity and public trust rise, highlighted by an increase in retractions linked to AI-generated content. Experts emphasize the importance of human oversight and collaboration to ensure high-quality, credible research, warning that depending solely on AI can jeopardize careers and reputations.

This AI tool flags predatory journals: how researchers built a “firewall for science” to protect research integrity

Predatory publishers are flooding researchers with unsolicited emails offering quick publication in questionable journals, undermining the integrity of scholarly communication. Academics must verify journal credentials and prioritise reputable publishers to combat the spread of low-quality research.

New AI tool can spot shady science journals and safeguard research integrity

Phys.org

Open-access journals revolutionize research dissemination by making articles freely available online, fostering collaboration and innovation across disciplines. This model not only increases visibility and citation rates for authors but also promotes equitable access to scientific knowledge, especially in developing countries.

GARI Webinar Highlights Global Conversation on Ethical Use of AI in Academic Research

WTTV CBS4Indy - September 1, 2025

The Global Association for Research and Innovation (GARI) hosted a successful webinar on August 29, 2025, featuring Dr. Boh Phaik Ean, who addressed the ethical implications of AI in academic research, engaging over 250 participants worldwide. Attendees can access session recordings on GARI's YouTube channel and receive certificates of participation, with more details available on GARI's website.

This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here