You wouldn’t let an unknown vendor record your executive meetings, so why trust just any AI?
Most AI notetakers offer convenience. Very few offer true security.
This free checklist from Fellow breaks down the key criteria CEOs, IT teams, and privacy-conscious leaders should consider before rolling out AI meeting tools across their org.
Was this newsletter forwarded to you? Sign up here
A recent report by digital market intelligence company Similarweb makes for grim reading for publishers. The headline number confirms what we are seeing on the ground: since the launch of Google AI Overviews, organic traffic to news publishers has fallen from a mid-2024 peak of 2.3 billion visits to under 1.7 billion. Meanwhile, the "zero-click share" (searches where users get everything they need from the results page) has jumped from 56% to 69% as of May 2025.
The silver lining, if we can call it that, is that news referrals from ChatGPT have increased 25-fold, from just under a million in May 2024 to 25 million in May 2025.
Of course, this GenAI traffic is currently just a drop in the bucket; however, optimists argue that it is better-qualified traffic seeking deeper engagement.
Similarweb also reports that not all GenAI traffic is distributed equally; clear winners are starting to emerge, including Reuters, The New York Post, and Business Insider.
Cue a stampede of startups trying to figure out how this happens and how to influence the results.
One snarky answer is to cut a licensing deal with OpenAI; another is to start bot farms for LLM grooming, as pioneered successfully by Russian propaganda outlets. Failing that, you can always try the litigation route, as reported below, and file an EU antitrust complaint.
I am not convinced anyone yet has a legitimate answer to Generative Engine Optimisation, or GEO as it is becoming known. For a start, prompts are very different to search queries: far more nuanced and personalised. The old SEO tactics don't work if the ten-blue-links page doesn't exist. And how do you tell the difference between a search index crawler, an AI web scraper bot and a bot masquerading as a human? (Cloudflare may have the answer; more on that next week.)
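To see why telling these bots apart is hard, consider the naive first step most publishers take: matching the User-Agent header against known crawler tokens (GPTBot, CCBot, ClaudeBot and the like are real, published identifiers). The sketch below shows that approach and its limit: a User-Agent is trivially spoofed, which is why serious detection (such as Cloudflare's) also verifies IP ranges and behavioural signals. The category names here are illustrative.

```python
# Naive bot classification by User-Agent string alone. A real system
# would also verify the request's source IP against each crawler's
# published ranges, since any client can claim any User-Agent.
SEARCH_CRAWLERS = ("googlebot", "bingbot")
AI_SCRAPERS = ("gptbot", "ccbot", "claudebot", "perplexitybot", "bytespider")

def classify_agent(user_agent: str) -> str:
    ua = user_agent.lower()
    if any(token in ua for token in SEARCH_CRAWLERS):
        return "search-crawler"
    if any(token in ua for token in AI_SCRAPERS):
        return "ai-scraper"
    return "unknown-or-human"  # could still be a bot masquerading as a human

print(classify_agent("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"))
# ai-scraper
```

The third bucket is exactly the open problem in the paragraph above: a scraper that sends a stock browser User-Agent lands in "unknown-or-human", and string matching alone cannot surface it.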
My advice remains the same - beat the platforms at their own game by building in-house conversational answer engines, driving deeper engagement within the articles themselves. I would far rather “talk to The Times”, trained on its archive and personality, than an anodyne, soulless, aggregated answer machine.
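At its core, an in-house answer engine of the kind described above is retrieval over the publisher's own archive plus generation in its own voice. The toy sketch below shows only the retrieval step, using crude word overlap; a production system would use embeddings for retrieval and an LLM for the conversational layer. The archive entries and function names are invented for illustration.

```python
# Toy retrieval step for an in-house answer engine: given a reader's
# question, return the archive article sharing the most words with it.
# Word-overlap is a stand-in for the embedding search a real system uses.
def retrieve(question: str, archive: dict[str, str]) -> str:
    q_words = set(question.lower().split())

    def overlap(text: str) -> int:
        # Count distinct question words that also appear in the article.
        return len(q_words & set(text.lower().split()))

    return max(archive, key=lambda title: overlap(archive[title]))

archive = {
    "Budget 2025 explained": "the chancellor announced new budget measures on tax",
    "AI and the newsroom": "how generative ai is changing newsroom workflows",
}
print(retrieve("what did the chancellor say about tax", archive))
# Budget 2025 explained
```

The point of owning this step is the one made above: the answer is grounded in, and links back into, the publisher's own archive, driving engagement inside the articles rather than on an aggregator's results page.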
Mark Riley, CEO Mathison AI
Looking to build a bespoke, GenAI answer engine? Reply to this newsletter to learn more.
Hi
AI will be a substantial competitive advantage for those who master it. But most are still struggling with adoption because rolling out a chatbot or copilot isn’t enough to get people using it.
At Mathison AI, we are helping CEOs like you rapidly discover and prototype high-impact AI use cases tailored to your business.
We’re currently working with global and local enterprises to identify areas where AI can drive real operational value — from automation and cost savings to entirely new product ideas.
If you’re exploring AI and want a clear, low-risk way to get started, I’d love to share how we approach it through fast education sessions and hands-on prototyping.
AI and Journalism
ChatGPT referrals to news sites are growing, but not enough to offset search declines TechCrunch - July 2, 2025 A Similarweb report highlights a significant decline in organic search traffic to news publishers since Google’s AI Overviews launched, with traffic plummeting from over 2.3 billion to under 1.7 billion visits. While ChatGPT referrals surged from under 1 million to over 25 million, this growth has not compensated for the losses, prompting publishers to explore alternative monetisation strategies through Google's new Offerwall service amidst widespread layoffs and shutdowns in the industry.
Exclusive: Google's AI Overviews hit by EU antitrust complaint from independent publishers Reuters - An independent publishers' group has accused Google of leveraging its dominance in the online search market to prioritize its own services over smaller publishers, thereby limiting their visibility and revenue potential. They are advocating for regulatory scrutiny to foster a fairer digital landscape that supports competition and empowers independent voices in the publishing industry.
Cloudflare Blocks AI Bots from Scraping Web Content Without Permission CJR - Cloudflare has announced that it will block AI bots from scraping website content by default unless they have permission, giving publishers more control over whether and how their work is used to train AI models. The move sharpens the ongoing debate over how AI companies should compensate the content creators whose work they rely on.
This is what happened when I asked journalism students to keep an ‘AI diary’ Online Journalism Blog - July 8, 2025 A recent initiative at Birmingham City University introduced an AI diary for journalism students, enhancing transparency, critical thinking, and engagement with literature while fostering more profound reflections on their creative processes. As students navigated their relationship with AI tools, they developed a more nuanced understanding of the role of technology in journalism, prompting essential discussions about creativity, trust, and the future of the field.
Journalism's AI Evolution: Gannett Taboola.com - July 7, 2025 At the Cannes Lions Festival, industry leaders Adam Singolda, Mike Reed, and Christian Broughton discussed the transformative role of generative AI in journalism, highlighting its potential to enhance storytelling and newsroom efficiency while maintaining trust through original reporting. Their panel introduced DeeperDive, an AI-powered answer engine that integrates with publisher platforms, emphasizing a future where technology empowers journalists rather than replaces them.
WPFD 2025: Exploring the future of journalism in an AI-Driven World Unesco - On World Press Freedom Day 2025, UNESCO highlights the urgent need for ethical guidelines in journalism as AI technologies reshape news production. The organization calls for transparency and collaboration among stakeholders to ensure that AI enhances journalistic integrity while equipping journalists with the necessary skills to navigate these challenges. |
Dietmar Schantin (credit: The Drum)
IFMS Media’s Dietmar Schantin on AI buddies and the future of media The Drum - July 3, 2025 In 2025, media companies are prioritising audience engagement and retention, leveraging AI for personalisation and innovative content delivery while embracing new revenue streams like newsletters, live events, and community experiences. As traditional advertising wanes, a focus on trust and authenticity will be key in building loyalty among younger audiences, redefining the future of media beyond conventional boundaries.
Where do UK ‘decision makers’ get their news? Publishers, social media and AI
Press Gazette - July 3, 2025
A recent Portland survey reveals that 47% of UK adults use AI tools like ChatGPT for news, with usage spiking to 81% among decision makers. However, trust in these tools is low, as only 4% consider them trustworthy, compared to 44% who trust traditional news websites.
Law360 mandates reporters use AI “bias” detection on all stories Law360 has introduced a groundbreaking policy requiring all stories to undergo AI-driven bias detection before publication, aiming to enhance fairness and integrity in legal journalism. This initiative highlights the increasing reliance on artificial intelligence by media organizations to elevate editorial standards and combat bias in news coverage.
Axios Event: Media leaders share how AI is transforming the industry Axios - At the Cannes International Festival of Creativity, publishers are rethinking their strategies in light of rapid AI advancements, focusing on innovation to enhance content creation and audience engagement while staying true to traditional values. This evolving landscape highlights the need for partnerships and adaptability to remain competitive and relevant in the industry. |
AI and Academic Publishing
The Use of Generative Artificial Intelligence (AI) in Academic Research: A Review of the Consensus App Cureus - July 4, 2025 This review examines the Consensus app, a generative AI tool that searches and summarises peer-reviewed literature in response to research questions. It assesses the app's potential to accelerate evidence gathering in academic research, while noting its limitations and the continued need for human verification of AI-generated summaries.
Navigating the Shifting Tides of Scholarly Publishing: Investment Opportunities in a Post-Funding-Cut Era Ainvest - July 3, 2025 The scholarly publishing landscape faces upheaval with proposed NIH budget cuts potentially reducing publication output, impacting major players like Elsevier and Springer Nature. As AI reshapes research workflows and APCs rise, investors should focus on companies with hybrid revenue models, such as Wolters Kluwer and Taylor & Francis Group, to navigate these challenges effectively. |
Massive study detects AI fingerprints in millions of scientific papers Phys - A large-scale analysis of millions of biomedical abstracts detected characteristic shifts in word choice that point to widespread, often undisclosed, use of large language models in scientific writing. The findings raise fresh questions about authorship, disclosure and research integrity in academic publishing.
Transformations in academic work and faculty perceptions of artificial intelligence in higher education Frontiers - July 7, 2025 This narrative review examines the transformative impact of artificial intelligence (AI) on higher education, focusing on faculty perceptions, adoption barriers, and ethical considerations. It emphasizes the need for comprehensive training and institutional support to foster responsible AI use while addressing concerns about academic integrity and equity.
Researchers Use Hidden AI Prompts to Influence Peer Reviews: A Bold New Era or Ethical Quandary? Researchers are controversially using hidden AI prompts to influence peer review outcomes in academic publishing, raising ethical concerns about the integrity and objectivity of the process. While some believe this could help elevate underrepresented voices, critics warn it may compromise the trustworthiness of scholarly work. |
Paperpal Champions Accessibility and Inclusivity in Academic Writing Prnewswire - July 1, 2025 Paperpal, the AI writing assistant from Cactus Communications, has become one of the first academic platforms to comply with the European Accessibility Act and WCAG 2.1 Level AA standards, enhancing inclusivity for over 2 million academics. Additionally, CACTUS has announced a partnership aimed at improving the accessibility of research content, ensuring valuable insights reach a broader audience and fostering a more informed society. |
Springer Nature Book Contains Fabricated Citations WebProNews - July 7, 2025 Springer Nature is facing a credibility crisis after an investigation revealed that nearly 40% of the citations in the book "Mastering Machine Learning" were fabricated or misattributed, raising serious concerns about editorial oversight. With nearly 3,000 articles retracted in 2024 and the rise of AI-generated content leading to further issues, the company must enhance its quality control measures to restore trust in academic publishing. |
Hidden AI prompts in academic papers spark concern about research integrity The Japan Times - July 4, 2025 Researchers, including those from Waseda University, have been caught embedding hidden prompts in academic papers to manipulate AI-assisted reviewers into giving favorable feedback, highlighting serious integrity concerns within the peer review process. This unethical practice was found in 17 papers across 14 universities and underscores the urgent need for reform in academic publishing standards. |
Safeguarding Scientific Integrity in the Age of AI Psychology Today - March 21, 2025 The article explores the erosion of public trust in science due to perceived selective data presentation, emphasising the necessity for scientific integrity through transparency and the acknowledgement of contradictory evidence. It highlights the importance of rigorous analysis in both scientific inquiry and AI systems to maintain credibility and prevent the misuse of information for political purposes. |
This newsletter was partly curated and summarised by AI agents, which can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy