Was this newsletter forwarded to you? Sign up here
The prize for the most entertaining and vociferous anti-AI long read so far this year (and I have read most of them) goes to 404 Media for their eviscerating piece, The Media's Pivot to AI Is Not Real and Not Going to Work. Jason Koebler executes a brutal takedown of any media exec looking to adopt AI:
“Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.”
He concludes: “the only thing that media companies can do in order to survive is to lean into their humanity, to teach their journalists how to do stories that cannot be done by AI, and to help young journalists learn the skills needed to do articles that weave together complicated concepts and, again, that focus on our shared human experience, in a way that AI cannot and will never be able to.”
While I agree with the sentiment, I don’t believe it is that binary — I think there is space to coexist with AI, once we have determined which tasks to delegate.
On the other side of the argument, Aron Solomon makes this case on Crunchbase: “Let The Bots Feast: Why Media Should Embrace The Great AI Scrape”
“Publishers should be working to collaborate with AI platforms, not wall them off. Embed metadata that tells LLMs who wrote the piece. Build deals that prioritize your bylines, link back to your coverage, and let your headlines flow through these models with attribution. Become the signal in a sea of synthetic noise.”
Aron makes a compelling argument for going all in and meeting readers where they will be — in a bot-friendly AI universe.
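Aron's "embed metadata" suggestion can be made concrete: schema.org's NewsArticle JSON-LD is one widely used way to declare a byline in machine-readable form. A minimal sketch follows (not from the article; the function name and example values are illustrative):

```python
# Sketch: generate a schema.org NewsArticle JSON-LD block that credits the
# byline, suitable for inclusion in a page's <head>. Illustrative only.
import json

def attribution_jsonld(headline: str, author: str, url: str) -> str:
    """Build a JSON-LD <script> block naming the author of a piece."""
    payload = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "url": url,
    }
    return (
        '<script type="application/ld+json">\n'
        + json.dumps(payload, indent=2)
        + "\n</script>"
    )

# Example values are placeholders, not a real article record.
print(attribution_jsonld(
    "Let The Bots Feast",
    "Aron Solomon",
    "https://example.com/let-the-bots-feast",
))
```

Whether any given LLM crawler honours such markup is, of course, exactly the bet Aron is asking publishers to make.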
While I remain somewhat conflicted and cautious about AI dangers, I also adhere to the maxim that “Big Tech Always Wins” — a lesson learned from King Canute. Navigating between these two articles and their opposite points of view is the knotty challenge for today’s beleaguered media execs.
“Generate a cartoon of King Canute telling the tide not to come in, where the tide represents AI” - ChatGPT 4o
Mark Riley, CEO, Mathison AI
Hi! Not sure where to start? We can help with a bespoke AI audit, benchmarking, use case triage and roadmap.
AI and Journalism
The Media's Pivot to AI Is Not Real and Not Going to Work (404 Media, July 14, 2025): The media industry faces a "traffic apocalypse" as reliance on AI for content generation threatens traditional journalism, with significant declines in visitor numbers and layoffs despite claims of increased efficiency. Tools like Dispatch and Assembly aim to enhance communication and information processing, but the focus must remain on preserving the human element in reporting.
Let The Bots Feast: Why Media Should Embrace The Great AI Scrape (Crunchbase): The media industry is facing a critical challenge from AI advancements that threaten traditional journalism and publishing models, raising concerns about job security for writers and the quality of information produced. As publishers struggle to adapt, they must innovate to engage audiences and compete in an increasingly automated landscape.
The struggle over AI in journalism is escalating (Blood in the Machine, July 18, 2025): A recent incident at Politico highlights the challenges journalists face with AI, as an unannounced tool generated erroneous content, raising concerns over editorial standards and job security. Amidst rising frustrations, a proposed New York bill seeks to ensure transparency and human oversight in AI usage within newsrooms, emphasising the need for ethical standards in technology adoption.
AI is polluting truth in journalism. Here’s how to disrupt the misinformation feedback loop: A seasoned journalist with global experience in artificial intelligence reporting delves into the transformative potential and ethical dilemmas of AI across various sectors. Through interviews with key figures in tech, healthcare, and finance, they aim to illuminate the technology’s promises and challenges for society.
What U.S. audiences want newsrooms to disclose about AI use (The Journalist's Resource, July 14, 2025): In a recent survey presented at the IRE conference, Trusting News founder Joy Mayer revealed that 94% of news audiences demand transparency about AI's role in journalism, emphasising the need for tailored disclosures that integrate seamlessly into storytelling. As AI tools become more commonplace, the expectations for disclosure may shift, reflecting growing familiarity with these technologies.
Journalists urged to embrace AI, carefully: SABEW panel highlights opportunities, risks and ethics in newsrooms (The Reynolds Center, July 16, 2025): During a recent SABEW panel, experts Kylie Robison and Ben Welsh discussed the ethical implications of using AI in journalism, emphasising the need for careful verification of AI-generated content to maintain integrity. They highlighted the balance between leveraging AI for efficiency and upholding transparency and trust with audiences, cautioning against potential biases and privacy risks.
How do journalists at KSHB 41 use artificial intelligence in our workflows? (KSHB 41 Kansas City News, July 15, 2025): KSHB is leveraging AI to enhance journalism while prioritising transparency and editorial integrity, ensuring audiences are informed of AI's role in their stories.
Used properly, AI can help produce good journalism: At the recent Mississippi Press Association convention, journalists discussed the evolving role of AI in news reporting, highlighting its efficiency in tasks like transcription while emphasising that it should complement, not replace, human journalists.
The rise of AI art is spurring a revival of analogue media (The Economist, July 17, 2025): Discover the nostalgic charm of Torn Light Records in Chicago, where jazz fills the air and classic albums like Talking Heads' "Remain in Light" take center stage. Opened in 2024 after relocating from Cincinnati, this record shop is part of a vibrant community on Milwaukee Avenue, reflecting a cultural resurgence in the love for vinyl records and tangible media amidst a digital age.
What journalists and the public think of journalism and technology (The Journalist's Resource, July 17, 2025): A recent survey by the Center for News, Technology & Innovation reveals a notable gap in perceptions of journalism between journalists and the public, highlighting regional differences in understanding the distinction between journalism and news. As digital technologies transform the media landscape, both groups recognize the importance of education and transparency in building trust and adapting to evolving challenges in the industry.
AI and Academic Publishing
Scientific publishing needs urgent reform to retain trust in research process (The Guardian, July 20, 2025): The dysfunctions in scientific publishing, driven by excessive publication incentives and profit-driven publishers, are exacerbated by the rise of AI-generated content, which threatens to erode trust in science. To combat these issues, a comprehensive review of the academic publishing ecosystem is needed to prioritize quality, reform assessment practices, and promote equitable research dissemination, with findings expected to be shared in the autumn.
Researchers embed secret messages to fool AI reviewers (Qazinform.com, July 17, 2025): A tactic called prompt injection is being used by some authors to covertly influence AI evaluations of their academic papers, embedding hidden instructions that highlight strengths and obscure weaknesses. This manipulation has been found in research from 44 institutions across 11 countries, sparking investigations amid debates about its overall impact on AI-assisted reviews.
AI has a hidden bias against non-native English authors, study finds (The Brighter Side of News, July 18, 2025): A recent study reveals that AI text detectors often misidentify non-native English writing, risking unjust rejections of valuable research and stifling academic diversity. The findings highlight the urgent need for improved algorithms that better accommodate linguistic variations to ensure fair treatment for all authors in the publishing process.
Architectural features of the DeepInnovationAI dataset. - https://www.nature.com/articles/s41597-025-05518-3
A Global Dataset Mapping the AI Innovation from Academic Research to Industrial Patents (Nature, July 18, 2025): Introducing DeepInnovationAI, an extensive dataset designed to enhance AI patent classification through advanced techniques like BERT and hypergraph modeling, aiming to bridge the gap between academic research and practical innovation. This comprehensive resource integrates over 3.5 million academic papers and patents, paving the way for more effective tracking of AI advancements and their transformative impact on technology.
Copyright, AI, and the Future of Internet Search before the CJEU (Verfassungsblog, July 17, 2025): The landmark case of Like Company v Google in the EU will address crucial copyright issues surrounding AI-generated content, particularly as it relates to the unauthorized use of press publishers' material by generative AI models like Google's Gemini. This legal battle could redefine the balance between protecting copyright holders' rights and fostering innovation in AI technology.
AI, Error, and Academia: The Case of “Vegetative Electron Microscopy” (Orfonline.org, July 15, 2025): The emergence of the nonsensical term "vegetative electron microscopy", stemming from a retracted paper and highlighting flaws in academic vetting, underscores the potential dangers of AI in research publishing. As pressure mounts to publish, the rise of hyperprolific researchers raises concerns about integrity, but could also lead to improved scrutiny and reform in academic practices.
Why Academic Publishing Must Change (Biopharma from Technology Networks, July 16, 2025): The academic publishing landscape is evolving towards greater accessibility and transparency, with calls for improved peer review processes, open access models, and recognition of diverse contributions in research. Key figures advocate for valuing negative results and implementing innovative publishing practices that reflect the complexities of modern science, aiming to enhance scientific progress and data interpretation.
Scientist Sleuths Detect Significant Use of AI in Journal Papers: Kobak et al. raise critical concerns about the reliability of large language models (LLMs), highlighting their tendency to fabricate references and generate misleading information, which poses risks in academic and professional settings. They advocate for enhanced verification mechanisms and caution users against blindly trusting LLM-generated content.
It’s time we moved the generative AI conversation on (HEPI, July 16, 2025): As generative AI becomes prevalent in higher education, with 92% of undergraduates using it, universities must transition from risk-averse policies to a comprehensive curriculum design that fosters AI literacy and ethical engagement. By integrating AI meaningfully into assessments and involving students in shaping educational practices, institutions can enhance learning experiences and prepare students for the evolving academic landscape.
This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy