Was this newsletter forwarded to you? Sign up here
There was an illuminating moment when Mark Zuckerberg and his entourage visited News Corp, WSJ and Fox News at my office in NYC about eight years ago. While being shown around, Zuckerberg was genuinely surprised to learn that the news division and the opinion section were independent entities with separate reporting lines.
At about the same time, Zuckerberg’s Harvard dorm mate and fellow Facebook founder, Chris Hughes, was struggling to contain an editorial revolt at The New Republic, where his West Coast Silicon Valley approach had led to a full-blown crisis.
I mention these to flag an ongoing and persistent mismatch between tech and media cultures. The allure of progress and convenience, plus the seduction of VC money and power, crash up against the fragility of truth and trust.
Again and again, legacy media mastheads are at the mercy of the tech barons who own and direct the digital traffic on which they depend. For editors and proprietors, the terrifying question is “do they understand us, do they care?”
Which brings us to Aravind Srinivas, CEO of Perplexity AI, a 31-year-old Indian computer scientist who recently made a paper bid of $34.5 billion for Chrome (roughly double Perplexity's current valuation). Cynics might suggest this is simply a PR stunt, designed to boost Perplexity's already ballooning brand. But Perplexity has also been accused of hoovering up and caching paywalled content to power its popular answer engine. Which raises the question: does the scientist Srinivas understand or care about the creatives and content creators?
This was a question put to him by Casey Newton and Kevin Roose on the NYT's Hard Fork podcast last week.
His answer was perplexing:
“Well, here's the thing.
There are two aspects here. One is like you're talking about the creators. No, there are like two types of creators. People who are actually really good. Like, for example, when you guys write something, people care.
And then there's lots of spammers and, you know, hacksters who just write like erroneous blog posts, erroneous content, like fake information, clickbait articles. I don't think it actually empowers the user, right? You're only talking about the creator, but you have to consider the user as well.
And so for the first time, AI is in the hands of users through like agents that actually go and do stuff for them that take into account their instructions and protect them from all the spam. So we want to figure out a model that works for the user and the creators together. And penalize the bad creators and incentivize the good creators to just focus on wisdom and knowledge and truth and interesting stuff.
And by the way, even in a world where agents are doing all this stuff for people, the humans are still going to continue browsing the web. There are people who believe the web is going to be completely agentic. You don't even need a browser. Browsers are so 1990s. I don't believe that. If we believe that, we would never even launch a browser. We would just continue with the chat UI. So we believe people are still going to be browsing and surfing interesting things on the web. But we think that you should give users the power to decide how they want to do it. And first time have an AI that can protect them against spam and hacks. Now, how to monetize this, how to give the creators the right incentives here? We are going to announce something to that effect where publishers can be incentivized for creating interesting, good content. We think about it in two ends of the spectrum. One is like completely human-centric like Apple News, which is a pretty good model. And the other is just buying the content and training your models, like the licensing deals that Open AI has done with the Wall Street Journal. I think you want to be somewhere in between where you do want to say, okay, there's going to be some elements of AI here. It's not just going to be humans. And so you don't want to just build an Apple News-like model, but it's going to be closer to Apple News, with some protections for users to say they can have AIs also read those articles. And the publishers get rewarded. So that's how I'm thinking about it…
…My first point, which is basically that if you can delegate the boring things, the things that you don't want to be doing to the AI, you're just going to spend time surfing on things you actually want to be doing, actually want to be reading.
And then that puts the incentive on the creator to actually create really interesting, high-quality stuff. You can even charge even higher because people have more time. So if they're going to come to you, they're coming to you out of their own will. So they'll be willing to pay for it even more.
Now, there are a lot of unknown unknowns here on how it's actually going to roll out. But my belief is that the ones who have built a reputation and a brand for saying correct things... that stand the test of time are going to be able to charge even more for their content.”
So, stand by for an announcement about an AI-powered aggregator that handles the boring stuff and weeds out the spam, leaving us humans free to browse, and pay even more, for the interesting stuff. Perplexity to the rescue! It sounds well-intended, and at least Srinivas is thinking about it. But there are so many questions about how this will actually happen, and given his track record, does he really understand, or really care?
Mark Riley, CEO Mathison AI
Hi
At Mathison AI, we are firm advocates for media companies building their own subscriber-facing chatbots for deeper engagement.
The FT, Aftonbladet, Forbes, The Washington Post and Rappler are all leading the way.
We understand the need for RAG-based training, guardrails and guidelines. We build smart AI with the inbuilt humility to admit when it doesn't know the answer.
Speak to us today about testing your archive in a safe environment.
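The "inbuilt humility" Mathison describes can be sketched as a retrieval pipeline with a refusal threshold: if nothing in the archive matches the question well enough, the bot says so rather than guessing. The sketch below is purely illustrative and assumes nothing about Mathison's actual implementation; retrieval here is naive keyword overlap, where a real system would use embeddings and an LLM, and all names and thresholds are hypothetical.

```python
# Illustrative sketch of a RAG-style answer flow with an "admit ignorance"
# guardrail. Not Mathison's actual system: retrieval is naive keyword
# overlap, and the archive, names, and threshold are all made up.

ARCHIVE = {
    "doc1": "The FT launched a subscriber chatbot trained on its archive",
    "doc2": "Rappler uses retrieval guardrails to keep answers grounded",
}

def score(query: str, text: str) -> float:
    """Fraction of query words that also appear in the document text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def answer(query: str, threshold: float = 0.5) -> str:
    """Return the best-matching passage, or admit we don't know."""
    best_doc, best_score = max(
        ((doc, score(query, text)) for doc, text in ARCHIVE.items()),
        key=lambda pair: pair[1],
    )
    if best_score < threshold:
        return "I don't know the answer to that."
    return f"Based on {best_doc}: {ARCHIVE[best_doc]}"
```

The key design choice is the threshold check before answering: an off-topic question falls below it and triggers the refusal, which is what keeps a subscriber chatbot from hallucinating beyond its archive.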
AI and Journalism
I Recently Graduated With a Degree in Journalism — Here's How I'm Navigating My Career in the Age of AI Success - Recent journalism graduate Sarah Thompson is embracing the impact of artificial intelligence on media, aiming to specialise in data journalism by mastering AI tools for analysis and visualisation. She seeks internships at innovative media organisations to enhance her storytelling skills while blending traditional journalism with cutting-edge technology. |
Why disclosing AI use is essential for newsrooms to maintain audience trust International Journalists' Network - Recent research highlights public scepticism about AI in journalism, with 94% of respondents advocating for transparency and clear policies on its use. Experts stress the importance of ethical considerations and audience engagement to build trust, recommending straightforward disclosure practices for AI-assisted content. |
Half of journalists use generative AI, new survey shows POLITICO - August 14, 2025 A recent survey of 286 journalists in Belgium and the Netherlands reveals that over half are incorporating generative AI tools like ChatGPT into their work, primarily for tasks such as automated translation and interview transcription. However, despite the efficiency gains, there is widespread scepticism, with many believing these technologies will exacerbate fake news and undermine trust in journalism. |
AI-powered reach. Mission-driven journalism: How to grow your audience without losing your purpose Editor and Publisher - August 11, 2025 Discover how The Seattle Medium, a Black-owned newsroom, successfully enhanced its digital presence and community engagement through innovative AI tools, all while staying true to its foundational values. Join us for an exclusive look at their journey and learn strategies for integrating technology into journalism without losing mission-driven focus. |
National Association of Black Journalists come to Cleveland amid concerns with AI, DEI cuts and layoffs at newsrooms – The Land The Land - August 14, 2025 The National Association of Black Journalists celebrated its 50th anniversary in Cleveland, drawing 3,096 attendees—a notable decline from last year's 4,336 in Chicago—amidst challenges like rising travel costs and a shifting industry landscape. Leaders emphasise the urgent need for training the next generation of journalists to adapt to digital demands while advocating for greater representation and inclusive coverage in the media. |
SOS: Who will throw fact-checked reporting a life raft? AI is transforming journalism by streamlining tasks and uncovering insights from vast data, enabling journalists to focus on deeper reporting. However, as this technology evolves, it raises important questions about accuracy, bias, and the essential human element in storytelling. |
Breakfast With ChatGPT: Three Workers, One Morning, A Different AI Story Gizmodo - August 10, 2025 At the NABJ convention, while many journalists voiced concerns about AI threatening their profession, a conversation with restaurant staff revealed a contrasting perspective; younger workers embraced AI as a valuable tool for efficiency, while older employees remained cautious yet open to its integration. This highlights a quiet revolution in how AI is being utilised in everyday tasks, focusing on practical benefits rather than fear-driven narratives. |
Untitled Article Kat Rowland, owner of Bay City News, discussed the positive role of AI in newsrooms at the "AI is Here" conference, highlighting its effectiveness in story selection while maintaining journalistic integrity by avoiding generative AI. Additionally, the launch of the Police Records Access Project by KQED and LAist aims to enhance transparency through public access to police misconduct files, showcasing collaboration among over 30 news outlets. |
AI plays major role in crypto journalism but cannot replace humans Digital Watch Observatory - August 12, 2025 A recent Chainstory report reveals that 48% of 80,000 analysed crypto news articles mention AI use in 2025, with Investing.com and The Defiant leading in AI-generated content. While AI aids in research, experts stress the importance of human reporters to ensure authentic storytelling and maintain reader engagement. |
Google AI Overviews Slash News Traffic by 79%, Endanger Journalism WebProNews - August 17, 2025 Google's AI-generated summaries are significantly impacting news publishers, with some facing up to a 79% drop in referral traffic, raising urgent concerns about the future of journalism and fair compensation for content creators. As the industry grapples with these challenges, calls for regulatory changes and new revenue models grow louder, highlighting the critical need to balance innovation with the preservation of diverse journalistic voices. |
The Future of Quality Journalism: Sustaining Legacy Media in the Digital Age Ainvest - August 17, 2025 The evolving media landscape presents unique investment opportunities in ethical journalism and advanced technologies, with AI and subscription models leading the charge against misinformation. Institutional investors can capitalise on this transformation by supporting innovative platforms that prioritise transparency and trustworthiness in news reporting. |
Can Journalists, YouTubers, and Writers Escape the "Extinction Wave" with the Arrival of AI? 36kr - The rise of AI is reshaping journalism, with 57% of journalists fearing job replacement as automated content generation takes hold. Innovations like Perplexity's structured search and Particle's AI-driven news app are transforming content discovery and consumption, leading to a new era where journalists become "information architects" while AI handles data organisation and reporting. |
In 2 years you will be working for AI
Or an AI will be working for you
Here's how you can future-proof yourself:
Join the Superhuman AI newsletter – read by 1M+ people at top companies
Master AI tools, tutorials, and news in just 3 minutes a day
Become 10X more productive using AI
Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.
AI and Academic Publishing
AI-based fake papers are a new threat to academic publishing Times Higher Education (THE) - August 15, 2025 As an associate editor, I've observed a troubling rise in AI-generated manuscripts that mimic existing research while evading plagiarism detection, often submitted by authors using fake affiliations and emails. This trend poses significant risks to scientific integrity, prompting the need for stronger safeguards, such as AI-authorship detection tools and enhanced training for editorial teams to recognise these deceptive submissions. |
Academic publishers and AI – one year on The ongoing debate in the publishing industry highlights the urgent need for fairer income distribution as authors face declining earnings amid the rise of generative AI. The Society of Authors advocates for equitable contracts, urging authors to seek tailored advice and negotiate better terms in light of publishers increasingly viewing them as mere content providers. |
AI for Scientific Integrity: Detecting Ethical Breaches, Errors, and Misconduct in Manuscripts The rise of Generative AI in scientific writing brings both advantages, like efficient manuscript drafting, and challenges, such as undisclosed authorship and content manipulation. This article explores strategies to uphold research integrity, including detecting unethical GenAI use and utilising AI tools to identify errors in published studies. |
Opinion | AI Writing Disclosures Are a Joke. Here’s How to Improve Them. The use of AI in academic writing, often noted by the disclaimer "AI was used to improve clarity and grammar," raises concerns about authenticity and originality. Scholars seek greater transparency regarding AI's role, as vague disclosures may undermine research credibility and dilute critical thinking. |
AI Assignment Agents TrendHunter.com - August 19, 2025 Grammarly has introduced eight specialised AI agents, including the innovative AI Grader, which helps students predict grades and refine their writing based on tailored feedback. This advancement highlights the growing role of AI in education, enhancing learning experiences and improving academic writing through personalised support. |
The Growing Problem of Scientific Research Fraud Inside Higher Ed - August 12, 2025 A recent investigation led by researchers at Northwestern University and Jennifer Byrne's team reveals a troubling rise in research fraud, particularly linked to paper mills producing fake studies, with estimates suggesting that only 1-10% of fraudulent papers are detected. As the prevalence of these unethical practices grows, experts emphasise the urgent need for transparency, accountability, and proactive measures within the scientific community to preserve the integrity of research. |
Omnivore Presents: SciDish | August 2025: Research Integrity in Scholarly Publishing IFT - The pressure to publish in academia, often summarised as "publish or perish," can lead to unethical practices that jeopardise research integrity and public trust. As the push for quantity over quality grows, the academic community is advocating for reforms that prioritise responsible research practices and meaningful contributions. |
The Prestige Monopoly Driving Scientific Publishing Extortion Medium - August 13, 2025 The current scientific publishing system paradoxically locks away publicly funded research, forcing taxpayers to pay high prices for access to knowledge generated by scientists who often give up their copyrights without compensation. This raises critical questions about the fairness and sustainability of how we communicate scientific findings. |
This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy