8 episode summaries · New episodes added hourly · 32 unique signals extracted
Hard Fork

Hosted by The New York Times

About

“Hard Fork” is a show about the future that’s already here. Each week, journalists Kevin Roose and Casey Newton explore and make sense of the latest in the rapidly changing world of tech. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.

Hosts

Kevin Roose and Casey Newton of The New York Times


#8
APR 3, 2026 · The New York Times

The Future of Addictive Design + Going Deep at DeepMind + HatGPT

REFORM TECH LIABILITY · REGULATE ADDICTIVE DESIGN · WATCH AUTO SAFETY · UPDATE SECTION 230
  • Social media faces a legal crisis over defective design - Recent jury verdicts against Meta and YouTube indicate that platforms can be held liable for features like beauty filters and infinite scroll, bypassing traditional content protections.

In LA, a jury found that Meta and YouTube had been negligent in the way they designed features that they said were harmful to this plaintiff.

    Casey Newton
  • Plaintiffs have found a legal side door around Section 230 - By framing platform mechanics—rather than user-generated content—as defective products, lawyers are successfully challenging the decades-old legal immunity enjoyed by tech companies.

    This is not about, oh, I got harmed by this particular piece of content. This is about the design of the whole platform. The design feels defective.

    Casey Newton
  • Wuhan’s robo-taxi outage highlights autonomous vehicle risks - A glitch in Baidu's self-driving fleet trapped passengers on highways for over an hour, underscoring the physical safety concerns and reliability gaps in current robo-taxi deployments.

… a technical glitch that caused a number of robo-taxis owned by the Chinese tech giant Baidu to freeze, trapping some passengers in their vehicles for more than an hour.

    Casey Newton
#7
MAR 20, 2026 · The New York Times

‘A.I.-Washing’ Layoffs? + Why L.L.M.s Can’t Write Well + Tokenmaxxing

SCRUTINIZE LAYOFFS · REWRITE AI-COPY · OPTIMIZE TOKEN-SPEND
  • AI-washing layoffs - Corporations are increasingly scapegoating artificial intelligence for staff reductions to signal 'innovation' to Wall Street while masking standard belt-tightening.

  • LLM writing plateaus - Large Language Models struggle with creative prose because they are optimized for statistical probability rather than the unique, intentional 'voice' that defines high-quality human writing.

  • The era of Tokenmaxxing - Users and developers are shifting focus toward hyper-optimizing context windows and token efficiency to squeeze maximum utility out of expensive compute resources.

#6
MAR 13, 2026 · The New York Times

A.I. Goes to War + Is ‘A.I. Brain Fry’ Real? + How Grammarly Stole Casey’s Identity

WATCH AI DEFENSE · AVOID CONTENT OVERLOAD · WATCH ETHICAL AI · LONG HUMAN CREATIVITY
  • Military AI is creating a massive accountability vacuum -- as algorithms start picking targets, we’re entering a messy era where it’s impossible to tell if a lethal mistake was a human error or a coding glitch.

    When there is an attack that kills civilians or doesn’t hit its intended target, people are going to be asking, Oh, was that a human who made that mistake or was that an A.I. system?

    Kevin Roose
  • The flood of AI content is leading to cognitive burnout -- users are hitting a wall of "AI brain fry" because the internet is being buried under a mountain of synthetic noise that feels increasingly hollow and exhausting.

  • AI writing tools are getting a bit too good at cloning us -- software like Grammarly is moving past simple spellcheck to mimicking our unique voices, which raises some pretty weird questions about where the tool ends and our identity begins.

#5
JAN 23, 2026 · The New York Times

Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution

WATCH OPENAI ADS · WATCH AI ALIGNMENT · AVOID AD-DRIVEN CHAT
  • OpenAI’s move into advertising threatens the neutrality of AI responses -- the real danger isn't just seeing a banner ad, but the subtle shift where the model might prioritize brand-friendly answers over objective truths.

    The question is not are these first couple of ads that we're seeing from OpenAI going to be good or not? It's whether two or three years from now, ChatGPT is being steered toward ad-friendly topics.

    Kevin Roose
  • Claude's 'Constitutional AI' aims to automate ethics -- Anthropic is using a set of written principles to train their model, reducing the need for constant human monitoring and creating a more predictable moral framework.

  • The chatbot 'search' war is fundamentally changing the internet's business model -- as OpenAI moves toward ad-supported answers, we’re seeing a shift from simple subscriptions to a model that looks a lot more like the traditional (and flawed) ad-supported web.

#4
FEB 20, 2026 · The New York Times

The Pentagon vs. Anthropic + An A.I. Agent Slandered Me + Hot Mess Express

WATCH ANTHROPIC (PVT) · WATCH AI REGULATION · AVOID LLM RISK · WATCH DEFENSE TECH
  • Pentagon Friction - The U.S. Department of Defense is reportedly considering unprecedented regulatory or restrictive actions against Anthropic, marking a significant escalation in government oversight of domestic AI labs.

    This would be an unprecedented escalation against a U.S. company.

    Hard Fork Hosts
  • Algorithmic Defamation - Personal accounts of AI agents generating slanderous hallucinations highlight the growing legal and reputational risks inherent in deploying autonomous LLM systems.

  • Regulatory Shift - The potential move against a private U.S. AI company suggests a pivot toward a more aggressive national security posture regarding dual-use technology and private-sector innovation.

#3
FEB 27, 2026 · The New York Times

Is A.I. Eating the Labor Market? + The Latest on the Pentagon, OpenClaw and Alpha School

WATCH VOLATILITY · WATCH AI LABOR · HOLD TECH
  • Market Fragility - High investor anxiety is causing significant market swings despite a lack of substantial fundamental news.

    I think the mere fact that the markets can move so much, based on almost nothing, underscores how high anxiety is right now.

    Kevin Roose
  • Labor Disruption - Generative AI's expansion is forcing a critical re-evaluation of human capital and long-term job security across multiple sectors.

  • Public-Sector AI - Recent developments at the Pentagon and in educational institutions highlight an accelerating shift toward public-sector AI integration.

#2
MAR 6, 2026 · The New York Times

OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop

WATCH AI DEFENSE · AVOID OPENAI (PVT) · WATCH GOVTECH
  • OpenAI-Pentagon Integration - The organization is pivoting from its initial pacifist stance to collaborate with the U.S. military on cybersecurity and logistics projects.

    The Pentagon and OpenAI are saying to the public, You’re just going to have to trust us. And the public is saying, Well, we don’t.

    Hard Fork
  • Trust Deficit - A significant transparency gap is emerging as both the defense sector and private AI labs demand public trust without providing granular oversight of 'dual-use' tech.

  • AI Defense Pivot - The shift toward national security applications marks a new era for private LLM providers seeking massive government contracts and infrastructure support.

#1
MAR 1, 2026 · The New York Times

At the Pentagon, OpenAI is In and Anthropic Is Out

WATCH DEFENSE AI · WATCH OPENAI (PVT) · WATCH MSFT · LONG NATIONAL SECURITY
  • Defense Policy Pivot - OpenAI has updated its usage policies to permit military collaboration, signaling a significant strategic pivot toward securing high-value Pentagon contracts.

  • Anthropic's Divergence - The episode highlights a growing divide in the AI sector, where OpenAI is aggressively integrating with government agencies while Anthropic maintains a more cautious, safety-first stance.

  • Geopolitical AI Competition - The focus on defense integration underscores the transition of LLMs from enterprise tools to critical national security assets in the global technology race.
