
WATCH ANTHROPIC

All podcast episode summaries matching WATCH ANTHROPIC — aggregated across every podcast we track.

4 episodes

AI Podcast News
APR 10, 2026 · The New York Times
  • Anthropic's Mythos model is too dangerous for release

    "Notably, they are not releasing this model to the public because they claim it is too dangerous to do that. Instead, they are giving access to a consortium of tech companies, including Cisco, Broadcom, and other makers of Internet infrastructure, as well as Microsoft, Apple, and Amazon. Basically, every big tech company that is not OpenAI or Meta is getting access to this model, but not general access. Just access to do defensive cybersecurity testing, basically, to go out and harden their systems and their infrastructure and their software before the general public can get its hands on this model."

    — Kevin Roose
  • Mythos found a 27-year-old flaw in OpenBSD

    "One of them was that this model apparently found a twenty-seven-year-old security flaw in OpenBSD. OpenBSD is an open-source operating system that runs on firewalls and routers. It is sort of like a critical security layer on the Internet, and it was designed specifically to be hard to hack. And this model, because of its advanced coding and reasoning capabilities, was able to find this bug that twenty-seven years' worth of professional security researchers had not been able to find."

    — Kevin Roose
  • AI can autonomously chain complex software exploits

    "Alex Stamos said, like, yes. This is a big deal. And he was hoping for a long time that we would see a consortium come together like this because of exactly what you just said, Kevin. The intelligence in these machines and their ability to work autonomously are now great enough that they can chain together exploits that human beings either would never see, would take a long time to see, or would just never get to, because we're limited in ways that these machines are not."

    — Casey Newton
  • Project Glasswing provides defensive access to tech giants

    "You have a new model that you claim is the most powerful model in the world. So instead of selling it, you give $100,000,000 of Claude credits away to a consortium of companies that includes many of your competitors, which is what Anthropic is doing. That is not how I personally would market a spooky new model if I were in the business of marketing spooky new models."

    — Kevin Roose
  • Hard Fork Live returns to San Francisco June 10

    "On June 10 in San Francisco, we are doing the second ever installment of Hard Fork Live. It's happening on June 10 in San Francisco at the Blue Shield of California Theater. Bigger venue than last year. Tickets will be on sale at nytimes.com/events. Not today, but next Friday, April 17."

    — Kevin Roose
Startups & Tech
APR 2, 2026 · Harry Stebbings
  • Anthropic hits $6BN monthly revenue milestone

    "I want to start with, you guessed it, Anthropic: an unbelievable 28-day month of February, where they did $6 billion in revenue, which was more than Databricks has done in its entire lifetime. There was also the accidental leak of Claude Mythos, essentially 3,000 unpublished assets leaked. It's a 10-trillion-parameter model, apparently, with next-level, step-change capabilities that they're not releasing because of how powerful it is."

    — Harry Stebbings
  • AI agents will accelerate data security leaks

    "The faster we vibe code, the faster we ship, the more corners we cut in general on application-level security. It happens. I mean, so many folks are accidentally uploading code to insecure GitHub repos, to databases, to Supabase instances that are by default open. So this is accelerating our data leaking, which is just open on the Internet. And you could say, but God, this shouldn't happen at the Anthropic level. And I'm sure someone will get scolded."

    — Jason Lemkin
  • OpenAI kills Sora to prioritize compute

    "You're seeing the economists, the accountants have wandered into the room, and they said, we have a scarce resource here. Let's optimize it. Let's devote this compute to the people who can pay the most for it. You haven't lived till you've seen an 85% decline in an index. I think shooting Sora in the head is even more significant in terms of what it says about the strategic direction of the company."

    — Rory O'Driscoll
  • Anthropic blames human error for leaks

    "On the cybersecurity leak, it was noteworthy that Anthropic, quote unquote, blamed human error. We may be at the stage where we throw the humans under the bus, not the AI, anymore. Which I think at some level is pretty terrifying. But you know exactly what happened. You often see this where you're about to do a big announcement: you have your content management system, and you stage all the assets, be it their press release."

    — Rory O'Driscoll
  • Autonomous agents will drive massive token consumption

    "The autonomous agents, which I've been talking about, are going to consume orders of magnitude more tokens and change our lives. I'm excited to see what's coming, and open claw was just this brief thing that woke us up to what Anthropic appears to be all in on: truly autonomous agents running 24/7, hopefully safely, hopefully not leaking all of our source code, but it's coming soon."

    — Jason Lemkin
AI Podcast News
FEB 20, 2026 · The New York Times
  • Pentagon Friction: The U.S. Department of Defense is reportedly considering unprecedented regulatory or restrictive actions against Anthropic, marking a significant escalation in government oversight of domestic AI labs.

    "This would be an unprecedented escalation against a U.S. company."

    — Hard Fork Hosts
  • Algorithmic Defamation: Personal accounts of AI agents generating slanderous hallucinations highlight the growing legal and reputational risks inherent in deploying autonomous LLM systems.

  • Regulatory Shift: The potential move against a private U.S. AI company suggests a pivot toward a more aggressive national security posture regarding dual-use technology and private-sector innovation.

    "This would be an unprecedented escalation against a U.S. company."

    — Hard Fork Hosts
