4 episodes tagged (approximate match across all podcasts)

SECURE CORE INFRASTRUCTURE

All podcast episode summaries matching SECURE CORE INFRASTRUCTURE — aggregated across every podcast we track.

4 episodes · Page 1/1

“I think Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people. And we've seen a pattern in their previous releases of, at the same time, they roll out a new model or new model card, something like that. They also roll out some study showing really the worst possible implication of where the technology could lead.”

— David Sacks
Macro Pods
APR 10, 2026 · All-In Podcast, LLC

  • Anthropic blocks Mythos release over security concerns

    “The company realized it would wreak havoc. They ran their own vulnerability testing. They saw that it would allow offensive hacking and people to expose browsers and browser history, expose credit cards, you know, on the Internet. So, you know, what I like about this is they didn't need government to hold their hand on this.”

    — Brad Gerstner

  • Project Glasswing creates a cyber defense coalition

    “Let's spend a hundred days using advanced AI to find and to fix and to harden these software vulnerabilities before hackers exploit them. Now what I think this represents, Jason, is a threshold that we're crossing. Mythos and Spud, which is going to be out from OpenAI any day now, represent the beginning of what I would call AGI models.”

    — Brad Gerstner

  • Anthropic achieves historic thirty billion revenue ramp

    “I think Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people. And we've seen a pattern in their previous releases of, at the same time, they roll out a new model or new model card, something like that. They also roll out some study showing really the worst possible implication of where the technology could lead.”

    — David Sacks

  • AGI models require sandboxing before public release

    “These are models with massive step function improvements and intelligence, and they're just too smart to be released immediately. You know, and by the way, there was nothing that said that every time you finish a model you gotta immediately release it GA. So they set up this idea of sandboxing, building defensive alliances, in order to move away from that regime.”

    — Brad Gerstner

  • OpenClaw faces threats from centralized AI dominance

    “It shows you can trust the industry and market forces in coordination with the government. They were talking to the government about this, but they're not relying on some top down regulation in order to do this. They laid out a blueprint that seems to me very pragmatic that now that we're at this threshold, we're gonna sandbox these things.”

    — Brad Gerstner
Good interview shows
APR 10, 2026 · All-In Podcast, LLC

  • Anthropic blocks Mythos release over security concerns

    “The company realized it would wreak havoc. They ran their own vulnerability testing. They saw that it would allow offensive hacking and people to expose browsers and browser history, expose credit cards, you know, on the Internet. So, you know, what I like about this is they didn't need government to hold their hand on this.”

    — Brad Gerstner

  • Project Glasswing creates a cyber defense coalition

    “Let's spend a hundred days using advanced AI to find and to fix and to harden these software vulnerabilities before hackers exploit them. Now what I think this represents, Jason, is a threshold that we're crossing. Mythos and Spud, which is going to be out from OpenAI any day now, represent the beginning of what I would call AGI models.”

    — Brad Gerstner

  • Anthropic achieves historic thirty billion revenue ramp

    “I think Anthropic has proven that it's very good at two things. One is product releases. The second is scaring people. And we've seen a pattern in their previous releases of, at the same time, they roll out a new model or new model card, something like that. They also roll out some study showing really the worst possible implication of where the technology could lead.”

    — David Sacks

  • AGI models require sandboxing before public release

    “These are models with massive step function improvements and intelligence, and they're just too smart to be released immediately. You know, and by the way, there was nothing that said that every time you finish a model you gotta immediately release it GA. So they set up this idea of sandboxing, building defensive alliances, in order to move away from that regime.”

    — Brad Gerstner

  • OpenClaw faces threats from centralized AI dominance

    “It shows you can trust the industry and market forces in coordination with the government. They were talking to the government about this, but they're not relying on some top down regulation in order to do this. They laid out a blueprint that seems to me very pragmatic that now that we're at this threshold, we're gonna sandbox these things.”

    — Brad Gerstner
