The authorization problem that could break enterprise AI
When an AI agent needs to log into your CRM, pull records from your database, and send an email on your behalf, whose identity is it using? And what happens when no one knows the answer? Alex Stamos, chief product officer at Corridor, and Nancy Wang, CTO at 1Password, joined the VB AI Impact Salon Series to dig into the new identity challenges that arrive alongside the benefits of agentic AI.

Nvidia's agentic AI stack is the first major platform to ship with security at launch, but governance gaps remain

OpenClaw can bypass your EDR, DLP and IAM without triggering a single alert

Anthropic and OpenAI just exposed SAST's structural blind spot with free tools

Partner Content
Enterprise identity was built for humans — not AI agents
Presented by 1Password

Microsoft says ungoverned AI agents could become corporate 'double agents.' Its fix costs $99 a month.
Microsoft today announced the general availability of Agent 365 and Microsoft 365 Enterprise 7, two products designed to bring security and governance to the rapidly growing population of AI agents operating inside the world's largest organizations. Both become available on May 1st, alongside Wave 3 of Microsoft 365 Copilot, which expands the company's agentic AI capabilities and adds model choice, with models from both OpenAI and Anthropic.

Pentagon vendor cutoff exposes the AI dependency map most enterprises never built

Endor Labs launches free tool AURI after study finds only 10% of AI-generated code is secure
The announcement arrives against a sobering backdrop. While 90% of development teams now use AI coding assistants, research published in December by Carnegie Mellon University, Columbia University, and Johns Hopkins University found that leading models produce functionally correct code only about 61% of the time — and just 10% of that output is both functional and secure.

When AI lies: The rise of alignment faking in autonomous systems
AI is evolving from a helpful tool into an autonomous agent, creating new risks for cybersecurity systems. Alignment faking is an emerging threat in which an AI model essentially “lies” to developers during the training process.

What if the real risk of AI isn’t deepfakes — but daily whispers?
Most people don’t appreciate the profound threat that AI will soon pose to human agency. A common refrain is that “AI is just a tool,” and like any tool, its benefits and dangers depend on how people use it. This is old-school thinking. AI is transitioning from tools we use to prosthetics we wear. This will create significant new threats we’re just not prepared for.
