


85% of enterprises are running AI agents. Only 5% trust them enough to ship.

Vercel breach exposes the OAuth gap most security teams cannot detect, scope or contain

Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it

Adversaries hijacked AI security tools at 90+ organizations. The next wave has write access to the firewall

Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds

Microsoft patched a Copilot Studio prompt injection. The data was exfiltrated anyway

Frontier models are failing one in three production attempts — and getting harder to audit

43% of AI-generated code changes need debugging in production, survey finds
The software industry is racing to write code with artificial intelligence. It is struggling, badly, to make sure that code holds up once it ships.

Five signs data drift is already undermining your security models
Data drift happens when the statistical properties of a machine learning (ML) model's input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today's sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.
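One common way to catch the drift described above is to compare the distribution of a model's input feature at training time against a recent production window. Below is a minimal sketch using the Population Stability Index (PSI); the function name, bin count, and thresholds are illustrative assumptions, not drawn from the article:

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two samples of one feature.

    Rule of thumb (illustrative): PSI < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 major drift worth investigating.
    """
    lo, hi = min(reference), max(reference)
    # Equal-width bin edges computed from the training-time reference
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(1 for e in edges if x > e)  # out-of-range values clamp to the end bins
            counts[idx] += 1
        n = len(sample)
        # Small floor avoids log(0) when a bin is empty in one sample
        return [max(c / n, 1e-4) for c in counts]

    p = proportions(reference)
    q = proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

if __name__ == "__main__":
    random.seed(0)
    train = [random.gauss(0, 1) for _ in range(5000)]      # training-time distribution
    stable = [random.gauss(0, 1) for _ in range(5000)]     # production window, no drift
    drifted = [random.gauss(1.5, 1) for _ in range(5000)]  # production window, shifted mean
    print(f"stable PSI:  {psi(train, stable):.3f}")
    print(f"drifted PSI: {psi(train, drifted):.3f}")
```

In a security pipeline the same check would run on each monitored feature (byte-entropy scores, connection rates, and so on) on a schedule, alerting when the index crosses the chosen threshold so the model can be retrained before detection quality degrades.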

Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot
For the last 18 months, the CISO playbook for generative AI has been relatively simple: Control the browser.
