

Infrastructure

Intent-based testing

Intent-based chaos testing is designed for when AI behaves confidently — and wrongly

Here is a scenario that should concern every enterprise architect shipping autonomous AI systems right now: an observability agent is running in production. Its job is to detect infrastructure anomalies and trigger the appropriate response. Late one night, it flags an anomaly score of 0.87 across a production cluster, above its defined threshold of 0.75. The agent is within its permission boundaries. It has access to the rollback service. So it uses it.

Events


OpenAI turns its sold-out GPT-5.5 party into a monthlong Codex giveaway for 8,000 developers

"We had over 8,000 people express interest in just 24 hours, and while we wish our office was big enough to welcome everyone, we weren't able to make space for every person who applied," the company wrote in the email, which VentureBeat obtained. "As a small token of appreciation, we've 10x'ed your Codex rate limits until June 5th on your personal ChatGPT account."


Newsroom

VB in Conversation

Securing AI at scale starts inside the code

VB talks with Cisco’s Anthony Grieco about why AI-generated code is breaking traditional security models and forcing enforcement into the development loop.

VB in Conversation

Identity is the next breaking point in AI

AI agents don’t log in, follow rules, or behave like users. Cisco’s Matt Caulfield explains how they’re breaking the assumptions behind modern identity and access control.

Technology


Anthropic introduces "dreaming," a system that lets AI agents learn from their own mistakes

The company also moved two previously experimental features — outcomes and multi-agent orchestration — from research preview into public beta, making them broadly available to developers building on the Claude platform. Together, the three features address what Anthropic says are the hardest problems in running AI agents at scale: keeping them accurate, helping them learn, and preventing them from becoming bottlenecks on complex, multi-step work.
