


Cloudflare’s new Dynamic Workers ditch containers to run AI agent code 100x faster


Testing autonomous agents (or: how I learned to stop worrying and embrace chaos)
Look, we've spent the last 18 months building production AI systems, and we'll tell you what keeps us up at night — and it's not whether the model can answer questions. That's table stakes now. What haunts us is the mental image of an agent autonomously approving a six-figure vendor contract at 2 a.m. because someone typo'd a config file.

The three disciplines separating AI agent demos from real-world deployment

Why enterprises are replacing generic AI with tools that know their users

Mistral AI launches Forge to help companies build proprietary AI models, challenging cloud giants
The announcement caps a remarkably aggressive week for Mistral, which also released its Mistral Small 4 model, unveiled Leanstral — an open-source code agent for formal verification — and joined the newly formed Nvidia Nemotron Coalition as a co-developer of the coalition's first open frontier base model. Together, these moves paint the picture of a company that is no longer content to compete on model benchmarks alone and is instead racing to become the infrastructure backbone for organizations that want to own their AI rather than rent it.

Nvidia introduces Vera Rubin, a seven-chip AI platform with OpenAI, Anthropic and Meta on board
The message to the AI industry, and to investors, was unmistakable: Nvidia is not slowing down. The Vera Rubin platform claims up to 10x more inference throughput per watt and one-tenth the cost per token compared with the Blackwell systems that only recently began shipping. CEO Jensen Huang, speaking at the company's annual GTC conference, called it "a generational leap" that would kick off "the greatest infrastructure buildout in history." Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure will all offer the platform, and more than 80 manufacturing partners are building systems around it.

Nvidia's DGX Station is a desktop supercomputer that runs trillion-parameter AI models without the cloud
The announcement, made at the company's annual GTC conference in San Jose, lands at a moment when the AI industry is grappling with a fundamental tension: the most powerful models in the world require enormous data center infrastructure, but the developers and enterprises building on those models increasingly want to keep their data, their agents, and their intellectual property local. The DGX Station is Nvidia's answer — a six-figure machine that collapses the distance between AI's frontier and a single engineer's desk.

Rethinking AEO when software agents navigate the web on behalf of users
For more than two decades, digital businesses have relied on a simple assumption: When someone interacts with a website, that activity reflects a human making a conscious choice. Clicks are treated as signals of interest. Time on page is assumed to indicate engagement. Movement through a funnel is interpreted as intent. Entire growth strategies, marketing budgets, and product decisions have been built on this premise.

NanoClaw and Docker partner to make sandboxes the safest way for enterprises to deploy AI agents
