Leon Yen

A novel memory architecture could solve AI’s long-horizon blind spot

GAM takes aim at “context rot”: A dual-agent memory architecture that outperforms long-context LLMs

For all their superhuman power, today’s AI models suffer from a surprisingly human flaw: They forget. Give an AI assistant a sprawling conversation, a multi-step reasoning task or a project spanning days, and it will eventually lose the thread. Engineers refer to this phenomenon as “context rot,” and it has quietly become one of the most significant obstacles to building AI agents that can function reliably in the real world.

(L-R) Ashok Srivastava, SVP and chief data officer at Intuit; Hilary Packer, EVP and CTO at American Express; and Matt Marshall, VentureBeat CEO and editor-in-chief, speak during VB Transform in San Francisco on June 25. Photo: Michael O'Donnell Photography

Skip the AI 'bake-off' and build autonomous agents: Lessons from Intuit and Amex

As generative AI matures, enterprises are shifting from experimentation to implementation, moving beyond chatbots and copilots into the realm of intelligent, autonomous agents. In a conversation at VB Transform with VentureBeat's Matt Marshall, Ashok Srivastava, SVP and chief data officer at Intuit, and Hilary Packer, EVP and CTO at American Express, detailed how their companies are embracing agentic AI to transform customer experiences, internal workflows and core business operations.


The new AI infrastructure reality: Bring compute to data, not data to compute

As AI transforms enterprise operations across diverse industries, critical challenges continue to surface around data storage—no matter how advanced the model, its performance hinges on the ability to access vast amounts of data quickly, securely, and reliably. Without the right data storage infrastructure, even the most powerful AI systems can be brought to a crawl by slow, fragmented, or inefficient data pipelines.


Beyond sycophancy: DarkBench exposes six hidden ‘dark patterns’ lurking in today’s top LLMs

When OpenAI rolled out its GPT-4o update to ChatGPT in mid-April 2025, users and the AI community were stunned, not by any groundbreaking feature or capability, but by something deeply unsettling: the updated model's tendency toward excessive sycophancy. It flattered users indiscriminately, showed uncritical agreement, and even offered support for harmful or dangerous ideas, including terrorism-related schemes.
