Featured

6 proven lessons from the AI projects that broke before they scaled

Companies hate to admit it, but the road to production-level AI deployment is littered with proofs of concept (PoCs) that go nowhere and failed projects that never deliver on their goals. In certain domains there is little tolerance for iteration, especially in life sciences, where an AI application may be bringing new treatments to market or diagnosing diseases. Even slightly inaccurate analyses and assumptions made early on can create sizable downstream drift.

Kavin Xavier

Chronosphere takes on Datadog with AI that explains itself, not just outages

The new features combine AI-driven analysis with what Chronosphere calls a Temporal Knowledge Graph, a continuously updated map of an organization's services, infrastructure dependencies, and system changes over time. The technology aims to address a mounting challenge in enterprise software: developers are writing code faster than ever with AI assistance, but troubleshooting remains largely manual, creating bottlenecks when applications fail.

Michael Nuñez
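
Chronosphere has not published the internals of its Temporal Knowledge Graph, but the idea described above (a graph of services and infrastructure whose relationships carry validity intervals, so the system's state at any past moment can be reconstructed) can be sketched in a few lines. The Python below is a hypothetical illustration only; the class and field names are invented and do not reflect Chronosphere's API.

# Minimal sketch of a "temporal knowledge graph": every edge records the
# interval during which it was true, so the dependency picture can be
# reconstructed for any point in time. Hypothetical illustration only;
# this is not Chronosphere's API.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TemporalEdge:
    src: str                            # e.g. "checkout-service"
    dst: str                            # e.g. "payments-db"
    relation: str                       # "depends_on", "deployed_to", "changed_by", ...
    valid_from: datetime
    valid_to: Optional[datetime] = None # None means the fact is still true

class TemporalKnowledgeGraph:
    def __init__(self) -> None:
        self.edges: list[TemporalEdge] = []

    def record(self, edge: TemporalEdge) -> None:
        """Append a new fact; earlier facts are never overwritten."""
        self.edges.append(edge)

    def close(self, src: str, dst: str, relation: str, at: datetime) -> None:
        """Mark a previously recorded fact as no longer true as of `at`."""
        for e in self.edges:
            if (e.src, e.dst, e.relation) == (src, dst, relation) and e.valid_to is None:
                e.valid_to = at

    def snapshot(self, at: datetime) -> list[TemporalEdge]:
        """Every fact that held at time `at`: the system as it looked then."""
        return [
            e for e in self.edges
            if e.valid_from <= at and (e.valid_to is None or at < e.valid_to)
        ]

In this framing, an AI layer that "explains itself" would query snapshot() for the window around an incident and diff it against the present, surfacing which dependencies, deployments, or configuration changes are new.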

Ship fast, optimize later: top AI engineers don't care about cost — they're prioritizing deployment

Across industries, rising compute expenses are often cited as a barrier to AI adoption — but leading companies are finding that cost is no longer the real constraint. The tougher challenges (and the ones top of mind for many tech leaders)? Latency, flexibility and capacity. At Wonder, for instance, AI adds a mere few cents per order; the food delivery and takeout company is much more concerned with cloud capacity as demand skyrockets. Recursion, for its part, has been focused on balancing small and larger-scale training and deployment via on-premises clusters and the cloud, which has given the biotech company the flexibility for rapid experimentation. The companies' real-world experiences highlight a broader industry trend: For enterprises operating AI at scale, economics are no longer the deciding factor — the conversation has shifted from how to pay for AI to how fast it can be deployed and sustained. AI leaders from the two companies recently sat down with VentureBeat CEO and editor-in-chief Matt Marshall as part of VB's traveling AI Impact Series. Here's what they shared.

Taryn Plumb

NYU’s new AI architecture makes high-quality image generation faster and cheaper

Researchers at New York University have developed a new architecture for diffusion models that improves the semantic representation of the images they generate. “Diffusion Transformer with Representation Autoencoders” (RAE) challenges some of the accepted norms of building diffusion models. The NYU researchers' model is more efficient and accurate than standard diffusion models, takes advantage of the latest research in representation learning, and could pave the way for new applications that were previously too difficult or expensive.

Ben Dickson

Data Infrastructure


Google debuts AI chips with 4X performance boost, secures Anthropic megadeal worth billions

The announcement, made Thursday, centers on Ironwood, Google's latest custom AI accelerator chip, which will become generally available in the coming weeks. In a striking validation of the technology, Anthropic, the AI safety company behind the Claude family of models, disclosed plans to access up to one million of these TPU chips — a commitment worth tens of billions of dollars and among the largest known AI infrastructure deals to date.

Michael Nuñez

Snowflake builds new intelligence that goes beyond RAG to query and aggregate thousands of documents at once

Enterprise AI has a data problem. Despite billions in investment and increasingly capable language models, most organizations still can't answer basic analytical questions about their document repositories. The culprit isn't model quality but architecture: Traditional retrieval augmented generation (RAG) systems were designed to retrieve and summarize, not analyze and aggregate across large document sets.

Sean Michael Kerner
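
Snowflake has not detailed its query engine in this excerpt, but the architectural gap the paragraph describes is easy to make concrete: top-k retrieval answers "what does this one contract say?", while analytical questions such as "what is the average value across every contract?" require extracting the same field from all documents and aggregating over it. The Python sketch below contrasts the two patterns; the retriever, summarizer, and extractor are supplied as callables because they stand in for whatever concrete stack is used and are not Snowflake's API.

# Contrast sketch: top-k RAG retrieval vs. corpus-wide aggregation.
# The retriever, summarizer, and field extractor are placeholders passed
# in as callables; they do not represent any vendor's API.
from typing import Callable, Iterable, Optional

def rag_answer(
    question: str,
    retrieve: Callable[[str, int], list[str]],   # returns the top-k relevant chunks
    summarize: Callable[[str, list[str]], str],  # LLM summary grounded in those chunks
    k: int = 5,
) -> str:
    """Classic RAG: fetch a handful of relevant passages, then summarize.
    Suited to 'what does this one agreement say about termination?'"""
    chunks = retrieve(question, k)
    return summarize(question, chunks)

def aggregate_answer(
    documents: Iterable[str],
    extract_value: Callable[[str], Optional[float]],  # pulls one numeric field per doc
) -> float:
    """Analytical pattern: extract the same field from every document, then
    aggregate. Top-k retrieval cannot answer this, because it never reads
    most of the corpus."""
    values = [v for v in (extract_value(d) for d in documents) if v is not None]
    return sum(values) / len(values) if values else float("nan")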

More


Moonshot's Kimi K2 Thinking emerges as leading open source AI, outperforming GPT-5, Claude Sonnet 4.5 on key benchmarks

Even as concern and skepticism grow over U.S. AI startup OpenAI's buildout strategy and high spending commitments, Chinese open source AI providers are escalating their competition — and one has even caught up to OpenAI's flagship paid proprietary model, GPT-5, on key third-party performance benchmarks with a new, free model.

Carl Franzen

Google Cloud updates its AI Agent Builder with new observability dashboard and faster build-and-deploy tools

The new features, announced today, include additional governance tools for enterprises, expanded capabilities for creating agents with just a few lines of code, faster iteration through state-of-the-art context management layers and one-click deployment, managed services for scaling production and evaluation, and support for identifying agents.

Emilia David

AI’s capacity crunch: Latency risk, escalating costs, and the coming surge-pricing breakpoint

The latest big headline in AI isn’t model size or multimodality — it’s the capacity crunch. At VentureBeat’s latest AI Impact stop in NYC, Val Bercovici, chief AI officer at WEKA, joined Matt Marshall, VentureBeat CEO, to discuss what it really takes to scale AI amid rising latency, cloud lock-in, and runaway costs.

VB Staff