
The three disciplines separating AI agent demos from real-world deployment
Enterprises are getting AI agents to 80–90% autonomy in production without multi-year data overhauls. Here's the methodology.
Taryn Plumb
Why enterprises are replacing generic AI with tools that know their users
The bar for enterprise AI just got a lot higher — and most tools aren't clearing it.
Taryn Plumb
LangChain's CEO argues that better models alone won't get your AI agent to production
The real barrier between agent demo and production deployment isn't model capability — it's the harness around it, LangChain's Harrison Chase says.
Taryn Plumb
Intuit is betting its 40 years of small business data can outlast the SaaSpocalypse
Intuit lost 42% of its market cap to the SaaSpocalypse. Here's what it says AI agents can't replace.
Taryn Plumb
Enterprise MCP adoption is outpacing security controls
AI agents now have more access to enterprise systems than any other software — and MCP is making them harder to secure, not easier. Zendesk and Resolve AI on why existing frameworks aren't built for this.
Taryn Plumb
8 billion tokens a day forced AT&T to rethink AI orchestration — and cut costs by 90%
AT&T's chief data officer shares how rearchitecting around small language models and multi-agent stacks cut AI costs by 90% at 8 billion tokens a day.
Taryn Plumb
When accurate AI is still dangerously incomplete
LexisNexis' chief AI officer explains why standard RAG fails in high-stakes legal AI — and how graph RAG, planner agents, and reflection agents are closing the gap on accuracy, citation quality, and completeness.
Taryn Plumb
What AI builders can learn from fraud models that run in 300 milliseconds
Mastercard's Decision Intelligence Pro uses recurrent neural networks to analyze 160 billion yearly transactions in under 50 milliseconds, delivering precise fraud risk scores at 70,000 transactions per second during peak periods.
Taryn Plumb
The ‘brownie recipe problem’: why LLMs must have fine-grained context to deliver real-time results
Today’s LLMs excel at reasoning, but can still struggle with context. This is particularly true in real-time ordering systems like Instacart’s.

Shared memory is the missing layer in AI orchestration
The key to successful AI agents within an enterprise? Shared memory and context.

Why LinkedIn says prompting was a non-starter — and small models were the breakthrough
Erran Berger, VP of product engineering at LinkedIn, and his team developed a highly detailed product policy document to fine-tune what began as a 7-billion-parameter model, which was then optimized down to a few hundred million parameters.
Taryn Plumb
AI agents can talk — orchestration is what makes them work together
Rather than asking how individual AI agents can work for them, the key question in the enterprise is now: are agents playing well together?