
When AI turns software development inside-out: 170% throughput at 80% headcount
Many people have tried AI tools and walked away unimpressed. I get it — many demos promise magic, but in practice, the results can feel underwhelming.

Not long ago, the idea of being a “generalist” in the workplace had a mixed reputation. The stereotype was the “jack of all trades” who could dabble in many disciplines but was a “master of none.” And for years, that was more or less true.

Look, we've spent the last 18 months building production AI systems, and we'll tell you what keeps us up at night — and it's not whether the model can answer questions. That's table stakes now. What haunts us is the mental image of an agent autonomously approving a six-figure vendor contract at 2 a.m. because someone typo'd a config file.
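That 2 a.m. scenario is preventable with a hard ceiling that no config file can override. Here is a minimal sketch of that idea; all names (`AgentAction`, `validate_config`, the dollar limits) are hypothetical, not from any real system:

```python
from dataclasses import dataclass

# Illustrative hard ceiling: actions above this always need human sign-off,
# no matter what the runtime config says. The value is made up for the sketch.
HARD_CEILING_USD = 10_000

@dataclass
class AgentAction:
    description: str
    amount_usd: float

def validate_config(config: dict) -> dict:
    """Clamp the auto-approval limit so a typo can't silently raise it."""
    limit = config.get("auto_approve_limit_usd", 0)
    if not isinstance(limit, (int, float)) or limit < 0:
        raise ValueError(f"invalid auto_approve_limit_usd: {limit!r}")
    # A fat-fingered 500000 instead of 5000 gets clamped, not honored.
    return {"auto_approve_limit_usd": min(limit, HARD_CEILING_USD)}

def requires_human_approval(action: AgentAction, config: dict) -> bool:
    return action.amount_usd > config["auto_approve_limit_usd"]

cfg = validate_config({"auto_approve_limit_usd": 500_000})  # the 2 a.m. typo
contract = AgentAction("vendor contract renewal", 120_000)
print(requires_human_approval(contract, cfg))  # True: escalate to a human
```

The design choice is that the ceiling lives in code under review, not in configuration, so the failure mode of a mistyped config is a too-cautious agent rather than an autonomous six-figure approval.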

For more than two decades, digital businesses have relied on a simple assumption: When someone interacts with a website, that activity reflects a human making a conscious choice. Clicks are treated as signals of interest. Time on page is assumed to indicate engagement. Movement through a funnel is interpreted as intent. Entire growth strategies, marketing budgets, and product decisions have been built on this premise.

Recent reports about AI project failure rates have raised uncomfortable questions for organizations investing heavily in AI. Much of the discussion has focused on technical factors like model accuracy and data quality, but after watching dozens of AI initiatives launch, I’ve noticed that the biggest opportunities for improvement are often cultural, not technical.

It’s always the same story: A new technology appears and everyone starts talking about how it’ll change everything. Capital rushes in, companies form overnight, and valuations climb faster than anyone can justify. Then, months later, the warnings arrive, and people suddenly remember the dot-com crash or crypto.

With agentic AI, businesses operate more dynamically. Instead of relying on pre-programmed bots and static rules, agents can now “think” and invent alternate paths when unforeseen conditions arise. That flexibility needs constraints: grounding agents in a business domain ontology like FIBO (the Financial Industry Business Ontology) can help keep them within guardrails and avoid unwanted behavior.
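One way to read "ontology as guardrail" is as a closed vocabulary of permitted actions with declared constraints, checked before anything executes. The sketch below is illustrative only, using a toy dictionary rather than real FIBO tooling; the action names and roles are invented for the example:

```python
# Toy "ontology": each permitted business action declares who may perform
# it and, where relevant, a limit. Anything an agent improvises outside
# this vocabulary is rejected before execution.
ONTOLOGY = {
    "issue_refund":   {"requires_role": "support_agent", "max_amount_usd": 500},
    "update_address": {"requires_role": "support_agent"},
    "close_account":  {"requires_role": "account_manager"},
}

def check_action(action: str, role: str, amount_usd: float = 0.0) -> tuple[bool, str]:
    """Return (allowed, reason) for an agent-proposed action."""
    spec = ONTOLOGY.get(action)
    if spec is None:
        return False, f"'{action}' is not a defined business action"
    if role != spec["requires_role"]:
        return False, f"role '{role}' may not perform '{action}'"
    if amount_usd > spec.get("max_amount_usd", float("inf")):
        return False, f"amount {amount_usd} exceeds the limit for '{action}'"
    return True, "ok"

print(check_action("issue_refund", "support_agent", 200.0))   # allowed
print(check_action("wire_transfer", "support_agent", 50.0))   # rejected: not in the ontology
```

The point of the pattern is that the agent's creativity is bounded: it can choose freely among defined actions, but an "alternate path" it invents outside the ontology simply fails the check.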

“When you get a demo and something works 90% of the time, that’s just the first nine.” — Andrej Karpathy

Most people don’t appreciate the profound threat that AI will soon pose to human agency. A common refrain is that “AI is just a tool,” and like any tool, its benefits and dangers depend on how people use it. This is old-school thinking. AI is transitioning from tools we use to prosthetics we wear. This will create significant new threats we’re just not prepared for.

AI is evolving from a helpful tool into an autonomous agent, creating new risks for cybersecurity systems. Alignment faking is an emerging threat in which a model essentially “lies” to its developers during the training process.

Most discussions about vibe coding position generative AI as a backup singer rather than the frontman: helpful for jump-starting ideas, sketching early code structures and exploring new directions quickly. Caution is usually urged about its suitability for production systems, where determinism, testability and operational reliability are non-negotiable.

Traditional software governance often relies on static compliance checklists, quarterly audits and after-the-fact reviews. But this method can't keep up with AI systems that change in real time. A machine learning (ML) model might retrain or drift between quarterly operational syncs, which means that by the time an issue is discovered, hundreds of bad decisions could already have been made, and they can be almost impossible to untangle.
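The alternative to quarterly reviews is checking for drift on every batch of live traffic. A common statistic for this is the population stability index (PSI), which compares the live feature distribution against the training-time baseline; the sketch below is a self-contained illustration, not a specific product's implementation, and the data is synthetic:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty buckets so the log term stays finite.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [x / 100 for x in range(100)]    # training-time distribution
live = [0.6 + x / 250 for x in range(100)]  # shifted live traffic
score = psi(baseline, live)
print(f"PSI = {score:.2f}")  # > 0.25 is a common rule of thumb for significant drift
```

Run continuously, a check like this turns "hundreds of bad decisions discovered at the quarterly sync" into an alert on the first drifted batch.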