<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Business | VentureBeat</title>
        <link>https://venturebeat.com/category/business/feed/</link>
        <description>Transformative tech coverage that matters</description>
        <lastBuildDate>Sun, 12 Apr 2026 07:44:18 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>Copyright 2026, VentureBeat</copyright>
        <item>
            <title><![CDATA[Apple integrates Anthropic’s Claude and OpenAI’s Codex into Xcode 26.3 in push for ‘agentic coding’]]></title>
            <link>https://venturebeat.com/technology/apple-integrates-anthropics-claude-and-openais-codex-into-xcode-26-3-in-push</link>
            <guid isPermaLink="false">29Q1quXmYBvP8wkUbz8x2T</guid>
            <pubDate>Tue, 03 Feb 2026 20:45:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.apple.com/">Apple</a> on Tuesday announced a major update to its flagship developer tool that gives artificial intelligence agents unprecedented control over the app-building process, a move that signals the iPhone maker&#x27;s aggressive push into an emerging and controversial practice known as &quot;agentic coding.&quot;</p><p><a href="https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/">Xcode 26.3</a>, available immediately as a release candidate, integrates Anthropic&#x27;s <a href="https://platform.claude.com/docs/en/agent-sdk/overview">Claude Agent</a> and OpenAI&#x27;s <a href="https://chatgpt.com/codex">Codex</a> directly into Apple&#x27;s development environment, allowing the AI systems to autonomously write code, build projects, run tests, and visually verify their own work — all with minimal human oversight.</p><p>The update is Apple&#x27;s most significant embrace of AI-assisted software development since introducing intelligence features in <a href="https://developer.apple.com/documentation/xcode-release-notes/xcode-26-release-notes">Xcode 26</a> last year, and arrives as &quot;<a href="https://x.com/karpathy/status/1886192184808149383?lang=en">vibe coding</a>&quot; — the practice of delegating software creation to large language models — has become one of the most debated topics in technology.</p><p>Apple says that while integrating intelligence into the Xcode developer workflow is powerful, the model itself still has a somewhat limited aperture. It answers questions based on what the developer provides, but it doesn&#x27;t have access to the full context of the project, and it&#x27;s not able to take action on its own. 
That changes with this update, the company said during a press conference Tuesday morning.</p><h2><b>How Apple&#x27;s new AI coding features let developers build apps faster than ever</b></h2><p>The key innovation in <a href="https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/">Xcode 26.3</a> is the depth of integration between AI agents and Apple&#x27;s development tools. Unlike previous iterations that offered code suggestions and autocomplete features, the new system grants AI agents access to nearly every aspect of the development process.</p><p>During a live demonstration, an Apple engineer showed how the Claude agent could receive a simple prompt — &quot;add a new feature to show the weather at a landmark&quot; — and then independently analyze the project&#x27;s file structure, consult Apple&#x27;s documentation, write the necessary code, build the project, and take screenshots of the running application to verify its work matched the requested design.</p><p>According to Apple, the agent is able to use tools like build and screenshot previews to verify its work, visually analyze the image, and confirm that everything has been built accordingly. 
Previously, a model would simply provide an answer and stop there.</p><p>The system creates automatic checkpoints as developers interact with the AI, allowing them to roll back changes if results prove unsatisfactory — a safeguard that acknowledges the unpredictable nature of AI-generated code.</p><p>Apple says it worked directly with <a href="https://www.anthropic.com/">Anthropic</a> and <a href="https://openai.com/">OpenAI</a> to optimize the experience, with particular attention paid to reducing token usage — the computational units that determine costs when using cloud-based AI models — and improving the efficiency of tool calling.</p><p>According to the company, developers can download new agents with a single click, and they update automatically.</p><h2><b>Why Apple&#x27;s adoption of the Model Context Protocol could reshape the AI development landscape</b></h2><p>Underlying the integration is the <a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol</a>, or MCP, an open standard that Anthropic developed for connecting AI agents with external tools. Apple&#x27;s adoption of MCP means that any compatible agent — not just Claude or Codex — can now interact with Xcode&#x27;s capabilities.</p><p>Apple says this also works for agents that are running outside of Xcode. Any agent that is compatible with MCP can now work with Xcode to do all the same things — project discovery and change management, building and testing apps, working with previews and code snippets, and accessing the latest documentation.</p><p>The decision to embrace an open protocol, rather than building a proprietary system, represents a notable departure for Apple, which has historically favored closed ecosystems. 
It also positions Xcode as a potential hub for a growing universe of AI development tools.</p><h2><b>Xcode&#x27;s troubled history with AI tools — and why Apple says this time is different</b></h2><p>The announcement comes against a backdrop of mixed experiences with AI-assisted coding in Apple&#x27;s tools. During the press conference, one developer described previous attempts to use AI agents with Xcode as &quot;horrible,&quot; citing constant crashes and an inability to complete basic tasks.</p><p>Apple acknowledged the concerns while arguing that the new integration addresses fundamental limitations of earlier approaches.</p><p>The company says the big shift is that Claude and Codex have so much more visibility into the breadth of the project. If they hallucinate and write code that doesn&#x27;t work, they can now build, see the compile errors, and iterate in real time to fix those issues — in some cases before presenting the result as finished work.</p><p>Apple argues that the power of IDE integration extends beyond error correction. Agents can now automatically add entitlements to projects when needed to access protected APIs — a task the company says would otherwise be very difficult for an AI operating outside the development environment, where it must deal with binary files whose formats it may not understand.</p><h2><b>From Andrej Karpathy&#x27;s tweet to LinkedIn certifications: The unstoppable rise of vibe coding</b></h2><p>Apple&#x27;s announcement arrives at a crucial moment in the evolution of AI-assisted development. 
The term &quot;<a href="https://x.com/karpathy/status/1886192184808149383?lang=en">vibe coding</a>,&quot; coined by AI researcher Andrej Karpathy in early 2025, has transformed from a curiosity into a genuine cultural phenomenon that is reshaping how software gets built.</p><p>LinkedIn announced last week that it will begin offering <a href="https://techcrunch.com/2026/01/28/linkedin-will-let-you-show-off-your-vibe-coding-chops-with-a-certificate/">official certifications in AI coding skills</a>, drawing on usage data from platforms like Lovable and Replit. Job postings requiring AI proficiency doubled in the past year, according to edX research, with Indeed&#x27;s Hiring Lab reporting that 4.2% of U.S. job listings now mention AI-related keywords.</p><p>The enthusiasm is driven by genuine productivity gains. <a href="https://www.platformer.news/claude-code-review-web-design/">Casey Newton</a>, the technology journalist, recently described building a complete personal website using Claude Code in about an hour — a task that previously required expensive Squarespace subscriptions and years of frustrated attempts with various website builders.</p><p>More dramatically, <a href="https://www.reddit.com/r/OpenAI/comments/1q2uuil/google_engineer_im_not_joking_and_this_isnt_funny/">Jaana Dogan</a>, a principal engineer at Google, posted that she gave Claude Code &quot;a description of the problem&quot; and &quot;it generated what we built last year in an hour.&quot; Her post, which accumulated more than 8 million views, began with the disclaimer: &quot;I&#x27;m not joking and this isn&#x27;t funny.&quot;</p><h2><b>Security experts warn that AI-generated code could lead to &#x27;catastrophic explosions&#x27;</b></h2><p>But the rapid adoption of agentic coding has also sparked significant concerns among security researchers and software engineers.</p><p>David Mytton, founder and CEO of developer security provider Arcjet, <a 
href="https://thenewstack.io/vibe-coding-could-cause-catastrophic-explosions-in-2026/">warned last month</a> that the proliferation of vibe-coded applications &quot;into production will lead to catastrophic problems for organizations that don&#x27;t properly review AI-developed software.&quot;</p><p>&quot;In 2026, I expect more and more vibe-coded applications hitting production in a big way,&quot; Mytton wrote. &quot;That&#x27;s going to be great for velocity… but you&#x27;ve still got to pay attention. There&#x27;s going to be some big explosions coming!&quot;</p><p><a href="https://simonw.substack.com/p/llm-predictions-for-2026-shared-with">Simon Willison</a>, co-creator of the Django web framework, drew an even starker comparison. &quot;I think we&#x27;re due a Challenger disaster with respect to coding agent security,&quot; he said, referring to the 1986 space shuttle explosion that killed all seven crew members. &quot;So many people, myself included, are running these coding agents practically as root. We&#x27;re letting them do all of this stuff.&quot;</p><p>A <a href="https://arxiv.org/abs/2601.15494">pre-print paper</a> from researchers this week warned that vibe coding could pose existential risks to the open-source software ecosystem. 
The study found that AI-assisted development pulls user interaction away from community projects, reduces visits to documentation websites and forums, and makes launching new open-source initiatives significantly harder.</p><p>Stack Overflow <a href="https://futurism.com/artificial-intelligence/ai-has-basically-killed-stack-overflow">usage has plummeted</a> as developers increasingly turn to AI chatbots for answers — a shift that could ultimately starve the very knowledge bases that trained the AI models in the first place.</p><p>Previous research painted an even more troubling picture: a 2024 report found that AI-assisted coding with tools like GitHub Copilot &quot;offered no real benefits unless adding 41% more bugs is a measure of success.&quot;</p><h2><b>The hidden mental health cost of letting AI write your code</b></h2><p>Even enthusiastic adopters have begun acknowledging the darker aspects of AI-assisted development.</p><p>Peter Steinberger, creator of the viral AI agent originally known as Clawdbot (now <a href="https://openclaw.ai/">OpenClaw</a>), recently revealed that he had to step back from vibe coding after it consumed his life.</p><p>&quot;I was out with my friends and instead of joining the conversation in the restaurant, I was just like, vibe coding on my phone,&quot; Steinberger said in a <a href="https://creatoreconomy.so/p/how-openclaws-creator-uses-ai-peter-steinberger">recent podcast interview</a>. &quot;I decided, OK, I have to stop this more for my mental health than for anything else.&quot;</p><p>Steinberger warned that the constant building of increasingly powerful AI tools creates the &quot;illusion of making you more productive&quot; without necessarily advancing real goals. 
&quot;If you don&#x27;t have a vision of what you&#x27;re going to build, it&#x27;s still going to be slop,&quot; he added.</p><p>Google CEO Sundar Pichai has expressed similar reservations, saying he won&#x27;t vibe code on &quot;large codebases where you really have to get it right.&quot;</p><p>&quot;The security has to be there,&quot; Pichai said in a <a href="https://blog.google/innovation-and-ai/products/sundar-pichai-ai-release-notes-podcast/">November podcast interview</a>.</p><p>Boris Cherny, the Anthropic engineer who created Claude Code, acknowledged that vibe coding works best for &quot;prototypes or throwaway code, not software that sits at the core of a business.&quot;</p><p>&quot;You want maintainable code sometimes. You want to be very thoughtful about every line sometimes,&quot; Cherny said.</p><h2><b>Apple is gambling that deep IDE integration can make AI coding safe for production</b></h2><p>Apple appears to be betting that the benefits of <a href="https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/">deep IDE integration</a> can mitigate many of these concerns. By giving AI agents access to build systems, test suites, and visual verification tools, the company is essentially arguing that Xcode can serve as a quality control mechanism for AI-generated code.</p><p>Susan Prescott, Apple&#x27;s vice president of Worldwide Developer Relations, framed the update as part of Apple&#x27;s broader mission.</p><p>In a statement, Apple said its goal is to make tools that put industry-leading technologies directly in developers&#x27; hands so they can build the very best apps. The company says agentic coding supercharges productivity and creativity, streamlining the development workflow so developers can focus on innovation.</p><p>But the question remains whether the safeguards will prove sufficient as AI agents grow more autonomous. 
Asked about debugging capabilities, Apple noted that while Xcode has a powerful debugger built in, there is no direct MCP tool for debugging.</p><p>Developers can run the debugger and manually relay information to the agent, but the AI cannot yet independently investigate runtime issues — a limitation that could prove significant as the complexity of AI-generated code increases.</p><p>The update also does not currently support running multiple agents simultaneously on the same project, though Apple noted that developers can open projects in multiple Xcode windows using Git worktrees as a workaround.</p><h2><b>The future of software development hangs in the balance — and Apple just raised the stakes</b></h2><p><a href="https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/">Xcode 26.3</a> is available immediately as a release candidate for members of the Apple Developer Program, with a general release expected soon on the App Store. The release candidate designation — Apple&#x27;s final beta before production — means developers who download today will automatically receive the finished version when it ships.</p><p>The integration supports both API keys and direct account credentials from <a href="https://openai.com/">OpenAI</a> and <a href="https://www.anthropic.com/">Anthropic</a>, offering developers flexibility in managing their AI subscriptions. But those conveniences belie the magnitude of what Apple is attempting: nothing less than a fundamental reimagining of how software comes into existence.</p><p>For the world&#x27;s most valuable company, the calculus is straightforward. Apple&#x27;s ability to attract and retain developers has always underpinned its platform dominance. If agentic coding delivers on its promise of radical productivity gains, early and deep integration could cement Apple&#x27;s position for another generation. 
If it doesn&#x27;t — if the security disasters and &quot;catastrophic explosions&quot; that critics predict come to pass — Cupertino could find itself at the epicenter of a very different kind of transformation.</p><p>The technology industry has spent decades building systems to catch human errors before they reach users. Now it must answer a more unsettling question: What happens when the errors aren&#x27;t human at all?</p><p>As Apple conceded during Tuesday&#x27;s press conference, with what may prove to be unintentional understatement: &quot;Large language models, as agents sometimes do, sometimes hallucinate.&quot;</p><p>Millions of lines of code are about to find out how often.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Technology</category>
            <category>Orchestration</category>
            <category>Business</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/2Q8OGhbv336AGtWlkPE9wO/51480cf6096208fab81a5f6c184a9a20/nuneybits_Vector_art_of_Apple_logo_inside_code_brackets_63cd7c00-dd68-44a7-b323-831a5a2226c0.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[A European AI challenger goes after GitHub Copilot: Mistral launches Vibe 2.0]]></title>
            <link>https://venturebeat.com/technology/a-european-ai-challenger-goes-after-github-copilot-mistral-launches-vibe-2-0</link>
            <guid isPermaLink="false">6PDQjQNY0Dv0KRgtpCwung</guid>
            <pubDate>Tue, 27 Jan 2026 15:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://mistral.ai/">Mistral AI</a>, the French artificial intelligence company that has positioned itself as Europe&#x27;s leading challenger to American AI giants, announced on Tuesday the general availability of <a href="https://mistral.ai/products/vibe">Mistral Vibe 2.0</a>, a significant upgrade to its terminal-based coding agent that represents the startup&#x27;s most aggressive push yet into the competitive AI-assisted software development market.</p><p>The release is a pivotal moment for the Paris-based company, which is transitioning its developer tools from a free testing phase to a commercial product integrated with its paid subscription plans. The move comes just days after Mistral CEO Arthur Mensch told Bloomberg Television at the World Economic Forum in Davos that the company expects to cross <a href="https://www.bloomberg.com/news/videos/2026-01-22/mistral-ceo-china-behind-in-ai-is-a-fairy-tale-video">€1 billion in revenue by the end of 2026</a> — a projection that would still leave it far behind American competitors but would cement its position as Europe&#x27;s preeminent AI firm.</p><p>&quot;The announcement is more of an upgrade and general availability,&quot; Timothée Lacroix, cofounder of Mistral, said in an exclusive interview with VentureBeat. &quot;We produced Devstral 2 in December, and we released at the time a first version of Vibe. Everything was free and in testing. 
Now we have finalized and improved the CLI, and we are moving Mistral Vibe to a paid plan that&#x27;s bundled with our Le Chat plans.&quot;</p><h2><b>Why legacy enterprise code is AI&#x27;s blind spot</b></h2><p><a href="https://mistral.ai/products/vibe">Mistral Vibe 2.0</a> arrives as technology executives across industries grapple with a fundamental tension: the promise of AI-powered coding tools is immense, but the most capable models are controlled by a handful of American companies — OpenAI, Anthropic, and Google — whose closed-source approaches leave enterprises with limited control over their most sensitive intellectual property.</p><p>Mistral is betting that its open-source approach, combined with deep customization capabilities, will appeal to organizations wary of sending proprietary code to third-party providers. The strategy targets a specific pain point that Lacroix says plagues enterprises with legacy systems.</p><p>&quot;The code bases that large enterprise work with are large and have been built upon years and years, and they haven&#x27;t seen the web,&quot; Lacroix explained. &quot;They potentially rely on large libraries or large domain-specific languages that are unknown to typical language models. And so what we&#x27;re able to do with the Vibe CLI and our models is to go and customize them to a customer&#x27;s code base and its specific IP to get an improved experience.&quot;</p><p>This customization capability addresses a limitation that has frustrated many enterprise technology leaders: general-purpose AI coding assistants trained on public code repositories often struggle with proprietary frameworks, internal coding conventions, and domain-specific languages that exist only within corporate walls. 
A bank&#x27;s internal trading system, a manufacturer&#x27;s proprietary control software, or a pharmaceutical company&#x27;s research pipeline may rely on decades of accumulated code written in conventions that no public AI model has ever encountered.</p><h2><b>Custom subagents and clarification prompts give developers more control</b></h2><p>The updated <a href="https://youtu.be/-0kJLh9du4c?si=fEFe-grdq056lE0H">Vibe CLI</a> introduces several features designed to give developers more granular control over how the AI agent operates. Custom subagents allow organizations to build specialized AI agents for targeted tasks — such as deployment scripts, pull request reviews, or test generation — that can be invoked on demand rather than relying on a single general-purpose assistant.</p><p><a href="https://youtu.be/GE0k2rd0jco?si=CweHAKZdaY2PYUw1">Multi-choice clarifications</a> are a departure from the behavior of many AI coding tools that attempt to infer developer intent when instructions are ambiguous. Instead, Vibe 2.0 prompts users with options before taking action, reducing the risk of unwanted code changes. Slash-command skills enable developers to load preconfigured workflows for common tasks like deploying, linting, or generating documentation through simple commands. Unified agent modes allow teams to configure custom operational modes that combine specific tools, permissions, and behaviors, enabling developers to switch contexts without switching between different applications. The tool also now ships with continuous updates through the command line, eliminating the need for manual version management.</p><p><a href="https://mistral.ai/products/vibe">Mistral Vibe 2.0</a> is available through two subscription tiers. <a href="https://mistral.ai/products/le-chat">The Le Chat Pro plan</a> costs $14.99 per month and provides full access to the Vibe CLI and Devstral 2, the underlying model that powers the agent, with students receiving a 50 percent discount. 
<a href="https://mistral.ai/products/le-chat">The Le Chat Team plan</a>, priced at $24.99 per seat per month, adds unified billing, administrative controls, and priority support for organizations. </p><p>Both plans include generous usage allowances for sustained development work, with the option to continue beyond limits through pay-as-you-go pricing at API rates. The underlying Devstral 2 model, which previously was offered free through Mistral&#x27;s API during a testing period, now moves to paid access with input pricing of $0.40 per million tokens and output pricing of $2.00 per million tokens.</p><h2><b>Smaller, denser models challenge the bigger-is-better assumption</b></h2><p>The <a href="https://mistral.ai/news/devstral-2-vibe-cli">Devstral 2 model family</a> that powers Vibe CLI is Mistral&#x27;s bet that smaller, more efficient models can compete with — and in some cases outperform — the massive systems built by better-funded American rivals. Devstral 2, a 123-billion-parameter dense transformer, achieves 72.2 percent on <a href="https://www.swebench.com/">SWE-bench Verified</a>, a widely used benchmark for evaluating AI systems&#x27; ability to solve real-world software engineering problems.</p><p>Perhaps more significant for enterprise deployment, the model is roughly five times smaller than <a href="https://huggingface.co/deepseek-ai/DeepSeek-V3.2">DeepSeek V3.2</a> and eight times smaller than <a href="https://www.kimi.com/en">Kimi K2</a> — Chinese models that have drawn attention for matching American AI systems at a fraction of the cost. The smaller Devstral 2 Small, at 24 billion parameters, can run on consumer hardware including laptops.</p><p>&quot;Those two models are dense, which makes it also—I mean, the small one is something that can run on a laptop, really, which is great if you&#x27;re working on the train,&quot; Lacroix noted. 
&quot;But the fact that the larger one is also dense is interesting for on-prem or more resource-constrained usage, where it&#x27;s easier to get efficient use of a dense model rather than large mixture of experts, and it requires smaller hardware to start.&quot;</p><p>The distinction between dense and mixture-of-experts architectures is technically significant. While <a href="https://huggingface.co/blog/moe">mixture-of-experts models</a> can theoretically offer more capability per compute dollar by activating only portions of their parameters for any given task, they require more complex infrastructure to deploy efficiently. <a href="https://huggingface.co/blog/moe#when-to-use-sparse-moes-vs-dense-models">Dense models</a>, by contrast, activate all parameters for every computation but are more straightforward to run on conventional hardware — a meaningful consideration for enterprises that want to deploy AI systems on their own infrastructure rather than relying on cloud providers.</p><h2><b>Banks and defense contractors want AI that never leaves their walls</b></h2><p>For regulated industries — particularly financial services, healthcare, and defense — the question of where AI models run and who has access to the data they process is not merely technical but existential. Banks cannot send proprietary trading algorithms to external AI providers. Healthcare organizations face strict regulations about patient data. Defense contractors operate under security clearances that prohibit sharing sensitive information with foreign entities.</p><p>Lacroix suggests that the on-premises deployment capability, while important, is secondary to a more fundamental concern about ownership and control. &quot;The fact that it&#x27;s on-prem, I think, is less relevant than the fact that it&#x27;s owned by the company and that it&#x27;s on wherever they feel safe moving that data — like they&#x27;re not shipping the entire code base to a third party,&quot; he said. 
&quot;I think that&#x27;s important.&quot;</p><p>This framing positions Mistral not merely as a vendor of AI tools but as a partner in building proprietary AI capabilities that become strategic assets for client organizations. &quot;When we work with a company to then customize them and potentially fine-tune them or continue pre-training them, then they become assets to that company, and they are their own competitive advantage, really,&quot; Lacroix explained.</p><p>Mistral has actively cultivated relationships with governments to underscore this positioning. The company serves defense ministries in Europe and Southeast Asia, both directly and through defense contractors. At Davos, Mensch described AI as critical not only to economic sovereignty but to &quot;strategic sovereignty,&quot; noting that autonomous systems like drones require AI capabilities and that deterrence in this domain is increasingly important.</p><h2><b>Mistral&#x27;s CEO dismisses the idea that China lags in artificial intelligence</b></h2><p>Mistral&#x27;s positioning as a European alternative to American AI giants takes on added significance amid rising geopolitical tensions. At the World Economic Forum, Mensch was characteristically blunt about the competitive landscape, dismissing claims that Chinese AI development lags the United States as a &quot;fairy tale.&quot;</p><p>&quot;China is not behind the West,&quot; Mensch said in his <a href="https://www.bloomberg.com/news/videos/2026-01-22/mistral-ceo-china-behind-in-ai-is-a-fairy-tale-video">Bloomberg Television interview</a>. The capabilities of China&#x27;s open-source technology, he added, are &quot;probably stressing the CEOs in the U.S.&quot;</p><p>The comments reflect a broader anxiety in the AI industry about the durability of American technological leadership. 
Chinese companies including <a href="https://www.deepseek.com/">DeepSeek</a> and <a href="https://www.alibaba.com/">Alibaba</a> have released open-source models that match or exceed many American systems, often at dramatically lower costs. For Mistral, this competitive pressure validates its strategy of focusing on efficiency and customization rather than attempting to match the massive training runs of better-capitalized American rivals.</p><p>European Commission digital chief <a href="https://www.lemonde.fr/en/economy/article/2026/01/22/french-ai-firm-mistral-predicts-revenue-of-1-billion-in-2026_6749706_19.html">Henna Virkkunen</a>, also speaking at Davos, underscored the strategic importance of technological sovereignty. &quot;It&#x27;s so important that we are not dependent on one country or one company when it comes to some very critical fields of our economy or society,&quot; she said.</p><p>For American enterprise customers, Lacroix suggests that Mistral&#x27;s European identity and government relationships need not be a concern — and may even be an advantage. &quot;One of the benefits when working as we do, like with open weights, and especially when deploying on customers&#x27; premises and giving them control, is that the wider geopolitics don&#x27;t necessarily matter that much,&quot; he said. &quot;I think the benefits of the open-source scene is that it gives you confidence that you know what you&#x27;re using, and you&#x27;re in total control of it.&quot;</p><h2><b>From model maker to enterprise platform signals a strategic pivot</b></h2><p>Mistral&#x27;s transition from a pure model company to what Lacroix describes as &quot;a full enterprise platform around developing AI applications&quot; reflects a broader maturation in the AI industry. 
The realization that model weights alone do not capture the full value of AI systems has pushed companies across the sector toward more integrated offerings.</p><p>&quot;We don&#x27;t think the only value we provide is in the model,&quot; Lacroix said. &quot;We started as a models company. We are now building a full enterprise platform around developing AI applications. We have a part of our company that provides services to integrate deeply. And so the way we make money, and I guess the question behind this is the value that is core to Mistral, is that full-stack solution to getting to the ROI of AI.&quot;</p><p>This full-stack approach includes fine-tuning on internal languages and domain-specific languages, reinforcement learning with customer-specific environments, and end-to-end code modernization services that can migrate entire codebases to modern technology stacks. Mistral says it already delivers these solutions to some of the world&#x27;s largest organizations in finance, defense, and infrastructure.</p><p>The revenue milestone Mensch projected at Davos — crossing €1 billion by year&#x27;s end — would represent remarkable growth for a company founded in 2023. But it would still leave Mistral far behind American competitors whose valuations stretch into the hundreds of billions. OpenAI, now reportedly valued at <a href="https://www.cnbc.com/2024/09/13/tiger-global-plans-to-join-openais-latest-funding-round-which-would-value-it-at-more-than-150-billion.html">more than $150 billion</a>, and Anthropic, valued at approximately <a href="https://www.cnbc.com/2025/01/07/anthropic-in-talks-to-raise-funding-at-60-billion-valuation.html">$60 billion</a>, operate at a scale that Mistral cannot match through organic growth alone. To close the gap, Mistral is looking at acquisitions. &quot;We are in the process of looking at a few opportunities,&quot; Mensch said at Davos, though he declined to specify target business areas or geographic regions. 
The company&#x27;s September fundraise brought in €1.7 billion, with Dutch semiconductor equipment giant ASML joining as a key investor, valuing Mistral at €11.7 billion.</p><h2><b>The coding assistant wars are just getting started</b></h2><p>Looking beyond the immediate product announcement, Lacroix sees the current generation of AI coding tools as a transitional phase toward more autonomous software development. &quot;For a few tasks, it&#x27;s already becoming the default entry point — like if I want to prototype something, or if I want to quickly iterate on an idea. I think it&#x27;s already faster,&quot; he said. &quot;What I see today is there is still some story that needs to happen on how you do the work asynchronously and in a way where it&#x27;s easy to orchestrate several tasks and several improvements on the same code base in a flow that feels natural.&quot;</p><p>The current experience, he suggests, does not yet feel like having &quot;your own team of developers that can really 10x yourself.&quot; But he expects rapid improvement, driven by abundant training data and intense industry interest. Perhaps more ambitiously, Lacroix sees the file-manipulation and tool-calling capabilities built for coding as applicable far beyond software development. &quot;What I&#x27;m really excited about is the use of these tools outside of coding,&quot; he said. &quot;The really strong realization is you now have an agent that is great at working with a file system, that can edit information and that expands its context a lot, and it&#x27;s really great at using all sorts of tools. 
Those tools don&#x27;t need to be necessarily related to coding, really.&quot;</p><p>For chief technology officers and engineering leaders evaluating AI coding tools, Mistral&#x27;s announcement crystallizes the strategic choice now facing enterprises: accept the convenience and raw capability of closed-source American models, or bet on the flexibility and control of open-source alternatives that can be customized and deployed behind corporate firewalls. Human evaluations comparing Devstral 2 against Claude Sonnet 4.5 showed that Anthropic&#x27;s model was &quot;significantly preferred,&quot; according to Mistral&#x27;s own benchmarking — an acknowledgment that closed-source leaders retain advantages that efficiency and customization cannot fully offset.</p><p>But Lacroix is betting that for enterprises with proprietary code, legacy systems, and regulatory constraints, customization will matter more than raw performance on public benchmarks. &quot;The point is that you can now get all of this vibe coding disruption and goodness in an environment where customization is needed, which was difficult before,&quot; he said. &quot;And that&#x27;s, I think, the main point that we&#x27;re making with this announcement.&quot;</p><p>The AI coding wars, in other words, are no longer just about which model writes the best code. They&#x27;re about who gets to own the model that understands yours.
</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Technology</category>
            <category>Business</category>
            <category>Infrastructure</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/3YLcY3tgbv5EFx2N4c7ozJ/6bf2e3964aa31954df08fac31fe0ca11/nuneybits_Vector_art_of_bold_orange_gradient_background_giant_w_de5ae005-57bc-4419-99b6-35be2ec9522f.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[While everyone talks about an AI bubble, Salesforce quietly added 6,000 enterprise customers in 3 months]]></title>
            <link>https://venturebeat.com/technology/while-everyone-talks-about-an-ai-bubble-salesforce-quietly-added-6-000</link>
            <guid isPermaLink="false">1P1fzo4Fk8V5cXDEjKH6OF</guid>
            <pubDate>Mon, 22 Dec 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[<p>While Silicon Valley debates whether artificial intelligence has become an <a href="https://www.reuters.com/business/finance/opinions-split-over-ai-bubble-after-billions-invested-2025-10-16/">overinflated bubble</a>, Salesforce&#x27;s enterprise AI platform quietly added 6,000 new customers in a single quarter — a 48% increase that executives say demonstrates a widening gap between speculative AI hype and deployed enterprise solutions generating measurable returns.</p><p><a href="https://www.salesforce.com/agentforce/">Agentforce</a>, the company&#x27;s autonomous AI agent platform, now serves 18,500 enterprise customers, up from 12,500 the prior quarter. Those customers collectively run more than three billion automated workflows monthly and have pushed Salesforce&#x27;s agentic product revenue past $540 million in annual recurring revenue, according to figures the company shared with VentureBeat. The platform has processed over three trillion tokens — the fundamental units that large language models use to understand and generate text — positioning Salesforce as one of the largest consumers of AI compute in the enterprise software market.</p><p>&quot;This has been a year of momentum,&quot; Madhav Thattai, Salesforce&#x27;s Chief Operating Officer for AI, said in an exclusive interview with VentureBeat. &quot;We crossed over half a billion in ARR for our agentic products, which have been out for a couple of years. And so that&#x27;s pretty remarkable for enterprise software.&quot;</p><p>The numbers arrive amid intensifying scrutiny of AI spending across corporate America. Venture capitalists and analysts have questioned whether the <a href="https://www.wsj.com/tech/it-really-is-possible-to-spend-too-much-on-ai-7bb68df1">billions pouring into AI infrastructure</a> — from data centers to graphics processing units to model development — will ever generate proportionate returns. 
<a href="https://ai.meta.com/">Meta</a>, <a href="https://www.microsoft.com/en-us/">Microsoft</a>, and <a href="https://www.amazon.com/">Amazon</a> have committed tens of billions to AI infrastructure, prompting some investors to ask whether the enthusiasm has outpaced the economics.</p><p>Yet the <a href="https://www.salesforce.com/">Salesforce</a> data suggests that at least one segment of the AI market — enterprise workflow automation — is translating investments into concrete business outcomes at a pace that defies the bubble narrative.</p><h2><b>Why enterprise AI trust has become the defining challenge for CIOs in 2025</b></h2><p>The distinction between AI experimentation and AI deployment at scale comes down to one word that appeared repeatedly across interviews with Salesforce executives, customers, and independent analysts: trust.</p><p>Dion Hinchcliffe, who leads the CIO practice at technology research firm <a href="https://futurumgroup.com/">The Futurum Group</a>, said the urgency around enterprise AI has reached a fever pitch not seen in previous technology cycles. His firm recently completed a <a href="https://futurumgroup.com/insights/agentic-ai-platforms-for-enterprise-futurum-signal/">comprehensive analysis of agentic AI platforms</a> that ranked Salesforce slightly ahead of Microsoft as the market leader.</p><p>&quot;I&#x27;ve been through revolution after revolution in this business,&quot; Hinchcliffe said. &quot;I&#x27;ve never seen anything like this before. In my entire career, I&#x27;ve never seen this level of business focus—boards of directors are directly involved, saying this is existential for the company.&quot;</p><p>The pressure flows downward. CIOs who once managed technology as a cost center now field questions directly from board members demanding to know how their companies will avoid being disrupted by AI-native competitors.</p><p>&quot;They&#x27;re pushing the CIO hard, asking, &#x27;What are we doing? 
How do we make sure we&#x27;re not put out of business by the next AI-first company that reimagines what we do?&#x27;&quot; Hinchcliffe said.</p><p>But that pressure creates a paradox. Companies want to move fast on AI, yet the very autonomy that makes AI agents valuable also makes them dangerous. An agent that can independently execute workflows, process customer data, and make decisions without human intervention can also make mistakes at machine speed — or worse, be manipulated by bad actors.</p><p>This is where enterprise AI platforms differentiate themselves from the consumer AI tools that dominate headlines. According to Hinchcliffe, building a production-grade agentic AI system requires hundreds of specialized engineers working on governance, security, testing, and orchestration — infrastructure that most companies cannot afford to build themselves.</p><p>&quot;The average enterprise-grade agentic team is 200-plus people working on an agentic platform,&quot; Hinchcliffe said. &quot;Salesforce has over 450 people working on agent AI.&quot;</p><p>Early in the AI adoption cycle, many CIOs attempted to build their own agent platforms using open-source tools like <a href="https://www.langchain.com/">LangChain</a>. They quickly discovered the complexity exceeded their resources.</p><p>&quot;They very quickly realized this problem was much bigger than expected,&quot; Hinchcliffe explained. 
&quot;To deploy agents at scale, you need infrastructure to manage them, develop them, test them, put guardrails on them, and govern them — because you&#x27;re going to have tens of thousands, hundreds of thousands, even millions of long-running processes out there doing work.&quot;</p><h2><b>How AI guardrails and security layers separate enterprise platforms from consumer chatbots</b></h2><p>The technical architecture that separates enterprise AI platforms from consumer tools centers on what the industry calls a &quot;trust layer&quot; — a set of software systems that monitor, filter, and verify every action an AI agent attempts to take.</p><p>Hinchcliffe&#x27;s research found that <a href="https://futurumgroup.com/insights/agentic-ai-platforms-for-enterprise-futurum-signal/">only about half of the agentic AI platforms</a> his firm evaluated included runtime trust verification — the practice of checking every transaction for policy compliance, data toxicity, and security violations as it happens, rather than relying solely on design-time constraints that can be circumvented.</p><p>&quot;Salesforce puts every transaction, without exception, through that trust layer,&quot; Hinchcliffe said. &quot;That&#x27;s best practice, in our view. If you don&#x27;t have a dedicated system checking policy compliance, toxicity, grounding, security, and privacy on every agentic activity, you can&#x27;t roll it out at scale.&quot;</p><p><a href="https://www.bloomberg.com/profile/person/24214076">Sameer Hasan</a>, who serves as Chief Technology and Digital Officer at Williams-Sonoma Inc., said the trust layer proved decisive in his company&#x27;s decision to adopt Agentforce across its portfolio of brands, which includes <a href="https://www.potterybarn.com/">Pottery Barn</a>, <a href="https://www.westelm.com/">West Elm</a>, and the flagship <a href="https://www.williams-sonoma.com/">Williams-Sonoma</a> stores that together serve approximately 20% of the U.S. 
home furnishings market.</p><p>&quot;The area that caused us to make sure—let&#x27;s be slow, let&#x27;s not move too fast, and let this get out of control—is really around security, privacy, and brand reputation,&quot; Hasan said. &quot;The minute you start to put this tech in front of customers, there&#x27;s the risk of what could happen if the AI says the wrong thing or does the wrong thing. There&#x27;s plenty of folks out there that are intentionally trying to get the AI to do the wrong thing.&quot;</p><p>Hasan noted that while the underlying large language models powering Agentforce — including technology from <a href="https://openai.com/">OpenAI</a> and <a href="https://www.anthropic.com/">Anthropic</a> — are broadly available, the enterprise governance infrastructure is not.</p><p>&quot;We all have access to that. You don&#x27;t need Agentforce to go build a chatbot,&quot; Hasan said. &quot;What Agentforce helped us do more quickly and with more confidence is build something that&#x27;s more enterprise-ready. So there&#x27;s toxicity detection, the way that we handle PII and PII tokenization, data security and creating specific firewalls and separations between the generative tech and the functional tech, so that the AI doesn&#x27;t have the ability to just go comb through all of our customer and order data.&quot;</p><p>The trust concerns appear well-founded. 
The Information reported that among Salesforce&#x27;s own executives, <a href="https://www.theinformation.com/articles/salesforce-executives-say-trust-generative-ai-declined">trust in generative AI has actually declined</a> — an acknowledgment that even insiders recognize the technology requires careful deployment.</p><h2><b>Corporate travel startup Engine deployed an AI agent in 12 days and saved $2 million</b></h2><p>For <a href="https://engine.com/b">Engine</a>, a corporate travel platform valued at <a href="https://www.reuters.com/technology/travel-tech-startup-hotel-engine-valued-21-bln-after-latest-fundraise-2024-09-17/">$2.1 billion </a>following its Series C funding round, the business case for Agentforce crystallized around a specific customer pain point: cancellations.</p><p><a href="https://www.salesforce.com/customer-stories/engine/">Demetri Salvaggio</a>, Engine&#x27;s Vice President of Customer Experience and Operations, said his team analyzed customer support data and discovered that cancellation requests through chat channels represented a significant volume of contacts — work that required human agents but followed predictable patterns.</p><p>Engine deployed its first AI agent, named Eva, in just 12 business days. The speed surprised even Salvaggio, though he acknowledged that Engine&#x27;s existing integration with Salesforce&#x27;s broader platform provided a foundation that accelerated implementation.</p><p>&quot;We saw success right away,&quot; Salvaggio said. &quot;But we went through growing pains, too. 
Early on, there wasn&#x27;t the observability you&#x27;d want at your fingertips, so we were doing a lot of manual work.&quot;</p><p>Those early limitations have since been addressed through <a href="https://www.salesforce.com/form/conf/agentforce-demos/">Salesforce&#x27;s Agentforce Studio</a>, which now provides real-time analytics showing exactly where AI agents struggle with customer questions — data that allows companies to continuously refine agent behavior.</p><p>The business results, according to Salvaggio, have been substantial. Engine reports approximately $2 million in annual cost savings attributable to Eva, alongside a customer satisfaction score improvement from 3.7 to 4.2 on a five-point scale — an increase Salvaggio described as &quot;really cool to see.&quot;</p><p>&quot;Our current numbers show $2 million in cost savings that she&#x27;s able to address for us,&quot; Salvaggio said. &quot;We&#x27;ve seen CSAT go up with Eva. We&#x27;ve been able to go from like a 3.7 out of five scale to 4.2. We&#x27;ve had some moments at 85%.&quot;</p><p>Perhaps more telling than the cost savings is Engine&#x27;s philosophy around AI deployment. Rather than viewing Agentforce as a headcount-reduction tool, Salvaggio said the company focuses on productivity and customer experience improvements.</p><p>&quot;When you hear some companies talk about AI, it&#x27;s all about, &#x27;How do I get rid of all my employees?&#x27;&quot; Salvaggio said. &quot;Our approach is different. If we can avoid adding headcount, that&#x27;s a win. But we&#x27;re really focused on how to create a better customer experience.&quot;</p><p>Engine has since expanded beyond its initial cancellation use case. 
The company now operates multiple AI agents — including IT, HR, product, and finance assistants deployed through Slack — that Salvaggio collectively refers to as &quot;multi-purpose admin&quot; agents.</p><h2><b>Williams-Sonoma is using AI agents to recreate the in-store shopping experience online</b></h2><p>Williams-Sonoma&#x27;s AI deployment illustrates a more ambitious vision: using AI agents not merely to reduce costs but to fundamentally reimagine how customers interact with brands digitally.</p><p>Hasan described a frustration that anyone who has used e-commerce over the past two decades will recognize. Traditional chatbots feel robotic, impersonal, and limited — good at answering simple questions but incapable of the nuanced guidance a knowledgeable store associate might provide.</p><p>&quot;We&#x27;ve all had experiences with chatbots, and more often than not, they&#x27;re not positive,&quot; Hasan said. &quot;Historically, chatbot capabilities have been pretty basic. But when customers come to us with a service question, it&#x27;s rarely that simple — &#x27;Where&#x27;s my order?&#x27; &#x27;It&#x27;s here.&#x27; &#x27;Great, thanks.&#x27; It&#x27;s far more nuanced and complex.&quot;</p><p>Williams-Sonoma&#x27;s AI agent, called Olive, goes beyond answering questions to actively engaging customers in conversations about entertaining, cooking, and lifestyle — the same consultative approach the company&#x27;s in-store associates have provided for decades.</p><p>&quot;What separates our brands from others in the industry—and certainly from the marketplaces—is that we&#x27;re not just here to sell you a product,&quot; Hasan said. &quot;We&#x27;re here to help you, educate you, elevate your life. With Olive, we can connect the dots.&quot;</p><p>The agent draws on Williams-Sonoma&#x27;s proprietary recipe database, product expertise, and customer data to provide personalized recommendations. 
A customer planning a dinner party might receive not just product suggestions but complete menu ideas, cooking techniques, and entertaining tips.</p><p>Thattai, the Salesforce AI executive, said Williams-Sonoma is in what he describes as the second stage of agentic AI maturity. The first stage involves simple question-and-answer interactions. The second involves agents that actually execute business processes. The third — which he said is the largest untapped opportunity — involves agents working proactively in the background.</p><p>Critically, Hasan said Williams-Sonoma does not attempt to disguise its AI agents as human. Customers know they&#x27;re interacting with AI.</p><p>&quot;We don&#x27;t try to hide it,&quot; Hasan said. &quot;We know customers may come in with preconceptions. I&#x27;m sure plenty of people are rolling their eyes thinking, &#x27;I have to deal with this AI thing&#x27;—because their experience with other companies has been that it&#x27;s a cost-cutting maneuver that creates friction.&quot;</p><p>The company surveys customers after AI interactions and benchmarks satisfaction against human-assisted interactions. According to Hasan, the AI now matches human benchmarks — a constraint the company refuses to compromise.</p><p>&quot;We have a high bar for service—a white-glove customer experience,&quot; Hasan said. &quot;AI has to at least maintain that bar. 
If anything, our goal is to raise it.&quot;</p><p><a href="https://www.williams-sonoma.com/">Williams-Sonoma</a> moved from pilot to full production in 28 days, according to Salesforce — a timeline that Thattai said demonstrates how quickly companies can deploy when they build on existing platform infrastructure rather than starting from scratch.</p><h2><b>The three stages of enterprise AI maturity that determine whether companies see ROI</b></h2><p>Beyond the headline customer statistics, Thattai outlined a three-stage maturity framework that he said describes how most enterprises approach agentic AI:</p><p>Stage one involves building simple agents that answer questions — essentially sophisticated chatbots that can access company data to provide accurate, contextual responses. The primary challenge at this stage is ensuring the agent has comprehensive access to relevant information.</p><p>Stage two involves agents that execute workflows — not just answering &quot;what time does my flight leave?&quot; but actually rebooking a flight when a customer asks. Thattai cited Adecco, the recruiting company, as an example of stage-two deployment. The company uses Agentforce to qualify job candidates and match them with roles — a process that involves roughly 30 discrete steps, conditional decisions, and interactions with multiple systems.</p><p>&quot;A large language model by itself can&#x27;t execute a process that complex, because some steps are deterministic and need to run with certainty,&quot; Thattai explained. &quot;Our hybrid reasoning engine uses LLMs for decision-making and reasoning, while ensuring the deterministic steps execute with precision.&quot;</p><p>Stage three — and the one Thattai described as the largest future opportunity — involves agents working proactively in the background without customer initiation. 
He described a scenario in which a company might have thousands of sales leads sitting in a database, far more than human sales representatives could ever contact individually.</p><p>&quot;Most companies don&#x27;t have the bandwidth to reach out and qualify every one of those customers,&quot; Thattai said. &quot;But if you use an agent to refine profiles and personalize outreach, you&#x27;re creating incremental opportunities that humans simply don&#x27;t have the capacity for.&quot;</p><h2><b>Salesforce edges out Microsoft in analyst rankings of enterprise AI platforms</b></h2><p>The Futurum Group&#x27;s <a href="https://futurumgroup.com/insights/agentic-ai-platforms-for-enterprise-futurum-signal/">recent analysis</a> of agentic AI platforms placed Salesforce at the top of its rankings, slightly ahead of Microsoft. The report evaluated ten major platforms — including offerings from <a href="https://aws.amazon.com/">AWS</a>, <a href="https://www.google.com/">Google</a>, <a href="https://www.ibm.com/us-en">IBM</a>, <a href="https://www.oracle.com/">Oracle</a>, <a href="https://www.sap.com/index.html">SAP</a>, <a href="https://www.servicenow.com/">ServiceNow</a>, and <a href="https://www.uipath.com/">UiPath</a> — across five dimensions: business value, product innovation, strategic vision, go-to-market execution, and ecosystem alignment.</p><p>Salesforce scored above 90 (out of 100) across all five categories, placing it in what the firm calls the &quot;Elite&quot; zone. Microsoft trailed closely behind, with both companies significantly outpacing competitors.</p><p>Thattai acknowledged the competitive pressure but argued that Salesforce&#x27;s existing position in customer relationship management provides structural advantages that pure-play AI companies cannot easily replicate.</p><p>&quot;The richest and most critical data a company has — data about their customers — lives within Salesforce,&quot; Thattai said. 
&quot;Most of our large customers use us for multiple functions: sales, service, and marketing. That complete view of the customer is central to running any business.&quot;</p><p>The platform advantage extends beyond data. Salesforce&#x27;s existing workflow infrastructure means that AI agents can immediately access business processes that have already been defined and refined — a head start that requires years for competitors to match.</p><p>&quot;Salesforce is not just a place where critical data is put, which it is, but it&#x27;s also where work is performed,&quot; Thattai said. &quot;The process by which a business runs happens in this application — how a sales process is managed, how a marketing process is managed, how a customer service process is managed.&quot;</p><h2><b>Why analysts say 2026 will be the real year of AI agents in the enterprise</b></h2><p>Despite the momentum, both Salesforce executives and independent analysts cautioned that <a href="https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/blogs/pulse-check-series-latest-ai-developments/ai-adoption-challenges-ai-trends.html">enterprise AI remains in early innings</a>.</p><p>Hinchcliffe pushed back against the notion that 2025 was &quot;the year of agents,&quot; a phrase that circulated widely at the beginning of the year.</p><p>&quot;This was not the year of agents,&quot; Hinchcliffe said. &quot;This was the year of finding out how ready they were, learning the platforms, and discovering where they weren&#x27;t mature yet. The biggest complaint we heard was that there&#x27;s no easy way to manage them. Once companies got all these agents running, they realized: I have to do lifecycle management. I have agents running on old versions, but their processes aren&#x27;t finished. 
How do I migrate them?&quot;</p><p>He predicted 2026 has &quot;a much more likely chance of being the year of agents,&quot; though he added that the &quot;biggest year of agents&quot; is &quot;probably going to be the year after that.&quot;</p><p>The <a href="https://futurumgroup.com/insights/agentic-ai-platforms-for-enterprise-futurum-signal/">Futurum Group&#x27;s analysis forecasts</a> the AI platform market growing from $127 billion in 2024 to $440 billion by 2029 — a compound annual growth rate that dwarfs most enterprise software categories.</p><p>For companies still on the sidelines, Salvaggio offered pointed advice based on Engine&#x27;s early-adopter experience.</p><p>&quot;Don&#x27;t take the fast-follower strategy with this technology,&quot; he said. &quot;It feels like it&#x27;s changing every week. There&#x27;s a differentiation period coming — if it hasn&#x27;t started already — and companies that waited are going to fall behind those that moved early.&quot;</p><p>He warned that institutional knowledge about AI deployment is becoming a competitive asset in itself — expertise that cannot be quickly acquired through outside consultants.</p><p>&quot;Companies need to start building AI expertise into their employee base,&quot; Salvaggio said. &quot;You can&#x27;t outsource all of this — you need that institutional knowledge within your organization.&quot;</p><p>Thattai struck a similarly forward-looking note, drawing parallels to previous platform shifts.</p><p>&quot;Think about the wave of mobile technology—apps that created entirely new ways of interacting with companies,&quot; he said. &quot;You&#x27;re going to see that happen with agentic technology. The difference is it will span every channel — voice, chat, mobile, web, text — all tied together by a personalized conversational experience.&quot;</p><p>The question for enterprises is no longer whether AI agents will transform customer and employee experiences. 
The data from Salesforce&#x27;s customer base suggests that transformation is already underway, generating measurable returns for early adopters willing to invest in platform infrastructure rather than waiting for a theoretical bubble to burst.</p><p>&quot;I feel incredibly confident that point solutions in each of those areas are not the path to getting to an agentic enterprise,&quot; Thattai said. &quot;The platform approach that we&#x27;ve taken to unlock all of this data in this context is really the way that customers are going to get value.&quot;</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Technology</category>
            <category>Business</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/7zajtt3vd53mg2vXUzTqcW/456bf7f157c66b9617825f5be3dc9ff9/nuneybits_Vector_art_of_red_line_rising._7091569a-956d-4255-a315-4d0377c47a16.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Software commands 40% of cybersecurity budgets as gen AI attacks execute in milliseconds]]></title>
            <link>https://venturebeat.com/business/software-is-40-of-security-budgets-as-cisos-shift-to-ai-defense</link>
            <guid isPermaLink="false">wp-3016037</guid>
            <pubDate>Sat, 30 Aug 2025 01:06:26 GMT</pubDate>
            <description><![CDATA[<p>&quot;With volatility now the norm, security and risk leaders need practical guidance on managing existing spending and new budgetary necessities,&quot; states <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">Forrester&#x27;s 2026 Budget Planning Guide</a>, revealing a fundamental shift in how organizations allocate cybersecurity resources.</p><p>Software now commands <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">40%</a> of cybersecurity spending, exceeding hardware at <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">15.8%</a> and outsourcing at <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">15%</a>, and surpassing personnel costs at <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">29%</a> by <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">11 percentage points</a>. Meanwhile, organizations defend against gen AI attacks that execute in milliseconds, versus a Mean Time to Identify (MTTI) of <a href="https://www.ibm.com/reports/data-breach">181 days</a>, according to <a href="https://www.ibm.com/reports/data-breach">IBM’s</a> latest <a href="https://www.ibm.com/reports/data-breach">Cost of a Data Breach Report</a>.</p><p>Three converging threats are flipping cybersecurity on its head: what once protected organizations is now working against them. Generative AI (gen AI) is enabling attackers to craft 10,000 personalized phishing emails per minute using scraped LinkedIn profiles and corporate communications. <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards">NIST&#x27;s 2030 quantum deadline</a> threatens retroactive decryption of <a href="https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards">$425 billion</a> in currently protected data. 
Deepfake fraud that <a href="https://venturebeat.com/security/deepfakes-will-cost-40-billion-by-2027-as-adversarial-ai-gains-momentum/">surged 3,000% in 2024</a> now bypasses biometric authentication in <a href="https://venturebeat.com/security/deepfakes-will-cost-40-billion-by-2027-as-adversarial-ai-gains-momentum/">97%</a> of attempts, forcing security leaders to reimagine defensive architectures fundamentally.</p><p>Caption: Software now commands 40% of cybersecurity budgets in 2025, representing an 11 percentage point premium over personnel costs at 29%, as organizations layer security solutions to combat gen AI threats executing in milliseconds. Source: <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">Forrester&#x27;s 2026 Budget Planning Guide</a></p><h2>Platform consolidation is eliminating an $18 million integration tax as 75-tool sprawl collapses</h2><p>Enterprise security teams managing 75 or more tools lose <a href="https://www.itpro.com/security/cybersecurity-teams-are-wasting-time-money-and-effort-dealing-with-tool-sprawl-and-multi-vendor-ecosystems">$18 million annually</a> to integration and overhead alone. The average detection time remains <a href="https://services.google.com/fh/files/misc/m-trends-2024.pdf">277 days</a>, while attacks execute within milliseconds.</p><p><a href="https://www.gartner.com/en/documents/4008777">Gartner forecasts</a> that interactive application security testing (IAST) tools will lose 80% of market share by 2026. Security Service Edge (SSE) platforms that promised streamlined convergence now <a href="https://fedscoop.com/report-highlights-advances-in-sse-converged-cloud-security/">add to the complexity</a> they intended to solve. 
Meanwhile, standalone risk-rating products flood security operations centers with alerts that lack actionable context, leading analysts to spend <a href="https://www.helpnetsecurity.com/2023/07/20/soc-analysts-tools-effectiveness/">67% of their time on false positives</a>, according to IDC’s Security Operations Study.</p><p>The operational math doesn’t work. Analysts require<a href="https://www.securitymagazine.com/articles/99674-90-of-soc-analysts-believe-current-threat-detection-tools-are-effective"> 90 seconds</a> to evaluate each alert, but they receive <a href="https://securityboulevard.com/2025/05/why-your-security-team-is-wasting-70-of-their-time-on-phantom-threats-and-how-to-fix-it/">11,000 alerts daily</a>. Each additional security tool deployed reduces visibility by 12% and increases attacker dwell time by <a href="https://services.google.com/fh/files/misc/m-trends-2024.pdf">23 days</a>, as reported in <a href="https://services.google.com/fh/files/misc/m-trends-2024.pdf">Mandiant’s 2024 M-Trends Report</a>. Complexity itself has become the enterprise’s greatest cybersecurity vulnerability.</p><p>Platform vendors have been selling consolidation for years, capitalizing on the chaos and complexity that app and tool sprawl create. As George Kurtz, CEO of <a href="https://www.crowdstrike.com/en-us/platform/next-gen-identity-security/">CrowdStrike</a>, explained in a recent <a href="https://venturebeat.com/security/platform-versus-platformization-george-kurtz-on-why-crowdstrike-is-winning-the-platform-battle/">VentureBeat interview</a> about competing with a platform in today’s mercurially changing market conditions: &quot;The difference between a platform and platformization is execution. 
You need to deliver immediate value while building toward a unified vision that eliminates complexity.&quot;</p><p>CrowdStrike’s Charlotte AI <a href="https://venturebeat.com/security/crowdstrikes-ai-slashes-soc-workloads-over-40-hours-a-week/">automates alert triage</a> and saves SOC teams over <a href="https://venturebeat.com/security/crowdstrikes-ai-slashes-soc-workloads-over-40-hours-a-week/">40 hours every week</a> by classifying millions of detections at <a href="https://venturebeat.com/security/crowdstrikes-ai-slashes-soc-workloads-over-40-hours-a-week/">98%</a> accuracy; that equals the output of five seasoned analysts and is fueled by Falcon Complete’s expert-labeled incident corpus.</p><p>&quot;We couldn&#x27;t have done this without our Falcon Complete team,&quot; Elia Zaitsev, CTO at CrowdStrike, told <a href="https://venturebeat.com/security/crowdstrikes-ai-slashes-soc-workloads-over-40-hours-a-week/">VentureBeat</a> in a recent interview. &quot;They do triage as part of their workflow, manually handling millions of detections. That high-quality, human-annotated dataset is what made over 98% accuracy possible. We recognized that adversaries are increasingly leveraging AI to accelerate attacks. 
With Charlotte AI, we&#x27;re giving defenders an equal footing, amplifying their efficiency and ensuring they can keep pace with attackers in real time.&quot;</p><p><a href="https://www.crowdstrike.com/">CrowdStrike</a>, <a href="https://www.microsoft.com/en-us/security/business/siem-and-xdr/microsoft-defender-xdr">Microsoft&#x27;s Defender XDR</a> with <a href="https://www.microsoft.com/en-us/security/business/endpoint-management/microsoft-intune">MDVM/Intune</a>, <a href="https://www.paloaltonetworks.com/">Palo Alto Networks</a>, <a href="https://www.netskope.com/">Netskope</a>, <a href="https://www.tanium.com/">Tanium</a> and <a href="https://mondoo.com/">Mondoo</a> now bundle <a href="https://www.gartner.com/en/information-technology/glossary/extended-detection-and-response-xdr">XDR</a>, <a href="https://www.gartner.com/en/information-technology/glossary/security-information-and-event-management-siem">SIEM</a> and auto-remediation, transforming <a href="https://www.gartner.com/en/information-technology/glossary/security-operations-center-soc">SOCs</a> from sites of delayed forensics into operations capable of real-time threat neutralization.</p><h2>Security budgets surge 10% as gen AI attacks outpace human defense</h2><p><a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">Forrester’s guide</a> finds <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES180229">55%</a> of global security technology decision-makers expect significant budget increases in the next 12 months. <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">15%</a> anticipate jumps exceeding 10%, while <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">40%</a> expect increases between 5% and 10%. 
This spending surge reflects an asymmetric battlefield where attackers deploy gen AI to simultaneously target thousands of employees with personalized campaigns crafted from real-time scraped data.</p><p>Attackers are making the most of the advantages adversarial AI gives them, with speed, stealth and highly personalized, targeted attacks proving the most lethal. &quot;For years, attackers have been utilizing AI to their advantage,&quot; Mike Riemer, Field CISO at <a href="https://www.ivanti.com/">Ivanti</a>, told <a href="https://venturebeat.com/security/5-ways-gen-ai-will-impact-cybersecurity-in-2025/">VentureBeat</a>. &quot;However, 2025 will mark a turning point as defenders begin to harness the full potential of AI for cybersecurity purposes.&quot;</p><p>Caption: 55% of security leaders expect budget increases above 5% in 2026, with Asia Pacific organizations leading at 22% expecting increases above 10% versus just 9% in North America. Source: <a href="https://www.forrester.com/bold/planning-guide-2026-security-risk/">Forrester&#x27;s 2026 Budget Planning Guide</a></p><p>Regional spending disparities reveal threat landscape variations and how CISOs are responding to them. Asia Pacific organizations lead with <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">22%</a> expecting budget increases above 10% versus just <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">9%</a> in North America. <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">Cloud security, on-premises technology and security awareness training</a> top investment priorities globally.</p><h2>Software dominates budgets as runtime defenses become critical in 2026</h2><p>VentureBeat continues to hear from security leaders about how crucial protecting the inference layer of AI model development is. 
Many consider it the new front line of cybersecurity. Inference layers are vulnerable to prompt injection, data exfiltration, or even direct model manipulation. These are all threats that demand millisecond-scale responses, not delayed forensic investigations.</p><p>Forrester’s latest CISO spending guide underscores a profound shift in cybersecurity spending priorities, with cloud security leading all spending increases at <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">12%</a>, closely followed by investments in on-premises security technology at <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">11%</a>, and security awareness initiatives at <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">10%</a>. These priorities reflect the urgency CISOs feel to strengthen defenses precisely at the critical moment of AI model inference.</p><p>“At Reputation, security is baked into our core architecture and enforced rigorously at runtime,” Carter Rees, Vice President of Artificial Intelligence at <a href="https://reputation.com/">Reputation</a>, recently told VentureBeat. “The inference layer, the exact moment an AI model interacts with people, data, or tools, is where we apply our most stringent controls. Every interaction includes authenticated tenant and role contexts, verified in real-time by an AI security gateway.”</p><p>Reputation’s multi-tiered approach has become a de facto gold standard, blending proactive and reactive defenses. “Real-time controls immediately take over,” Rees explained. “Our prompt firewall blocks unauthorized or off-topic inputs instantly, restricting tool and data access strictly to user permissions. Behavioral detectors proactively flag anomalies the moment they occur.”</p><p>This rigorous runtime security approach extends equally into customer-facing systems. 
“For natural language interactions, our AI only pulls from explicitly customer-approved sources,” Rees noted. “Each generated response must transparently cite its sources. We verify citations match both tenant and context, routing for human review if they do not.”</p><h2>Quantum computing’s accelerating risk</h2><p>Quantum computing is quickly evolving from a theoretical concern into an immediate enterprise threat. Security leaders now face “harvest now, decrypt later” (HNDL) attacks, where adversaries store encrypted data for future quantum-enabled decryption. Widely used encryption methods like 2048-bit RSA risk compromise once quantum processors reach operational scale with tens of thousands of reliable qubits.</p><p>The <a href="https://csrc.nist.gov/Projects/post-quantum-cryptography">National Institute of Standards and Technology (NIST)</a> finalized three critical Post-Quantum Cryptography (PQC) standards in August 2024, mandating encryption algorithm retirement by 2030 and full prohibition by 2035. Global agencies, including <a href="https://www.cyber.gov.au/resources-business-and-government/essential-cyber-security/post-quantum-cryptography">Australia’s Signals Directorate</a>, require PQC implementation by 2030.</p><p>Forrester urges organizations to prioritize PQC adoption for protecting sensitive data at rest, in transit, and in use. 
Security leaders should leverage cryptographic inventory and discovery tools, partnering with cryptoagility providers such as <a href="https://www.entrust.com/solutions/cryptographic-agility">Entrust</a>, <a href="https://www.ibm.com/quantum/quantum-safe">IBM</a>, <a href="https://www.keyfactor.com/platform/crypto-agility/">Keyfactor</a>, <a href="https://www.paloaltonetworks.com/network-security/post-quantum-cryptography">Palo Alto Networks</a>, <a href="https://www.qusecure.com/solutions/quantum-safe-solutions">QuSecure</a>, <a href="https://www.sandboxaq.com/solutions/post-quantum-cryptography">SandboxAQ</a>, and <a href="https://cpl.thalesgroup.com/encryption/crypto-agility-post-quantum-security">Thales</a>. Given quantum’s rapid progression, CISOs need to factor in how they’ll update encryption strategies to avoid obsolescence and vulnerability.</p><h2>Explosion of identities is fueling an AI-driven credential crisis</h2><p>Machine identities now outnumber human users by a staggering <a href="https://www.venafi.com/blog/explosive-growth-machine-identities-weakens-cybersecurity">45:1 ratio</a>, fueling a credential crisis beyond human management. Forrester’s <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">guide </a>underscores scaling <a href="https://www.forrester.com/report/budget-planning-guide-2026-security-and-risk/RES182875">machine identity management</a> as mission-critical to mitigating emerging threats. Gartner forecasts identity security spending to nearly double, reaching <a href="https://www.gartner.com/en/documents/4019351">$47.1 billion by 2028</a>.</p><p>Traditional endpoint approaches aren’t capable of slowing down a growing onslaught of adversarial AI attacks. Ivanti’s Daren Goeson recently told VentureBeat: “As these endpoints multiply, so does their vulnerability. 
Combining AI with <a href="https://www.ivanti.com/products/unified-endpoint-manager">Unified Endpoint Management</a> (UEM) is increasingly essential.” <a href="https://www.ivanti.com/">Ivanti’s</a> AI-driven <a href="https://help.ivanti.com/iv/help/en_US/RS/vNow/How-Vulnerability-Risk-Ratings-VRR-Are-Used.htm">Vulnerability Risk Rating (VRR)</a> illustrates this benefit, enabling organizations to patch vulnerabilities 85% faster by identifying threats traditional scoring methods overlook, demonstrating how AI-driven vulnerability intelligence can secure the enterprise at scale.</p><p>&quot;Endpoint devices such as laptops, desktops, smartphones, and IoT devices are essential to modern business operations. However, as their numbers grow, so do the opportunities for attackers to exploit endpoints and their applications,” Goeson explained. “Factors like an expanded attack surface, insufficient security resources, unpatched vulnerabilities, and outdated software contribute to this rising risk. By adopting a comprehensive approach that combines UEM solutions with AI-powered tools, businesses significantly reduce their cyber risk and the impact of attacks,&quot; Goeson advised VentureBeat during a recent interview.</p><h2>Divesting legacy tools continues to accelerate</h2><p>Forrester’s most urgent call to action in the guide advises security leaders to begin divesting legacy security tools immediately, with a specific focus on interactive application security testing (IAST), standalone cybersecurity risk-rating (CRR) products, and fragmented Security Service Edge (SSE), SD-WAN, and Zero Trust Network Access (ZTNA) solutions.</p><p>Instead, the firm advises, security leaders need to prioritize more integrated platforms that enhance visibility and streamline management. 
Unified Secure Access Service Edge (<a href="https://www.paloaltonetworks.com/cyberpedia/what-is-sase">SASE</a>) solutions from <a href="https://www.paloaltonetworks.com/sase">Palo Alto Networks</a> and <a href="https://www.netskope.com/">Netskope</a> now provide essential consolidation. At the same time, integrated Third-Party Risk Management (<a href="https://www.ibm.com/think/topics/third-party-risk-management">TPRM</a>) and continuous monitoring platforms from <a href="https://www.upguard.com/">UpGuard</a>, <a href="https://panorays.com/">Panorays</a> and <a href="https://www.riskrecon.com/">RiskRecon</a> replace standalone CRR tools, the consulting firm advises.</p><p>Additionally, automated remediation powered by <a href="https://www.microsoft.com/security/blog/2022/11/02/microsoft-defender-vulnerability-management-now-generally-available/">Microsoft’s MDVM</a> with <a href="https://www.microsoft.com/en/security/business/microsoft-intune">Intune</a>, <a href="https://www.tanium.com/platform/">Tanium’s endpoint management</a>, and DevOps-focused solutions like <a href="https://mondoo.com/">Mondoo</a> has emerged as a critical capability for real-time threat neutralization.</p><h2>CISOs must consolidate security at AI’s inference edge or risk losing control</h2><p>Consolidating tools at inference’s edge is the future of cybersecurity, especially as AI threats intensify. “For CISOs, the playbook is crystal clear,” Rees concluded. “Consolidate controls decisively at the inference edge. Introduce robust behavioral anomaly detection. Strengthen Retrieval-Augmented Generation (RAG) systems with provenance checks and defined abstain paths. Above all, invest heavily in runtime defenses and support the specialized teams who operate them. Execute this playbook, and you achieve secure AI deployments at true scale.”</p>]]></description>
            <author>louiswcolumbus@gmail.com (Louis Columbus)</author>
            <category>Business</category>
            <category>Security</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/CfRgh6sqx8GWkn45L5M5I/5ceff5d44c23301d4d025ea724f98bc2/soc-for-budget-center.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[How Intuit killed the chatbot crutch – and built an agentic AI playbook you can copy]]></title>
            <link>https://venturebeat.com/business/how-intuit-killed-the-chatbot-crutch-and-built-an-agentic-ai-playbook-you-can-copy</link>
            <guid isPermaLink="false">wp-3016251</guid>
            <pubDate>Fri, 29 Aug 2025 18:22:47 GMT</pubDate>
            <description><![CDATA[<p>In the frenzied land rush for generative AI that followed ChatGPT’s debut, the mandate from Intuit&#x27;s CEO was clear: ship the company&#x27;s largest, most shocking AI-driven launch by Sept. 2023.</p><p>Responding with blazing speed, the $200 billion company behind QuickBooks, TurboTax, and Mailchimp delivered Intuit Assist, its new generative AI assistant. Its most prominent feature was a classic first attempt: a chat-style interface bolted onto the side of its applications, designed to prove Intuit was on the cutting edge.</p><p>It was supposed to be a game-changer. Instead, it flopped.</p><p>“When you take a beautiful, well-designed user interface and you simply plop human-like chat on the side, that doesn&#x27;t necessarily make it better,” Alex Balazs, <a href="https://www.intuit.com/">Intuit</a>’s Chief Technology Officer, told VentureBeat.</p><p>The feature&#x27;s failure, particularly within QuickBooks, plunged the company into what Dave Talach, SVP of the QuickBooks team, calls the &quot;trough of disillusionment.&quot; The assistant&#x27;s chat feature took up valuable screen space and created confusion. “There was a blinking cursor. We almost put a cognitive burden on people, like, what can it do? Can I trust it?” Talach recalls. The pressure was palpable; he had to present to Intuit&#x27;s Board of Directors to explain what went wrong and what the team had learned.</p><p>What followed was not a minor course correction, but a grueling nine-month pivot to &quot;burn the boats&quot; and reinvent how the 40-year-old giant builds products. This is the inside story of how Intuit emerged with a real-world playbook for enterprise AI that other leaders can follow.</p><h2>How a split-screen observation sparked Intuit’s AI pivot</h2><p>The pivot in the company&#x27;s AI strategy began by observing customers as they did their work. 
Talach recalls his team&#x27;s &quot;big aha moment&quot; when they noticed QuickBooks users manually transcribing invoices with a &quot;split screen&quot; – an email open on one side of their monitor, QuickBooks on the other.</p><p>Why force a human to be a copy-paste machine when an AI could ingest data from the email and populate the invoice automatically? This observation sparked a new mission: stop trying to invent new behaviors with chat and instead find and eliminate &quot;manual toil&quot; within existing customer workflows.</p><p>Recognizing this bottom-up momentum, CTO Alex Balazs and Marianna Tessel, GM of the business group, made their move. “We need to make a declaration together,” Balazs recalls Tessel saying. The only path forward was a full commitment to an AI-native future. &quot;It’s burning the boats, and it&#x27;s only going to be the AI way.”</p><p>To execute this, management redeployed a key technology leader, Clarence Huang, from the core tech team and &quot;parachuted&quot; him into the heart of the QuickBooks business. His mission was to scale a &quot;builder-centric mindset&quot; of rapid, customer-focused prototyping.</p><p>Embracing this new model also meant dismantling the old one. 
To empower smaller, faster teams, the company made a difficult decision: it slashed layers of middle management, <a href="https://fortune.com/2024/07/10/intuit-layoffs-email-hiring-ai-transformation/">letting go of 1,800 employees</a> in 2024 in roles no longer aligned with new priorities, while <a href="https://www.intuit.com/blog/news-social/investing-in-our-future/">pledging to hire back about 1,800 new employees</a> with skills in engineering, product and other customer-facing roles.</p><h2>The three-pillar framework that turned AI failure into enterprise success</h2><p>Intuit&#x27;s transformation required a new operating model built on three core changes: empowering its people, re-engineering its processes, and building a technology engine for speed.</p><h3>Pillar 1: Forge a &#x27;Builder Culture&#x27;</h3><p>To execute the pivot, Intuit first had to get the right people in the right structure and empower them to work in entirely new ways.</p><ul><li><p><b>Aggressive Talent Acquisition:</b> The company hired aggressively to add to its core AI team, bringing it to several hundred today, from just 30 people in 2017 <b>–</b> accelerating over the past two years by poaching top-tier AI leaders from giants like Uber, Twitter and Bytedance.</p></li><li><p><b>New Team Structures:</b> The core of the new model was small, empowered, cross-functional teams. These groups, sometimes including members from up to 10 different units <b>–</b> data science, research, product, design, engineering, and more <b>–</b> focused solely on delivering a specific agentic experience. To enable this, managers ruthlessly prioritized, eliminating any tasks that weren&#x27;t among the top three priorities. &quot;That ruthless prioritization... was really, really important,&quot; Huang said.</p></li><li><p><b>Empowered Ways of Working:</b> Within these teams, traditional job descriptions dissolved in what Huang calls a &quot;smearing&quot; of roles. 
Everyone was expected to talk with customers. Huang kept his own spreadsheet of 30 customer names he called regularly. The transformation was profound, exemplified by data scientist Byron Tang, who stunned colleagues by using new AI &quot;vibe-coding&quot; tools to build a full prototype with a beautiful UI single-handedly. Huang recalls his reaction: “Oh my god... you are the renaissance man. You got it all!”</p></li></ul><h3>Pillar 2: High-Velocity Iteration Over Bureaucracy</h3><p>With the right people in place, Intuit systematically dismantled the processes that slow large companies, replacing them with a system built for speed and customer obsession.</p><ul><li><p><b>Prototype-Driven Development:</b> The old way of using spec docs was replaced by a new mantra: a prototype is worth 10,000 words. Teams began shipping functional prototypes to customers almost immediately. “We&#x27;ll literally show a working, functioning prototype to the customer... and we&#x27;ll vibe code it on the spot,&quot; Huang explains. &quot;The reaction on their faces is just magic.”</p></li><li><p><b>Customer-Centric Design:</b> This rapid feedback loop led to key innovations, including a &quot;Slider of Autonomy,&quot; a concept <a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ">popularized by developer Andrej Karpathy</a> in June. Intuit noticed that customers feared features that seemed &quot;too magical,&quot; so it gave them control over the level of AI intervention, ranging from full automation to manual review <b>–</b> creating a &quot;smooth onramp&quot; to trusting the agents. For example, in Intuit’s QuickBooks accounting agent, users can click a button to allow the agent to post all transactions it recommends. But if users want to maintain more control, they can use icons to see the entire reasoning chain of the agent for user-friendly explanations.</p></li><li><p><b>Ruthless Bureaucracy Busting:</b> Leadership actively cut red tape. 
They implemented a &quot;no meetings on Tuesdays&quot; rule on the platform team, banned afternoon meetings for individual contributors in the business unit, and instituted a formal &quot;friction busting&quot; campaign, imposing a seven-day deadline for leaders to unblock any inter-team disagreements. A rule limiting AI rollouts to a small number of customers for experimentation was revised to allow for tests involving up to 1,000 customers at once, up from the original limit of just 10.</p></li></ul><h3>Pillar 3: Build an Engine for Speed</h3><p>Underpinning the entire effort is <b>GenOS</b>, Intuit&#x27;s internal AI platform. It flowed from CDO Ashok Srivastava’s desire to <a href="https://venturebeat.com/ai/inside-the-race-to-build-an-operating-system-for-generative-ai/">democratize AI access across the company</a>.</p><p>Instead of a slow, top-down build, the platform evolved at the same speed that the business grew, through a strategy CTO Balazs calls &quot;Fast Follow Harvesting.&quot; As customer-facing teams built agents, they would identify gaps in the platform. A central team then ran in tandem with the customer teams, closing the gaps with new features.</p><p>A key feature of GenOS was the <b>Agent Starter Kit</b>, which <a href="https://venturebeat.com/ai/inside-intuits-genos-update-why-prompt-optimization-and-intelligent-data-cognition-are-critical-to-enterprise-agentic-ai-success">enabled 900 internal developers</a> to build hundreds of agents within a five-week period. Other features included a runtime orchestration and a governance framework.</p><p>Another core component was an LLM router that provides resilience and allows LLM calls to flow to different models depending on which one is best for the given task. Huang recalls getting a late-night call from Srivastava. &quot;He&#x27;s like, &#x27;OpenAI is down. Are you guys okay?&#x27;&quot; Because the team was on GenOS, &quot;it just auto-switched to the fallback LLM in the gateway... 
it was okay.&quot;</p><p>This platform allows Intuit to leverage its core differentiator: decades of domain-specific data. By fine-tuning models on a finite set of financial tools and APIs, Intuit’s agents achieve accuracy that general-purpose models can’t. &quot;In all of our internal benchmarks, our stuff just works better for in-domain data,” Huang said.</p><h2>The payoff: 5 days faster payments and 12 hours saved monthly</h2><p>The result of this pivot is a suite of AI agents deeply woven into QuickBooks and increasingly across Intuit&#x27;s other products. The QuickBooks Payments Agent does things like proactively suggest adding late fees if a customer’s payment history shows they&#x27;ve been late in the past. The impact is tangible: Small businesses using the agent <a href="https://venturebeat.com/ai/get-paid-faster-how-intuits-new-ai-agents-help-businesses-get-paid-up-to-5-days-faster-and-save-up-to-12-hours-a-month-with-autonomous-workflows/">get paid, on average, five days faster</a>, are <a href="https://investors.intuit.com/_assets/_2f598d21c58b80e4dd428f690d6aac67/intuit/db/946/10275/webcast_transcript/Q3+FY25+Earnings+Script.pdf">10 percent more likely to get paid on overdue invoices, and save up to 12 hours a month</a>.</p><p>The Customer Agent transforms QuickBooks into a lightweight CRM, scanning connected Gmail accounts for leads, while the Accounting Agent automates transaction categorization and flags anomalies. Today, these &quot;virtual employees,&quot; as Talach calls them, surface their work through tiles in the QuickBooks &quot;business feed,&quot; turning the dashboard into an active, collaborative space. These translate into more holistic offerings for customers, and could help Intuit take market share from competitors who offer similar services, such as HubSpot. 
</p><p>In last week&#x27;s <a href="https://investors.intuit.com/_assets/_db3610d778294654d0ad9b6a3c41340e/intuit/db/946/10311/webcast_transcript/Q4+FY25+Earnings+Script+%281%29.pdf">quarterly earnings call</a>, CEO Sasan Goodarzi credited the company&#x27;s strong results <b>–</b> 16 percent growth for the full year <b>–</b> to its investments in AI. He said the agent launch was already bearing fruit: &quot;We&#x27;re seeing strong traction since last month, with customer engagement in the millions and repeat usage rates significantly above our expectations.&quot;</p><p>Intuit is now applying this playbook to bigger challenges, recently announcing <a href="https://venturebeat.com/ai/intuit-brings-agentic-ai-to-the-mid-market-saving-organizations-17-to-20-hours-a-month/">agents for mid-market companies</a> with up to $100 million in revenue – a significant expansion from Intuit&#x27;s traditional base of customers with $5 million or less in revenue. The logic is simple: Bigger customers have more complex workflows, and thus a greater need for AI agents.</p><p>For enterprise leaders navigating their own AI transformations, Intuit&#x27;s story offers a clear roadmap. The initial stumbles aren&#x27;t just common – they may be necessary. The path forward is more than integrating AI magic. It&#x27;s about dismantling old ways of working and building a culture, process and platform that lets established companies move with startup speed while following AI-age best practices.</p><p>The biggest lesson? Start with the work your customers actually do, not the technology you want to deploy.
</p>]]></description>
            <author>mmarshall@venturebeat.com (Matt Marshall)</author>
            <category>Business</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/3PpId4EqlqGTkTRx9TfUvc/b8dac789bf1d19d4a41a2fd1a55e6785/u5875285552_imagine_abstract_depiction_of_a_major_corporation_e00cc4c7-e01d-4491-9a61-f0dfb16f740e_0.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Nvidia’s $46.7B Q2 proves the platform, but its next fight is ASIC economics on inference]]></title>
            <link>https://venturebeat.com/business/nvidias-strong-q2-results-cant-mask-the-asic-challenge-in-their-future</link>
            <guid isPermaLink="false">wp-3016173</guid>
            <pubDate>Thu, 28 Aug 2025 21:09:54 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.nvidia.com/">Nvidia</a> reported $46.7 billion in revenue for fiscal Q2 2026 in its earnings announcement and call yesterday, with data center revenue hitting $41.1 billion, up 56% year over year. The company also released guidance for Q3, predicting a $54 billion quarter.</p><p>Behind these confirmed earnings call numbers lies a more complex story of how custom application-specific integrated circuits (ASICs) are gaining ground in key Nvidia segments and will challenge its growth in the quarters to come.</p><p><a href="https://www.bankofamerica.com/">Bank of America&#x27;s</a> Vivek Arya asked Nvidia’s president and CEO, Jensen Huang, if he saw any scenario where ASICs could take market share from Nvidia GPUs. As ASICs gain performance and cost advantages over Nvidia, <a href="https://www.broadcom.com/">Broadcom</a> projects 55% to <a href="https://www.ainvest.com/news/broadcom-ai-revenue-expected-surge-60-2025-driven-google-tpu-v6-chip-2506/">60% AI revenue growth</a> next year.</p><p>Huang pushed back hard on the earnings call. He emphasized that building AI infrastructure is &quot;really hard&quot; and most ASIC projects fail to reach production. That’s a fair point, but Nvidia has a competitor in Broadcom, which is seeing its AI revenue steadily ramp up, approaching a <a href="https://www.ainvest.com/news/broadcom-ai-revenue-expected-surge-60-2025-driven-google-tpu-v6-chip-2506/">$20 billion annual run rate</a>. Further underscoring the growing competitive fragmentation of the market is how <a href="https://cloud.google.com/">Google</a>, <a href="https://ai.meta.com/">Meta</a> and <a href="https://azure.microsoft.com/">Microsoft</a> all deploy custom silicon at scale. The market has spoken.</p><h2>ASICs are redefining the competitive landscape in real time</h2><p>Nvidia is more than capable of competing with new ASIC providers. 
Where they’re running into headwinds is how effectively ASIC competitors are positioning the combination of their use cases, performance claims and cost positions. They’re also looking to differentiate themselves in terms of the level of ecosystem lock-in they require, with Broadcom leading in this competitive dimension.</p><p>The following table compares Nvidia Blackwell with its primary competitors. Real-world results vary significantly depending on specific workloads and deployment configurations:</p><table><tbody><tr><td><p>Primary Use Cases</p></td><td><p>Training, inference, generative AI</p></td><td><p>Hyperscale training &amp; inference</p></td><td><p>AWS-focused training &amp; inference</p></td><td><p>Training, inference, hybrid-cloud deployments</p></td><td><p>AI cluster networking</p></td></tr><tr><td><p>Performance Claims</p></td><td><p>Up to 50x improvement over Hopper*</p></td><td><p>67% improvement TPU v6 vs v5*</p></td><td><p>Comparable GPU performance at lower power*</p></td><td><p>2-4x price-performance vs prior gen*</p></td><td><p>InfiniBand parity on Ethernet*</p></td></tr><tr><td><p>Cost Position</p></td><td><p>Premium pricing, comprehensive ecosystem</p></td><td><p>Significant savings vs GPUs per Google*</p></td><td><p>Aggressive pricing per AWS marketing*</p></td><td><p>Budget alternative positioning*</p></td><td><p>Lower networking TCO per vendor*</p></td></tr><tr><td><p>Ecosystem Lock-In</p></td><td><p>Moderate (CUDA, proprietary)</p></td><td><p>High (Google Cloud, TensorFlow/JAX)</p></td><td><p>High (AWS, proprietary Neuron SDK)</p></td><td><p>Moderate (supports open stack)</p></td><td><p>Low (Ethernet-based standards)</p></td></tr><tr><td><p>Availability</p></td><td><p>Universal (cloud, OEM)</p></td><td><p>Google Cloud-exclusive</p></td><td><p>AWS-exclusive</p></td><td><p>Multiple cloud and on-premise</p></td><td><p>Broadcom direct, OEM integrators</p></td></tr><tr><td><p>Strategic Appeal</p></td><td><p>Proven scale, broad 
support</p></td><td><p>Cloud workload optimization</p></td><td><p>AWS integration advantages</p></td><td><p>Multi-cloud flexibility</p></td><td><p>Simplified networking</p></td></tr><tr><td><p>Market Position</p></td><td><p>Leadership with margin pressure</p></td><td><p>Growing in specific workloads</p></td><td><p>Expanding within AWS</p></td><td><p>Emerging alternative</p></td><td><p>Infrastructure enabler</p></td></tr></tbody></table><p><i>*Performance-per-watt improvements and cost savings depend on specific workload characteristics, model types, deployment configurations and vendor testing assumptions. Actual results vary significantly by use case.</i></p><h2>Hyperscalers continue building their own paths</h2><p>Every major cloud provider has adopted custom silicon to gain the performance, cost, ecosystem scale and extensive DevOps advantages of defining an ASIC from the ground up. Google operates TPU v6 in production through its partnership with Broadcom. Meta built MTIA chips specifically for ranking and recommendations. Microsoft develops Project Maia for sustainable AI workloads.</p><p>Amazon Web Services encourages customers to use Trainium for training and Inferentia for inference.</p><p>Add to that the fact that ByteDance runs TikTok recommendations on custom silicon despite geopolitical tensions. That&#x27;s billions of inference requests running on ASICs daily, not GPUs.</p><p>CFO Colette Kress acknowledged the competitive reality during the call. She referenced China revenue, saying it had dropped to a low single-digit percentage of data center revenue. Current Q3 guidance excludes H20 shipments to China completely. 
While Huang’s statements about China’s extensive opportunities tried to steer the earnings call in a positive direction, it was clear that equity analysts weren’t buying all of it.</p><p>The general tone among analysts was that export controls create ongoing uncertainty for Nvidia in a market that arguably represents its second most significant growth opportunity. Huang said that 50% of all AI researchers are in China and that he is fully committed to serving that market.</p><h2>Nvidia&#x27;s platform advantage is one of its greatest strengths</h2><p>Huang made a valid case for Nvidia&#x27;s integrated approach during the earnings call. Building modern AI requires six different chip types working together, he argued, and that complexity creates barriers competitors struggle to match. Nvidia doesn&#x27;t just ship GPUs anymore, he emphasized; the company delivers a complete AI infrastructure that scales globally, a point he returned to six times as a core message of the call.</p><p>The platform’s ubiquity makes it the default configuration in nearly every cloud hyperscaler&#x27;s DevOps cycle. Nvidia runs across AWS, Azure and Google Cloud. PyTorch and TensorFlow optimize for CUDA by default. When Meta drops a new Llama model or Google updates Gemini, they target Nvidia hardware first because that&#x27;s where millions of developers already work. The ecosystem creates its own gravity.</p><p>The networking business validates the AI infrastructure strategy. Revenue hit $7.3 billion in Q2, up 98% year over year. <a href="https://www.nvidia.com/en-us/data-center/nvlink/">NVLink</a> connects GPUs at speeds traditional networking can&#x27;t touch. 
Huang revealed the real economics during the call: Nvidia captures about 35% of a typical gigawatt AI factory&#x27;s budget.</p><p>“Out of a gigawatt AI factory, which can go anywhere from 50 to, you know, plus or minus 10%, let’s say, to $60 billion, we represent about 35% plus or minus of that. … And of course, what you get for that is not a GPU. … we’ve really transitioned to become an AI infrastructure company,” Huang said.</p><p>That’s not just selling chips; that’s owning the architecture and capturing a significant portion of the entire AI build-out, powered by leading-edge networking and compute platforms like NVLink rack-scale systems and Spectrum-X Ethernet.</p><h2>Market dynamics are shifting quickly as Nvidia continues reporting strong results</h2><p>Nvidia&#x27;s revenue growth decelerated from triple digits to 56% year over year. While that’s still impressive, it’s clear the trajectory of the company’s growth is changing. Competition is starting to have an effect on its growth, with this quarter seeing the most noticeable impact.</p><p>In particular, China’s strategic role in the global AI race drew pointed attention from analysts. As Joe Moore of <a href="https://www.morganstanley.com/">Morgan Stanley</a> probed late in the call, Huang estimated the 2025 China AI infrastructure opportunity at $50 billion. He communicated both optimism about the scale (“the second largest computing market in the world,” with “about 50% of the world’s AI researchers”) and realism about regulatory friction.</p><p>A third pivotal force shaping Nvidia’s trajectory is the expanding complexity and cost of AI infrastructure itself. 
As hyperscalers and long-standing Nvidia clients invest billions in next-generation build-outs, the demands on networking, compute and energy efficiency have intensified.</p><p>Huang’s comments highlighted how the “orders of magnitude speed up” from new platforms like Blackwell and innovations in NVLink, InfiniBand, and Spectrum-XGS networking redefine the economic returns for customers’ data center capital. Meanwhile, supply chain pressures and the need for constant technological reinvention mean Nvidia must maintain a relentless pace and adaptability to remain entrenched as the preferred architecture provider.</p><h2>Nvidia’s path forward is clear</h2><p>Nvidia&#x27;s $54 billion Q3 guidance sends the signal that the core of its DNA is as strong as ever. Continually improving Blackwell while developing the Rubin architecture is further evidence of its ability to innovate.</p><p>The question is whether this new type of competitive challenge is one the company can take on and win with the same development intensity it has shown in the past. VentureBeat expects Broadcom to continue aggressively pursuing new hyperscaler partnerships and strengthen its roadmap for specific optimizations aimed at inference workloads. Every ASIC competitor will raise its competitive intensity, looking for design wins that create higher switching costs as well.</p><p>Huang closed the earnings call, acknowledging the stakes: &quot;A new industrial revolution has started. The AI race is on.&quot; That race includes serious competitors Nvidia dismissed just two years ago. Broadcom, Google, Amazon and others invest billions in custom silicon. They&#x27;re not experimenting anymore. They&#x27;re shipping at scale.</p><p>Nvidia faces its strongest competition since CUDA&#x27;s dominance began. The company&#x27;s $46.7 billion quarter proves its strength. However, custom silicon&#x27;s momentum suggests that the game has changed. 
The next chapter will test whether Nvidia&#x27;s platform advantages outweigh ASIC economics. VentureBeat expects technology buyers to follow the path of fund managers, betting both on Nvidia to sustain its lucrative customer base and on ASIC competitors to secure design wins as intensifying competition drives greater market fragmentation.</p>]]></description>
            <author>louiswcolumbus@gmail.com (Louis Columbus)</author>
            <category>Business</category>
            <category>Infrastructure</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/5k7SJZn7SMWjP9lzTH3iSa/67d7010c1811b94e0efbe26fd087379b/asics-advance-against-nvidia.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[At 21, he bootstrapped Kuse.ai to $10M ARR in 60 days, no VC, zero marketing spend]]></title>
            <link>https://venturebeat.com/business/at-21-he-bootstrapped-kuse-ai-to-10m-arr-in-60-days-no-vc-zero-marketing-spend</link>
            <guid isPermaLink="false">wp-3016085</guid>
            <pubDate>Tue, 26 Aug 2025 20:39:57 GMT</pubDate>
            <description><![CDATA[<p><i>No VC. No ads. Just product.</i></p><p>While Silicon Valley startups torch millions chasing vanity metrics, Ken Choi and the team at Kuse.ai took a different approach. At just 21, Choi left college to help build an AI company that he says now boasts $10M in ARR, which he pulled off in just 60 days.</p><p>Their marketing budget was effectively zero.</p><p>According to Choi, the team views this as just the beginning of their work.</p><h2>Breaking free from the VC playbook</h2><p>Kuse’s path runs counter to nearly every Silicon Valley norm. There’s no VC money fueling their servers. In fact, Choi insists they’ll never raise venture capital.</p><p>“VCs want growth at all costs. We wanted a real business,” Choi says. “We didn’t hire a sales team. We didn’t buy ads. We built something people needed—and they found us.”</p><p>That “something” is Kuse’s AI canvas—a workspace where messy, unstructured inputs like PDFs, videos, and spreadsheets are transformed into structured, shareable deliverables with the help of AI agents.</p><h2>The hypergrowth paradox and the $10M ARR feature</h2><p>Choi initially led Kuse’s growth efforts, overseeing a sharp increase in users within six months. But there was a problem: zero profitability.</p><p>“It was brutal,” Choi recalls. “We experienced high traffic but little to no revenue. It taught me hypergrowth isn’t a business model—it’s a distraction if you’re not converting.”</p><p>The team reached a turning point when they identified an unmet need: professionals in consulting, education, and law needed AI tools that could create high-precision, template-driven documents—something no existing AI product handled well.</p><p>Kuse launched DocX, its format-consistent document generation feature that uses AI to generate new content in the exact same layout, with no manual fixes. 
It gained widespread attention within weeks.</p><p>Choi recalls that growth came quickly once users began sharing the product organically. There was no reliance on ads or a PR agency, just word of mouth driven by its ability to address a clear need.</p><h2>Marketing? “A great product IS marketing.”</h2><p>Unlike most startups, Kuse has no marketing budget. Choi’s take:</p><p>“People say great products don’t need marketing. That’s half true. A great product is marketing—if it’s 10x better, users will do the distribution for you.”</p><p>Choi doesn’t dismiss marketing outright; instead, he redefines it: “We do marketing through precision—knowing exactly who we’re for, and speaking their language with every feature, every UI choice.”</p><p>Kuse’s ambition goes far beyond niche AI tools. Choi envisions creating tools designed specifically for the AI-native generation.</p><p>“Traditional tools were designed for static documents. Our focus is on dynamic, AI-powered workflows,” he says.</p><p>His message to other founders is equally bold: “Burn the investor decks. Ditch the VC handcuffs. You don’t need their money or their rules. Build something so good users can’t stop talking about it—and they’ll take you further than any investor deck ever could.”</p><h3>About Kuse.ai</h3><p>Kuse.ai is an AI startup developing next-generation workplace solutions tailored to the AI-native generation. Built by a team of well-equipped founders, it is trusted by users in 60+ countries to help professionals turn messy, unstructured inputs into structured, shareable deliverables—on a dynamic visual canvas that keeps context clear, connected, and under your control. Learn more at <a href="http://kuse.ai/">kuse.ai</a>.</p><hr/><p><i>VentureBeat newsroom and editorial staff were not involved in the creation of this content. </i></p>]]></description>
            <category>Business</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/uWVLq6FXHi9ytCjeZGWPL/bfbf2d933af7aa3aa4a294715e694d81/Ascend-123-Kuse.ai-hero.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>