<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Big Data | VentureBeat</title>
        <link>https://venturebeat.com/category/big-data/</link>
        <description>Transformative tech coverage that matters</description>
        <lastBuildDate>Fri, 03 Apr 2026 13:45:43 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>Copyright 2026, VentureBeat</copyright>
        <item>
            <title><![CDATA[How AI tax startup Blue J torched its entire business model for ChatGPT—and became a $300 million company]]></title>
            <link>https://venturebeat.com/technology/how-ai-tax-startup-blue-j-torched-its-entire-business-model-for-chatgpt-and</link>
            <guid isPermaLink="false">6Ybdo2B165Gygc8335pG9X</guid>
            <pubDate>Tue, 18 Nov 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[<p>In the winter of 2022, as the tech world was becoming mesmerized by the sudden, explosive arrival of OpenAI’s ChatGPT, <a href="https://jackmanlaw.utoronto.ca/people/benjamin-alarie"><u>Benjamin Alarie</u></a> faced a pivotal choice. His legal tech startup, <a href="https://www.bluej.com/"><u>Blue J</u></a>, had a respectable business built on the AI of a bygone era, serving hundreds of law firms with predictive models. But it had hit a ceiling.</p><p>Alarie, a <a href="https://jackmanlaw.utoronto.ca/people/benjamin-alarie"><u>tenured tax law professor</u></a> at the <a href="https://www.utoronto.ca/"><u>University of Toronto</u></a>, saw the nascent, error-prone, yet powerful capabilities of large language models not as a curiosity, but as the future. He made a high-stakes decision: to pivot his entire company, which had been painstakingly built over nearly a decade, and rebuild it from the ground up on this unproven technology.</p><p>That bet has paid off handsomely. Blue J has since quietly secured a <a href="https://finance.yahoo.com/news/blue-j-announces-122m-series-100000765.html"><u>$122 million Series D</u></a> funding round co-led by <a href="https://www.oakhcft.com/"><u>Oak HC/FT</u></a> and <a href="https://sapphireventures.com/"><u>Sapphire Ventures</u></a>, placing the company&#x27;s valuation at <a href="https://www.theglobeandmail.com/business/article-torontos-blue-j-legal-raises-167-million-as-demand-for-its-chatgpt/"><u>over $300 million</u></a>. The move transformed Blue J from a niche player into one of Canada&#x27;s fastest-growing legal tech firms, multiplying its revenue roughly twelve-fold and attracting 10 to 15 new customers every day.</p><p>The company now serves more than 3,500 organizations, including global accounting giant <a href="https://kpmg.com/uk/en.html"><u>KPMG UK</u></a> and several Fortune 500 companies. 
It is tackling a critical bottleneck in the professional services industry: a severe and worsening talent shortage. <a href="https://www.kent.edu/business/accountant-shortage-united-states-everything-you-need-know"><u>The U.S. has 340,000 fewer accountants than it did five years ago</u></a>, and with 75% of current CPAs expected to retire in the next decade, firms are desperate for tools that can amplify the productivity of their remaining experts.</p><p>“What once took tax professionals 15 hours of manual research to do can now be completed in about 15 seconds with Blue J,” Alarie, the company&#x27;s CEO, said in an exclusive interview with VentureBeat. &quot;That value proposition—we can take hours of work and turn it into seconds of work—that is driving a lot of this.&quot;</p><h3><b>When the dean&#x27;s biography was wrong: the moment that changed everything</b></h3><p>Alarie vividly remembers January 2023, when the dean of the law school stopped by his office for New Year&#x27;s greetings. He asked her about ChatGPT and prompted the AI to describe her. ChatGPT confidently generated a biography. Some details were accurate. Others were completely fabricated.</p><p>&quot;She was like, &#x27;Okay, this is really kind of scary. This is wrong, and this has implications,&#x27;&quot; Alarie said. Yet that moment of obvious failure didn&#x27;t deter him. Instead, it crystallized his conviction.</p><p>The company&#x27;s first iteration, launched in 2015, used supervised machine learning to build predictive models that could forecast judicial outcomes on specific tax issues. While technically sophisticated, it had a fundamental flaw: it couldn&#x27;t answer every tax research question.</p><p>&quot;The challenge was it couldn&#x27;t answer every tax research question, which was really the holy grail,&quot; Alarie said. Customers loved the tool when it applied to their problem, but would quickly abandon it when it didn&#x27;t. 
Revenue plateaued around $2 million annually.</p><p>Despite ChatGPT&#x27;s notorious hallucinations, Alarie convinced his board to make the pivot. &quot;I had this conviction that if we continued down that path, we weren&#x27;t going to be able to address our number one limitation,&quot; he said. &quot;Large language models seemed like a very promising direction.&quot;</p><p>He gave his team six months to deliver a working product.</p><h3><b>From 90-second responses to 3 million queries: How Blue J tamed AI hallucinations</b></h3><p>By August 2023, <a href="https://www.bluej.com/"><u>Blue J</u></a> was ready to launch. What they released was, in Alarie&#x27;s candid assessment, &quot;super janky.&quot; The system took 90 seconds to respond. About half the answers had issues. The <a href="https://www.ibm.com/think/topics/net-promoter-score"><u>Net Promoter Score</u></a> registered at just 20.</p><p>What transformed that flawed product into today&#x27;s platform — with response times measured in seconds, a dissatisfaction rate of just one in 700 queries, and an NPS score in the mid-80s — was relentless focus on three strategic pillars.</p><p>First is proprietary content at massive scale. 
<a href="https://www.bluej.com/"><u>Blue J</u></a> secured exclusive licensing with <a href="https://www.bluej.com/blog/tax-notes-news-and-commentaryin-ask-blue-j?sem_account_id=4855258191&amp;sem_campaign_id=23238502772&amp;sem_ad_group_id=&amp;sem_device_type=c&amp;sem_ad_id=&amp;sem_network=x&amp;utm_source=google&amp;utm_medium=cpc&amp;utm_term=&amp;utm_campaign=&amp;hsa_acc=4855258191&amp;hsa_cam=23238502772&amp;hsa_grp=&amp;hsa_ad=&amp;hsa_src=x&amp;hsa_tgt=&amp;hsa_kw=&amp;hsa_mt=&amp;hsa_net=adwords&amp;hsa_ver=3&amp;gad_source=1&amp;gad_campaignid=23233367676&amp;gbraid=0AAAAAC_mL8gv8eN0VUk5K0KpOGKmC3M6H&amp;gclid=Cj0KCQiArOvIBhDLARIsAPwJXOaLxuKFmZR9otFp5M1er_t_p8gixKHhUXWCf4Lkky64zSQVl4l40QgaArObEALw_wcB"><u>Tax Analysts (Tax Notes)</u></a> and <a href="https://www.businesswire.com/news/home/20250903168687/en/Blue-J-and-IBFD-Unveil-AI-Platform-for-Instant-Cross-Border-Tax-Research"><u>IBFD</u></a>, the Amsterdam-based global tax authority covering 220+ jurisdictions. &quot;We are the only platform on earth that takes in the best U.S. tax information from Tax Notes and the best global tax information from IBFD,&quot; Alarie said.</p><p>Second is deep human expertise. Blue J employs tax experts led by <a href="https://www.bluej.com/about-us"><u>Susan Massey</u></a>, who spent 13 years at the <a href="https://www.irs.gov/about-irs/office-of-chief-counsel-at-a-glance"><u>IRS Office of Chief Counsel</u></a> as Branch Chief for Corporate Tax. Her team constantly tests the AI and refines its performance.</p><p>Third is an unprecedented feedback flywheel. With over 3 million tax research queries processed in 2025, Blue J is amassing unparalleled data. Each query generates feedback that flows back into the system.</p><p>Weekly active user rates hover between 75% and 85%, compared to 15% to 25% for traditional platforms. 
&quot;A charitable ratio is like we&#x27;re five times more intensively used,&quot; Alarie noted.</p><h3><b>Inside Blue J&#x27;s early access partnership with OpenAI</b></h3><p>Blue J maintains an <a href="https://openai.com/index/blue-j/"><u>unusually close relationship with OpenAI</u></a> that has proven crucial to its success. &quot;We have a very good relationship with OpenAI, and we get early access to their models,&quot; Alarie said. &quot;It&#x27;s quite collaborative. We give them a lot of really high quality feedback about how well different versions of forthcoming models are performing.&quot;</p><p>This feedback proves valuable because Blue J has developed what Alarie calls &quot;ecologically valid&quot; test questions — drawn from actual tax professional queries, with correct answers determined by Blue J&#x27;s expert team. This helps OpenAI improve performance on complex reasoning tasks.</p><p>The company tests models from all major providers — <a href="https://openai.com/"><u>OpenAI</u></a>, <a href="https://www.anthropic.com/"><u>Anthropic</u></a>, <a href="https://gemini.google.com/app"><u>Google&#x27;s Gemini</u></a>, and open-source alternatives — continuously evaluating which performs best. &quot;We&#x27;re not necessarily 100% committed to any particular provider,&quot; he explained. &quot;We&#x27;re testing all the time.&quot;</p><p>This approach helps <a href="https://www.bluej.com/"><u>Blue J</u></a> navigate a challenging business model: charging approximately $1,500 per seat annually for unlimited queries while absorbing variable compute costs. &quot;We&#x27;ve pre-committed to delivering them a really good user experience, unlimited tax research answers at a fixed price,&quot; Alarie said. &quot;We&#x27;re absorbing a lot of that risk.&quot;</p><p>Competition among foundation model providers creates downward pressure on API pricing, while Blue J&#x27;s conservative usage modeling has proven accurate. 
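The flat-rate model described above can be sketched with a toy calculation. This is purely illustrative: aside from the $1,500 seat price quoted in the article, every number below is an assumption of mine, not a Blue J figure.

```python
# Hypothetical unit economics for a flat-rate AI research seat.
# Only SEAT_PRICE comes from the article; the per-query cost and
# usage rate are illustrative assumptions.
SEAT_PRICE = 1500.0    # annual price per seat (from the article)
COST_PER_QUERY = 0.25  # assumed blended compute cost per query
QUERIES_PER_WEEK = 20  # assumed usage by an active professional

annual_queries = QUERIES_PER_WEEK * 52
annual_compute_cost = annual_queries * COST_PER_QUERY
gross_margin = (SEAT_PRICE - annual_compute_cost) / SEAT_PRICE

# Usage level at which the seat stops being profitable.
break_even_queries = SEAT_PRICE / COST_PER_QUERY

print(f"margin {gross_margin:.0%}, break-even at {break_even_queries:.0f} queries/yr")
```

Under these assumptions the seat stays comfortably profitable, but the break-even point shows why the vendor, not the customer, carries the risk of heavy usage or rising API prices.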
Gross revenue retention exceeds 99%, while net revenue retention reaches 130% — considered best-in-class for SaaS businesses.</p><h3><b>Taking on Thomson Reuters and LexisNexis with 75% weekly engagement</b></h3><p><a href="https://www.bluej.com/"><u>Blue J</u></a> faces competition from established publishers like <a href="https://www.thomsonreuters.com/en"><u>Thomson Reuters</u></a>, <a href="https://www.lexisnexis.com/en-us/gateway.page"><u>LexisNexis</u></a>, and <a href="https://pro.bloombergtax.com/discover/bloomberg-tax-suite-demo-request/?trackingcode=BTSS24112987&amp;utm_medium=paidsearch&amp;utm_source=google&amp;keyword=bloomberg%20tax&amp;matchtype=e&amp;&amp;&amp;&amp;&amp;gclsrc=aw.ds&amp;gad_source=1&amp;gad_campaignid=9457544822&amp;gbraid=0AAAAAD_kFA38giYJV5mEqqUUM_e_Bd11j&amp;gclid=Cj0KCQiArOvIBhDLARIsAPwJXOYA_YF4kYIxnHQPMofjBCuaKzvTAwp3cwxeBIaIyG6nGY9ni0tVn28aAqE0EALw_wcB"><u>Bloomberg</u></a>, all of which announced AI capabilities throughout 2023 and 2024. Yet Blue J&#x27;s engagement metrics suggest it has captured significant momentum, growing from just 200 customers in 2021 to over 3,500 organizations today.</p><p>The daily updates prove crucial. While the tax code itself changes only when Congress acts, the ecosystem evolves constantly through IRS regulations, new rulings, and court cases. All 50 states modify their tax codes regularly.</p><p>&quot;Things are changing literally every day,&quot; Alarie said. &quot;Every day we&#x27;re updating the materials, and that&#x27;s just the U.S. We cover Canada, we cover the UK. The aspirations are truly global for this thing.&quot;</p><p>Alarie&#x27;s ambitions extend beyond building a successful startup. 
As author of the award-winning book &quot;<a href="https://www.bluej.com/blog/the-legal-singularity-a-vision-of-ai-driven-legal-systems?sem_account_id=4855258191&amp;sem_campaign_id=23238502772&amp;sem_ad_group_id=&amp;sem_device_type=c&amp;sem_ad_id=&amp;sem_network=x&amp;utm_source=google&amp;utm_medium=cpc&amp;utm_term=&amp;utm_campaign=&amp;hsa_acc=4855258191&amp;hsa_cam=23238502772&amp;hsa_grp=&amp;hsa_ad=&amp;hsa_src=x&amp;hsa_tgt=&amp;hsa_kw=&amp;hsa_mt=&amp;hsa_net=adwords&amp;hsa_ver=3&amp;gad_source=1&amp;gad_campaignid=23233367676&amp;gbraid=0AAAAAC_mL8iHj3mHQ1PxHjT5IGoPk6hS3&amp;gclid=Cj0KCQiArOvIBhDLARIsAPwJXOY12FNLsxFtKsY3CynN6nBeW43r0qystVycPX4N7qFSfk51y-MYkY4aAhHuEALw_wcB"><u>The Legal Singularity</u></a>&quot; and faculty affiliate at the <a href="https://vectorinstitute.ai/"><u>Vector Institute for Artificial Intelligence</u></a>, he has spent years contemplating AI&#x27;s long-term impact on law.</p><p>In academic papers published in Tax Notes throughout <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4476510"><u>2023</u></a> and <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4730883"><u>2024</u></a>, he chronicled generative AI&#x27;s rise, predicting that &quot;clients will become substantially more sophisticated&quot; and that AI would push human experts toward higher-value strategic roles rather than routine research.</p><h3><b>Blue J&#x27;s $122 million plan: From tax research to &#x27;global tax cognition&#x27;</b></h3><p>The <a href="https://www.bluej.com/blog/blue-j-announces-122m-series-d-financing-led-by-oak-hc-ft-and-sapphire-ventures"><u>Series D funding</u></a>, which brought total capital raised to over $133 million, will fuel aggressive geographic and product expansion. 
Blue J already operates in the U.S., Canada, and the U.K., with plans to eventually cover 220+ jurisdictions through its IBFD partnership.</p><p>Future capabilities could include automated memo generation, tax form completion, document drafting, and conversational history maintaining context across sessions—transforming Blue J from a research tool into what Alarie describes as &quot;the operating layer for global tax cognition.&quot;</p><p>For all its success, Blue J operates in a domain where errors carry serious consequences. The hallucination problem hasn&#x27;t been eliminated — it&#x27;s been minimized through careful engineering, content curation, and human oversight. Blue J has trained its models to acknowledge when they cannot answer a question rather than fabricate information.</p><p>The business also faces economic risks if compute costs spiral or usage patterns exceed projections. And subtler questions loom about professional judgment: as AI systems become more capable, will users defer to outputs without sufficient critical evaluation?</p><h3><b>From 15 hours to 15 seconds: What Blue J&#x27;s AI pivot teaches every industry</b></h3><p>Blue J&#x27;s transformation offers lessons beyond tax software. The company&#x27;s willingness to abandon eight years of proprietary technology and rebuild on an initially unreliable foundation required both courage and calculated risk-taking.</p><p>The decision paid off not because generative AI was inherently superior to supervised machine learning in all dimensions, but because it addressed the right problem: comprehensiveness rather than precision in narrow domains. Tax professionals didn&#x27;t need 95% accuracy on 5% of questions. They needed good-enough accuracy on 100% of questions.</p><p>The improvement from an NPS of 20 to 84 in just over two years reflects relentless iteration informed by massive data collection. The content partnerships created differentiation that pure technology couldn&#x27;t replicate. 
The team of tax experts provided domain knowledge necessary to ensure reliability.</p><p>Most fundamentally, Blue J recognized that the real competition wasn&#x27;t other AI startups or even established publishers. It was the old way of doing things — the 15 hours of manual research, the institutional knowledge locked in retiring professionals&#x27; heads.</p><p>&quot;People are like, &#x27;What does Blue J do? They provide better tax answers. Okay, I think we need that,&#x27;&quot; Alarie reflected.</p><p>As AI transforms profession after profession, that clarity of purpose may matter more than technological sophistication. The future belongs not to those who build the most advanced AI, but to those who most effectively harness it to solve problems humans actually have.</p><p>For a tax law professor who started with frustration about inefficient research methods, building a $300 million company marks an audacious milestone. For the thousands of professionals now answering complex questions in 15 seconds instead of 15 hours, it represents the future of their profession, arriving faster than most expected.</p><p>The bet on ChatGPT when it was still hallucinating biographies has become a validation that sometimes the riskiest move is not to move at all.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Big Data</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/3L8roUj3sEWN26xcxHHEHd/3b4214627fb0a5e6f1197c72d8436ebc/Benjamin_Alarie__1_.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Google debuts AI chips with 4X performance boost, secures Anthropic megadeal worth billions]]></title>
            <link>https://venturebeat.com/infrastructure/google-debuts-ai-chips-with-4x-performance-boost-secures-anthropic-megadeal</link>
            <guid isPermaLink="false">6UtGh06unalVUB7SjnoKsm</guid>
            <pubDate>Thu, 06 Nov 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://cloud.google.com/?hl=en"><u>Google Cloud</u></a> is introducing what it calls its most powerful artificial intelligence infrastructure to date, unveiling a seventh-generation <a href="https://cloud.google.com/tpu?hl=en"><u>Tensor Processing Unit</u></a> and expanded <a href="https://cloud.google.com/discover/what-are-arm-based-processors?hl=en"><u>Arm-based computing options</u></a> designed to meet surging demand for AI model deployment — what the company characterizes as a fundamental industry shift from training models to serving them to billions of users.</p><p>The announcement, made Thursday, centers on <a href="https://cloud.google.com/resources/ironwood-tpu-interest?utm_source=cgc-blog&amp;utm_medium=blog&amp;utm_campaign=FY25-Q2-global-ENT33820-website-cs-ironwood-tpu-interest&amp;utm_content=ironwood_announcement_blog&amp;utm_term=ironwood&amp;hl=en"><u>Ironwood</u></a>, Google&#x27;s latest custom AI accelerator chip, which will become generally available in the coming weeks. In a striking validation of the technology, <a href="https://www.anthropic.com/"><u>Anthropic</u></a>, the AI safety company behind the Claude family of models, disclosed plans to access up to <a href="https://www.googlecloudpresscorner.com/2025-10-23-Anthropic-to-Expand-Use-of-Google-Cloud-TPUs-and-Services"><u>one million of these TPU chips</u></a> — a commitment worth tens of billions of dollars and among the largest known AI infrastructure deals to date.</p><p>The move underscores an intensifying competition among cloud providers to control the infrastructure layer powering artificial intelligence, even as questions mount about whether the industry can sustain its current pace of capital expenditure. 
Google&#x27;s approach — building custom silicon rather than relying solely on <a href="https://www.reuters.com/business/nvidia-poised-record-5-trillion-market-valuation-2025-10-29/"><u>Nvidia&#x27;s dominant GPU chips</u></a> — amounts to a long-term bet that vertical integration from chip design through software will deliver superior economics and performance.</p><h3><b>Why companies are racing to serve AI models, not just train them</b></h3><p>Google executives framed the announcements around what they call &quot;the age of inference&quot; — a transition point where companies shift resources from training frontier AI models to deploying them in production applications serving millions or billions of requests daily.</p><p>&quot;Today&#x27;s frontier models, including Google&#x27;s Gemini, Veo, and Imagen and Anthropic&#x27;s Claude train and serve on Tensor Processing Units,&quot; said Amin Vahdat, vice president and general manager of AI and Infrastructure at Google Cloud. &quot;For many organizations, the focus is shifting from training these models to powering useful, responsive interactions with them.&quot;</p><p>This transition has profound implications for infrastructure requirements. Where training workloads can often tolerate batch processing and longer completion times, inference — the process of actually running a trained model to generate responses — demands consistently low latency, high throughput, and unwavering reliability. 
A chatbot that takes 30 seconds to respond, or a coding assistant that frequently times out, becomes unusable regardless of the underlying model&#x27;s capabilities.</p><p>Agentic workflows — where AI systems take autonomous actions rather than simply responding to prompts — create particularly complex infrastructure challenges, requiring tight coordination between specialized AI accelerators and general-purpose computing.</p><h3><b>Inside Ironwood&#x27;s architecture: 9,216 chips working as one supercomputer</b></h3><p><a href="https://cloud.google.com/resources/ironwood-tpu-interest?utm_source=cgc-blog&amp;utm_medium=blog&amp;utm_campaign=FY25-Q2-global-ENT33820-website-cs-ironwood-tpu-interest&amp;utm_content=ironwood_announcement_blog&amp;utm_term=ironwood&amp;hl=en"><u>Ironwood</u></a> is more than an incremental improvement over Google&#x27;s sixth-generation TPUs. According to technical specifications shared by the company, it delivers more than four times better performance for both training and inference workloads compared to its predecessor — gains that Google attributes to a system-level co-design approach rather than simply increasing transistor counts.</p><p>The architecture&#x27;s most striking feature is its scale. A single Ironwood &quot;pod&quot; — a tightly integrated unit of TPU chips functioning as one supercomputer — can connect up to 9,216 individual chips through Google&#x27;s proprietary <a href="https://blog.google/products/google-cloud/ironwood-tpu-age-of-inference/"><u>Inter-Chip Interconnect network</u></a> operating at 9.6 terabits per second. 
To put that bandwidth in perspective, it&#x27;s roughly equivalent to downloading the entire Library of Congress in under two seconds.</p><p>This massive interconnect fabric allows the 9,216 chips to share access to 1.77 petabytes of <a href="https://blog.google/products/google-cloud/ironwood-tpu-age-of-inference/"><u>High Bandwidth Memory</u></a> — memory fast enough to keep pace with the chips&#x27; processing speeds. That&#x27;s approximately 40,000 high-definition Blu-ray movies&#x27; worth of working memory, instantly accessible by thousands of processors simultaneously. &quot;For context, that means Ironwood Pods can deliver 118x more FP8 ExaFLOPS versus the next closest competitor,&quot; Google stated in technical documentation.</p><p>The system employs <a href="https://www.opencompute.org/projects/optical-circuit-switching"><u>Optical Circuit Switching</u></a> technology that acts as a &quot;dynamic, reconfigurable fabric.&quot; When individual components fail or require maintenance — inevitable at this scale — the OCS technology automatically reroutes data traffic around the interruption within milliseconds, allowing workloads to continue running without user-visible disruption.</p><p>This reliability focus reflects lessons learned from deploying five previous TPU generations. 
Google reported that its fleet-wide uptime for liquid-cooled systems has maintained approximately 99.999% availability since 2020 — equivalent to less than six minutes of downtime per year.</p><h3><b>Anthropic&#x27;s billion-dollar bet validates Google&#x27;s custom silicon strategy</b></h3><p>Perhaps the most significant external validation of Ironwood&#x27;s capabilities comes from <a href="https://www.googlecloudpresscorner.com/2025-10-23-Anthropic-to-Expand-Use-of-Google-Cloud-TPUs-and-Services"><u>Anthropic&#x27;s commitment to access up to one million TPU chips</u></a> — a staggering figure in an industry where even clusters of 10,000 to 50,000 accelerators are considered massive.</p><p>&quot;Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,&quot; said Krishna Rao, Anthropic&#x27;s chief financial officer, in the official partnership agreement. &quot;Our customers — from Fortune 500 companies to AI-native startups — depend on Claude for their most important work, and this expanded capacity ensures we can meet our exponentially growing demand.&quot;</p><p>According to a separate statement, Anthropic will have access to &quot;well over a gigawatt of capacity coming online in 2026&quot; — enough electricity to power a small city. 
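The availability and memory figures quoted above lend themselves to a quick back-of-envelope check. A minimal sketch, using nothing Google-specific (the 365.25-day year is my own convention):

```python
# Sanity checks on two figures from the article: 99.999% fleet
# availability, and 1.77 PB of HBM shared across a 9,216-chip pod.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Expected downtime per year at a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

# "Five nines" works out to about 5.3 minutes a year, consistent
# with the article's "less than six minutes" framing.
five_nines_downtime = downtime_minutes_per_year(0.99999)

# 1.77 PB across 9,216 chips comes to roughly 192 GB of HBM per chip.
hbm_per_chip_gb = 1.77e15 / 9216 / 1e9

print(f"{five_nines_downtime:.1f} min/yr downtime, ~{hbm_per_chip_gb:.0f} GB HBM/chip")
```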
The company specifically cited TPUs&#x27; &quot;price-performance and efficiency&quot; as key factors in the decision, along with &quot;existing experience in training and serving its models with TPUs.&quot;</p><p>Industry analysts estimate that a commitment to access one million TPU chips, with associated infrastructure, networking, power, and cooling, likely represents a <a href="https://www.reuters.com/technology/anthropic-expand-use-google-clouds-tpu-chips-2025-10-23/"><u>multi-year contract worth tens of billions of dollars</u></a> — among the largest known cloud infrastructure commitments in history.</p><p>James Bradbury, Anthropic&#x27;s head of compute, elaborated on the inference focus: &quot;Ironwood&#x27;s improvements in both inference performance and training scalability will help us scale efficiently while maintaining the speed and reliability our customers expect.&quot;</p><h3><b>Google&#x27;s Axion processors target the computing workloads that make AI possible</b></h3><p>Alongside <a href="https://cloud.google.com/resources/ironwood-tpu-interest?utm_source=cgc-blog&amp;utm_medium=blog&amp;utm_campaign=FY25-Q2-global-ENT33820-website-cs-ironwood-tpu-interest&amp;utm_content=ironwood_announcement_blog&amp;utm_term=ironwood&amp;hl=en"><u>Ironwood</u></a>, Google introduced expanded options for its <a href="https://cloud.google.com/products/axion?hl=en"><u>Axion processor family</u></a> — custom Arm-based CPUs designed for general-purpose workloads that support AI applications but don&#x27;t require specialized accelerators.</p><p>The <a href="http://forms.gle/HYY5FWRKewYuDMB27"><u>N4A instance type</u></a>, now entering preview, targets what Google describes as &quot;microservices, containerized applications, open-source databases, batch, data analytics, development environments, experimentation, data preparation and web serving jobs that make AI applications possible.&quot; The company claims N4A delivers up to 2X better price-performance than 
comparable current-generation x86-based virtual machines.</p><p>Google is also <a href="https://docs.google.com/forms/d/e/1FAIpQLSd14sMYz79SeRI665dM7lnUbsAg7zilVPdDfK2_6u1vBmiUfg/viewform?usp=send_form"><u>previewing C4A metal</u></a>, its first bare-metal Arm instance, which provides dedicated physical servers for specialized workloads such as Android development, automotive systems, and software with strict licensing requirements.</p><p>The Axion strategy reflects a growing conviction that the future of computing infrastructure requires both specialized AI accelerators and highly efficient general-purpose processors. While a TPU handles the computationally intensive task of running an AI model, Axion-class processors manage data ingestion, preprocessing, application logic, API serving, and countless other tasks in a modern AI application stack.</p><p>Early customer results suggest the approach delivers measurable economic benefits. Vimeo reported observing &quot;a 30% improvement in performance for our core transcoding workload compared to comparable x86 VMs&quot; in initial N4A tests. ZoomInfo measured &quot;a 60% improvement in price-performance&quot; for data processing pipelines running on Java services, according to Sergei Koren, the company&#x27;s chief infrastructure architect.</p><h3><b>Software tools turn raw silicon performance into developer productivity</b></h3><p>Hardware performance means little if developers cannot easily harness it. 
Google emphasized that <a href="https://cloud.google.com/resources/ironwood-tpu-interest?utm_source=cgc-blog&amp;utm_medium=blog&amp;utm_campaign=FY25-Q2-global-ENT33820-website-cs-ironwood-tpu-interest&amp;utm_content=ironwood_announcement_blog&amp;utm_term=ironwood&amp;hl=en"><u>Ironwood</u></a> and <a href="https://cloud.google.com/products/axion?hl=en"><u>Axion</u></a> are integrated into what it calls <a href="https://cloud.google.com/solutions/ai-hypercomputer"><u>AI Hypercomputer</u></a> — &quot;an integrated supercomputing system that brings together compute, networking, storage, and software to improve system-level performance and efficiency.&quot;</p><p>According to an October 2025 IDC Business Value Snapshot study, AI Hypercomputer customers achieved on average 353% three-year return on investment, 28% lower IT costs, and 55% more efficient IT teams.</p><p>Google disclosed several software enhancements designed to maximize Ironwood utilization. <a href="https://cloud.google.com/kubernetes-engine?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=na-US-all-en-dr-bkws-all-all-trial-e-dr-1710134&amp;utm_content=text-ad-none-any-DEV_c-CRE_772251321321-ADGP_Hybrid+%7C+BKWS+-+EXA+%7C+Txt-AppMod-GKE-Kubernetes+Engine-KWID_369526655975-kwd-369526655975&amp;utm_term=KW_google+kubernetes+engine-ST_google+kubernetes+engine&amp;gclsrc=aw.ds&amp;gad_source=1&amp;gad_campaignid=23052915519&amp;gclid=Cj0KCQiAiKzIBhCOARIsAKpKLAMMFmNaZgmWnQ3CYrziXElfmMXmQphYqoSvICvf6jfUjLqR9XqFt3oaArkYEALw_wcB&amp;hl=en"><u>Google Kubernetes Engine</u></a> now offers advanced maintenance and topology awareness for TPU clusters, enabling intelligent scheduling and highly resilient deployments. 
The company&#x27;s <a href="https://github.com/AI-Hypercomputer/maxtext"><u>open-source MaxText framework</u></a> now supports advanced training techniques including Supervised Fine-Tuning and Generative Reinforcement Policy Optimization.</p><p>Perhaps most significant for production deployments, Google&#x27;s <a href="https://docs.cloud.google.com/kubernetes-engine/docs/concepts/about-gke-inference-gateway"><u>Inference Gateway</u></a> intelligently load-balances requests across model servers to optimize critical metrics. According to Google, it can reduce time-to-first-token latency by 96% and serving costs by up to 30% through techniques like prefix-cache-aware routing.</p><p>The <a href="https://docs.cloud.google.com/kubernetes-engine/docs/concepts/about-gke-inference-gateway"><u>Inference Gateway</u></a> monitors key metrics including KV cache hits, GPU or TPU utilization, and request queue length, then routes incoming requests to the optimal replica. For conversational AI applications where multiple requests might share context, routing requests with shared prefixes to the same server instance can dramatically reduce redundant computation.</p><h3><b>The hidden challenge: powering and cooling one-megawatt server racks</b></h3><p>Behind these announcements lies a massive physical infrastructure challenge that Google addressed at the recent <a href="https://www.opencompute.org/summit/emea-summit"><u>Open Compute Project EMEA Summit</u></a>. The company disclosed that it&#x27;s implementing +/-400 volt direct current power delivery capable of supporting up to one megawatt per rack — a tenfold increase from typical deployments.</p><p>&quot;The AI era requires even greater power delivery capabilities,&quot; explained Madhusudan Iyengar and Amber Huffman, Google principal engineers, in an <a href="https://cloud.google.com/blog/topics/systems/enabling-1-mw-it-racks-and-liquid-cooling-at-ocp-emea-summit"><u>April 2025 blog post</u></a>. 
&quot;ML will require more than 500 kW per IT rack before 2030.&quot;</p><p>Google is collaborating with Meta and Microsoft to standardize electrical and mechanical interfaces for high-voltage DC distribution. The company selected <a href="https://www.opencompute.org/files/OCP18-400VDC-Efficiency-02.pdf"><u>400 VDC</u></a> specifically to leverage the supply chain established by electric vehicles, &quot;for greater economies of scale, more efficient manufacturing, and improved quality and scale.&quot;</p><p>On cooling, Google revealed it will contribute its fifth-generation cooling distribution unit design to the Open Compute Project. The company has deployed liquid cooling &quot;at GigaWatt scale across more than 2,000 TPU Pods in the past seven years&quot; with fleet-wide availability of approximately 99.999%.</p><p>Water can transport approximately 4,000 times more heat per unit volume than air for a given temperature change — critical as individual AI accelerator chips increasingly dissipate 1,000 watts or more.</p><h3><b>Custom silicon gambit challenges Nvidia&#x27;s AI accelerator dominance</b></h3><p>Google&#x27;s announcements come as the AI infrastructure market reaches an inflection point. While Nvidia maintains overwhelming dominance in AI accelerators — holding an estimated 80-95% market share — cloud providers are increasingly investing in custom silicon to differentiate their offerings and improve unit economics.</p><p>Amazon Web Services pioneered this approach with <a href="https://aws.amazon.com/ec2/graviton/"><u>Graviton Arm-based CPUs</u></a> and <a href="https://aws.amazon.com/ai/machine-learning/inferentia/"><u>Inferentia</u></a> / <a href="https://aws.amazon.com/ai/machine-learning/trainium/"><u>Trainium</u></a> AI chips. Microsoft has developed <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/cobalt-overview"><u>Cobalt processors</u></a> and is reportedly working on AI accelerators. 
Google now offers the most comprehensive custom silicon portfolio among major cloud providers.</p><p>The strategy faces inherent challenges. Custom chip development requires enormous upfront investment — often billions of dollars. The software ecosystem for specialized accelerators lags behind Nvidia&#x27;s <a href="https://developer.nvidia.com/about-cuda"><u>CUDA platform</u></a>, which benefits from 15+ years of developer tools. And rapid AI model architecture evolution creates risk that custom silicon optimized for today&#x27;s models becomes less relevant as new techniques emerge.</p><p>Yet Google argues its approach delivers unique advantages. &quot;This is how we built the first TPU ten years ago, which in turn unlocked the invention of the Transformer eight years ago — the very architecture that powers most of modern AI,&quot; the company noted, referring to the seminal <a href="https://arxiv.org/abs/1706.03762"><u>&quot;Attention Is All You Need&quot; paper</u></a> from Google researchers in 2017.</p><p>The argument is that tight integration — &quot;model research, software, and hardware development under one roof&quot; — enables optimizations impossible with off-the-shelf components.</p><p>Beyond Anthropic, several other customers provided early feedback. Lightricks, which develops creative AI tools, reported that early Ironwood testing &quot;makes us highly enthusiastic&quot; about creating &quot;more nuanced, precise, and higher-fidelity image and video generation for our millions of global customers,&quot; said Yoav HaCohen, the company&#x27;s research director.</p><p>Google&#x27;s announcements raise questions that will play out over coming quarters. Can the industry sustain current infrastructure spending, with major AI companies collectively committing hundreds of billions of dollars? Will custom silicon prove economically superior to Nvidia GPUs? 
How will model architectures evolve?</p><p>For now, Google appears committed to a strategy that has defined the company for decades: building custom infrastructure to enable applications impossible on commodity hardware, then making that infrastructure available to customers who want similar capabilities without the capital investment.</p><p>As the AI industry transitions from research labs to production deployments serving billions of users, that infrastructure layer — the silicon, software, networking, power, and cooling that make it all run — may prove as important as the models themselves.</p><p>And if Anthropic&#x27;s willingness to commit to accessing up to one million chips is any indication, Google&#x27;s bet on custom silicon designed specifically for the age of inference may be paying off just as demand reaches its inflection point.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Big Data</category>
            <category>Infrastructure</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/3wiaJuUTbBrUaBp8cSXtO4/81095c7817da6a2967a961ed60356ed4/Ironwood_board.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Salesforce launches AI 'trust layer' to tackle enterprise deployment failures plaguing 80% of projects]]></title>
            <link>https://venturebeat.com/infrastructure/salesforce-launches-ai-trust-layer-to-tackle-enterprise-deployment-failures</link>
            <guid isPermaLink="false">1XqI5Q9sg1C7ADM0mlIz4W</guid>
            <pubDate>Thu, 02 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.salesforce.com/"><u>Salesforce Inc.</u></a> is expanding its artificial intelligence platform with new data management and governance capabilities, aiming to address what the company says is a crisis in enterprise AI adoption where more than 80% of projects fail to deliver meaningful business value.</p><p>The San Francisco-based software giant announced Thursday a suite of new tools designed to create what it calls a &quot;trusted AI foundation&quot; for enterprises struggling with fragmented data, weak governance, and security concerns that have hampered AI deployments across corporate America.</p><p>&quot;We&#x27;re seeing a lot of these AI projects really failing, and a lot of it&#x27;s because customers still have fragmented data, they still have weak governance, they still have poor security,&quot; said Desiree Motamedi, Salesforce&#x27;s senior vice president and chief marketing officer, in an exclusive interview with VentureBeat. &quot;They really want a way that they can bring AI at scale that has the accuracy, the context and the control.&quot;</p><p>The timing of Salesforce&#x27;s announcement comes as the company prepares for its annual <a href="https://www.salesforce.com/dreamforce/"><u>Dreamforce conference</u></a> next week, where CEO Marc Benioff is expected to showcase the company&#x27;s vision for what he calls the &quot;<a href="https://www.salesforce.com/ap/agentforce/agentic-enterprise/"><u>agentic enterprise</u></a>&quot; — workplaces where AI agents work alongside humans across every business function.</p><h2><b>Why most corporate AI initiatives crash and burn before reaching production</b></h2><p>The scale of AI project failures has become a significant concern for enterprise technology leaders. 
According to a <a href="https://www.rand.org/pubs/research_reports/RRA2680-1.html"><u>RAND Corporation study</u></a>, poor data quality, inadequate governance frameworks, and fragmented system integration are the primary culprits behind the high failure rate of corporate AI initiatives.</p><p>This challenge has created both pressure and opportunity for enterprise software providers. While companies face mounting pressure to deploy AI capabilities, many are discovering that their existing data infrastructure isn&#x27;t equipped to support reliable AI applications at scale.</p><p>Salesforce&#x27;s response centers on what Motamedi describes as three core capabilities: ensuring AI outputs are grounded in unified business data, embedding security and compliance controls into every workflow, and connecting AI agents across different platforms and data sources.</p><p>&quot;The Salesforce platform is a $7 billion business,&quot; Motamedi noted, highlighting the significant revenue opportunity the company sees in addressing enterprise AI infrastructure needs. 
&quot;This is a significant opportunity where we&#x27;re seeing meaningful differentiation from other vendors in the market.&quot;</p><h2><b>Inside Salesforce&#x27;s new AI tools designed to fix enterprise data chaos</b></h2><p>The company&#x27;s latest announcements include several technically sophisticated solutions aimed at different aspects of the enterprise AI challenge:</p><p><a href="https://www.salesforce.com/form/demo/data-cloud-demo/?d=7013y000002ExgGAAS&amp;nc=7013y000002EySjAAK&amp;utm_content=7013y000002ExgGAAS&amp;utm_source=google&amp;utm_medium=paid_search&amp;utm_campaign=21124892324&amp;utm_adgroup=161024384060&amp;utm_term=data%20software&amp;utm_matchtype=p&amp;gclsrc=aw.ds&amp;gad_source=1&amp;gad_campaignid=21124892324&amp;gbraid=0AAAAAD4PnrPBb88pPnAYrypU8WWEYdx8g&amp;gclid=CjwKCAjwxfjGBhAUEiwAKWPwDhNDbBp4P2o_OuM2D0m0MWQS3O0LZFu25_L1MoKpBPqYX5M9VTKRUBoC0ScQAvD_BwE"><u>Data Cloud Context Indexing </u></a>represents Salesforce&#x27;s approach to handling unstructured content like contracts, technical diagrams, and decision trees. The system uses what the company calls a &quot;business-aware lens&quot; to help AI agents interpret complex documents within their proper business context.</p><p>&quot;A good example is a field engineer who uploads a schematic for guided troubleshooting,&quot; Motamedi explained. 
&quot;Now they have that capability at their disposal, because it&#x27;s right there in that view.&quot;</p><p><a href="https://www.salesforce.com/form/demo/data-cloud-demo/?d=701ed000003k7J0AAI&amp;nc=701ed000003kDcvAAE&amp;utm_content=701ed000003k7J0AAI&amp;utm_source=google&amp;utm_medium=paid_search&amp;utm_campaign=21124892324&amp;utm_adgroup=176980111604&amp;utm_term=data%20cleansing&amp;utm_matchtype=p&amp;gclsrc=aw.ds&amp;gad_source=1&amp;gad_campaignid=21124892324&amp;gbraid=0AAAAAD4PnrPBb88pPnAYrypU8WWEYdx8g&amp;gclid=CjwKCAjwxfjGBhAUEiwAKWPwDk001xVkCvIob4Vk_cw4ILVfg3wlmxgGaglkOCLSuYHa9RkuO9aJkxoCeS8QAvD_BwE"><u>Data Cloud Clean Rooms</u></a>, now generally available, allows organizations to securely share and analyze data with partners without exposing sensitive information. Using Salesforce&#x27;s &quot;zero copy&quot; technology, companies can collaborate on data analysis without actually moving or duplicating datasets.</p><p>The clean room technology extends beyond traditional advertising applications to sectors like banking, where institutions could &quot;detect fraud, and they want to be able to do it with some of their partners. They could now do it in hours versus weeks,&quot; according to Motamedi.</p><p><a href="https://www.tableau.com/products/tableau-semantics"><u>Tableau Semantics</u></a> addresses one of the most persistent challenges in enterprise data management: ensuring consistent definitions of business metrics across different systems and teams. The AI-powered semantic layer translates raw data into standardized business language.</p><p>&quot;We use terms like ACV or churn that have specific definitions within our organization,&quot; Motamedi said. 
&quot;Making sure AI understands those definitions, and then having a standardized layer across organizations, really makes this seamless for enterprises.&quot;</p><p><a href="https://www.salesforce.com/news/stories/mulesoft-agent-fabric-announcement/"><u>MuleSoft Agent Fabric</u></a> tackles what Salesforce calls &quot;agent sprawl&quot; — the proliferation of AI agents across different platforms and vendors within large organizations. The system provides centralized registration, orchestration, and governance for AI agents regardless of where they were built.</p><h2><b>How Salesforce plans to battle Microsoft, Google and Amazon for AI dominance</b></h2><p>Salesforce&#x27;s comprehensive approach to AI infrastructure positions the company in direct competition with <a href="https://www.microsoft.com/en-us/"><u>Microsoft</u></a>, <a href="https://www.google.com/"><u>Google</u></a>, <a href="https://www.amazon.com/"><u>Amazon</u></a>, and <a href="https://www.servicenow.com/"><u>ServiceNow</u></a>, all of which are vying to become the dominant platform for enterprise AI deployment.</p><p>The company&#x27;s strategy relies heavily on integration advantages that come from building AI capabilities into an existing platform used by thousands of enterprises. &quot;The power of the platform&quot; lies in the fact that &quot;all of this is natively into the platform. So these capabilities are just there, and they work and they work seamlessly together,&quot; Motamedi emphasized.</p><p>This integrated approach contrasts with point solutions that require custom integration work. &quot;Some of these point solutions, if you want these things to work together, you got to build those integrations. 
You got to have developer teams to make that happen,&quot; she noted.</p><p>The company&#x27;s pending $8 billion acquisition of data management company Informatica, expected to close soon, will significantly expand Salesforce&#x27;s capabilities in enterprise metadata management — a critical component for AI accuracy.</p><p>&quot;For the last 26 years, Salesforce has been rooted in our platform approach — we&#x27;ve built the metadata layer from day one,&quot; Motamedi said. &quot;But with Informatica, we&#x27;re going to see metadata across the entire enterprise, and that gives us another layer of accuracy for AI responses.&quot;</p><h2><b>Early enterprise customers reveal the reality of scaling AI in large organizations</b></h2><p>Despite the technical capabilities, Salesforce acknowledges that enterprise AI adoption remains in early stages. The company reports having &quot;over 12,000 live deployments of Agentforce&quot; — its AI agent platform — but Motamedi describes a wide range of organizational readiness.</p><p>&quot;Every company has a mandate right now to figure out how they can incorporate AI,&quot; she said. &quot;We see very interesting ranges from people who are just getting started to people who are like, we&#x27;re going to build like 80 different agents within their organization.&quot;</p><p>Early customer implementations include <a href="https://wa.aaa.com/home"><u>AAA Washington</u></a>, which is using Salesforce&#x27;s unified data foundation to improve member experiences across roadside assistance, insurance, and travel services. 
<a href="https://www.uchicagomedicine.org/"><u>UChicago Medicine</u></a> is leveraging the platform to ensure reliable patient interactions while enabling healthcare staff to focus on complex, human-centered care.</p><p>The maturity curve for enterprise AI adoption means &quot;it&#x27;s going to take a couple years to see it fully, fully embraced, but we already see the path,&quot; according to Motamedi.</p><h2><b>What Salesforce&#x27;s AI governance push means for the future of enterprise software</b></h2><p>The broader implications of Salesforce&#x27;s strategy extend beyond technical capabilities to fundamental questions about how enterprises will manage AI risk and governance. The company&#x27;s emphasis on built-in security and compliance reflects growing corporate awareness that AI deployment without proper controls can create significant business liability.</p><p>Recent incidents involving AI agents accessing sensitive information or providing unreliable outputs have made corporate leaders more cautious about scaling AI initiatives. Salesforce&#x27;s approach of embedding security directly into AI workflows — including automated threat detection partnerships with <a href="https://www.crowdstrike.com/"><u>CrowdStrike</u></a> and <a href="https://www.okta.com/"><u>Okta</u></a>, and built-in HIPAA compliance for healthcare applications — represents an attempt to address these concerns while accelerating adoption.</p><p>However, market skepticism remains. CNBC&#x27;s Jim Cramer <a href="https://finance.yahoo.com/news/jim-cramer-salesforce-gotta-more-180432550.html"><u>recently noted concerns</u></a> about Salesforce&#x27;s performance despite strong quarterly reports, suggesting that investor expectations for AI-driven growth may be outpacing actual business results.</p><p>The company&#x27;s success will ultimately depend on whether it can help enterprises bridge the gap between AI experimentation and production-scale deployment. 
As Motamedi framed it: &quot;We really believe that we have a trust layer for enterprise AI with all of these new announcements, and we&#x27;re really helping companies move from cautious pilots to transformative action.&quot;</p><p>Whether that vision becomes reality will depend on Salesforce&#x27;s ability to prove that integrated platforms can solve enterprise AI&#x27;s trust problem better than the patchwork of point solutions most companies rely on today. In an industry where <a href="https://news.ycombinator.com/item?id=41368935"><u>80% of projects fail</u></a>, the company that finally cracks the code on reliable, scalable enterprise AI could reshape how business gets done — or discover that the technical challenges run deeper than any single platform can solve.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Big Data</category>
            <category>Data management</category>
            <category>Infrastructure</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/48XIfBjEA8DoLNpIystf9B/bd01a15251d9d54f64a7ae8d5d23d1ba/nuneybits_Vector_art_of_layered_data_cloud_12e322ec-5b63-4bec-a9ce-7a255cb1694e.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Slack is giving AI unprecedented access to your workplace conversations]]></title>
            <link>https://venturebeat.com/technology/slack-is-giving-ai-unprecedented-access-to-your-workplace-conversations</link>
            <guid isPermaLink="false">3vwMS0L7i1bmZHzLuwc5LI</guid>
            <pubDate>Wed, 01 Oct 2025 12:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://slack.com/"><u>Slack</u></a> is fundamentally reshaping how artificial intelligence agents access and use enterprise data, launching new platform capabilities that allow developers to tap directly into the rich conversational data flowing through workplace channels — a move that could determine whether Slack or Microsoft Teams becomes the dominant platform for AI-powered work.</p><p>The company announced Wednesday that its new <a href="https://docs.slack.dev/"><u>real-time search API</u></a> and <a href="https://slack.dev/"><u>Model Context Protocol server</u></a> will give third-party developers secure, permission-aware access to Slack&#x27;s vast troves of workplace conversations, messages, and files. The move assumes that conversational data—the informal discussions, decisions, and institutional knowledge that accumulates in workplace chat—will become the fuel that makes AI agents truly useful rather than generic.</p><p>&quot;Agents need more data and real relevance in their answers and actions, and that&#x27;s going to come from context, and that context, frankly, comes from conversations that happen within an enterprise,&quot; Rob Seaman, Slack&#x27;s chief product officer, said in an exclusive interview with VentureBeat. &quot;And the best place for those conversations to happen within an enterprise is Slack.&quot;</p><p>The announcement arrives as enterprise software companies race to embed AI capabilities into their platforms, with mixed results. 
While tools like <a href="https://copilot.microsoft.com/"><u>Microsoft&#x27;s Copilot</u></a> and <a href="https://gemini.google.com/app"><u>Google&#x27;s Gemini</u></a> have generated significant buzz, adoption has been hampered by AI agents that often provide generic responses disconnected from the specific context of how teams actually work.</p><p>Slack&#x27;s approach represents a different philosophy: rather than building AI features in isolation, the company is positioning itself as the foundational layer where AI agents can access the unstructured conversations that contain the real decision-making context of modern organizations.</p><h2><b>How Slack plans to unlock workplace conversation data for AI agents</b></h2><p>The technical capabilities <a href="https://slack.com/"><u>Slack</u></a> unveiled solve what the company describes as a fundamental problem facing the thousands of companies building AI agents: how to make them useful in the actual flow of work rather than as standalone tools that employees must remember to use.</p><p>The <a href="https://docs.slack.dev/"><u>real-time search API</u></a> allows AI applications to query Slack&#x27;s data on behalf of authenticated users, searching across messages, channels, files, and Slack&#x27;s Canvas and Lists features to surface contextually relevant information in real-time. Unlike traditional APIs that require developers to stitch together multiple endpoints, the new system provides a single, focused way to retrieve information based on keywords or natural language prompts.</p><p>&quot;This avoids the necessity of duplicating Slack data between systems, which enables features like federated search,&quot; Seaman explained. 
&quot;So it&#x27;s a much more focused, use-case-based way that keeps the data resident in Slack with proper permissions and provides access to it on demand.&quot;</p><p>The <a href="http://slack.dev"><u>Model Context Protocol server</u></a>, built on an open standard developed by Anthropic, standardizes how large language models and AI agents discover and execute tasks within Slack, reducing the complexity developers face when building integrations across multiple enterprise systems.</p><p>Leading AI companies are already building on these capabilities. Anthropic&#x27;s Claude can now search across Slack workspaces to provide context-aware responses grounded in actual team conversations. Google&#x27;s <a href="https://cloud.google.com/products/agentspace?hl=en"><u>Agentspace platform</u></a> uses the <a href="https://docs.slack.dev/"><u>real-time search API</u></a> to create seamless information flows between Slack and Google&#x27;s AI agents. Perplexity Enterprise now grounds its web search capabilities in team discussions, while Dropbox Dash provides real-time insights across both platforms.</p><h2><b>Why enterprise security concerns may not derail Slack&#x27;s AI ambitions</b></h2><p>The platform&#x27;s security architecture addresses what could be a major concern for enterprise customers: ensuring AI agents only access information that users are authorized to see. Slack&#x27;s approach hinges on authenticated access that respects existing permission structures.</p><p>&quot;The primary way is that information is accessed on behalf of the user,&quot; Seaman said. &quot;When one of these agents makes a call back into Slack, the user authenticates to the agent, which then authenticates to Slack using the user&#x27;s credentials.&quot;</p><p>This means AI agents can only access direct messages, private channels, and public channels that the authenticated user already has permission to view. 
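The delegated-permission model Seaman describes can be illustrated with a minimal sketch. Everything below is hypothetical — the class and method names are invented for illustration and are not Slack's actual API — but it captures the core idea: the agent queries with the user's credentials, and the service only returns results from channels that user already belongs to.

```python
# Illustrative sketch of permission-delegated ("on behalf of the user") search.
# All names here are hypothetical; this is NOT Slack's actual API.
from dataclasses import dataclass, field

@dataclass
class Workspace:
    # channel name -> (set of member user IDs, list of messages)
    channels: dict = field(default_factory=dict)
    # user token -> user ID (stands in for real OAuth token validation)
    tokens: dict = field(default_factory=dict)

    def search(self, user_token: str, query: str) -> list:
        """Return matching messages only from channels the token's user can see."""
        user = self.tokens[user_token]  # authenticate as the end user
        hits = []
        for _name, (members, messages) in self.channels.items():
            if user not in members:
                continue  # the agent inherits the user's visibility, nothing more
            hits += [m for m in messages if query.lower() in m.lower()]
        return hits

ws = Workspace(
    channels={
        "#eng": ({"alice", "bob"}, ["Deploy is frozen until Friday"]),
        "#exec-private": ({"carol"}, ["Deploy budget approved"]),
    },
    tokens={"tok-alice": "alice"},
)

# An agent holding Alice's token sees only what Alice can see:
# "#exec-private" is filtered out before matching even happens.
print(ws.search("tok-alice", "deploy"))
```

The point of the sketch is that permission filtering happens server-side, before relevance matching, so an agent can never widen its view beyond the authenticated user's — the property Slack's security model depends on.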
Additionally, Slack has contractually prohibited the use of API responses for training AI models, addressing concerns about sensitive enterprise data being used to improve third-party AI systems.</p><p>The security model becomes particularly important given Slack&#x27;s central position in enterprise workflows. The platform has become the operational backbone for countless organizations, creating vast repositories of sensitive information that include strategic decisions, confidential discussions, and institutional knowledge that require careful access controls.</p><p>For international customers, Slack maintains <a href="https://slack.com/blog/transformation/introducing-data-residency-for-slack"><u>data residency</u></a> capabilities across multiple regions, processing information locally to meet sovereignty requirements. The company&#x27;s <a href="https://slack.com/pricing"><u>Enterprise Plus</u></a> plan includes comprehensive security and compliance features designed for regulated industries.</p><h2><b>Microsoft Teams faces new pressure as Slack embraces AI ecosystem strategy</b></h2><p>The announcement represents Slack&#x27;s latest move in an increasingly intense competition with <a href="https://www.microsoft.com/en-us/microsoft-teams/group-chat-software"><u>Microsoft Teams</u></a>, which has been aggressively adding AI capabilities through its Copilot platform. While both companies are embedding AI throughout their collaboration platforms, they&#x27;re taking markedly different approaches.</p><p>When asked about the competitive dynamics, Seaman emphasized user experience over feature comparison: &quot;People love to use Slack. So they love the actual end user experience of it. 
They also love to experience their other software in Slack, and so people love approving expense reports in Slack, and they love approving travel requests and creating JIRA tickets, just all from within the flow of their work.&quot;</p><p>Slack&#x27;s strategy appears focused on becoming the integration hub where other software experiences converge, rather than building a comprehensive suite of productivity tools like Microsoft. This approach has already shown results, with the company noting that agentic startups have achieved &quot;10s of 1000s of customers that have it installed in 120 days or less&quot; by building into Slack&#x27;s marketplace.</p><p>The timing also reflects broader market dynamics. Salesforce, which <a href="https://slack.com/blog/news/salesforce-completes-acquisition-of-slack"><u>acquired Slack in 2021</u></a> for $27.7 billion, has been positioning the platform as central to its AI strategy while raising prices across its product portfolio. In June, the company increased <a href="https://slack.com/pricing/businessplus"><u>Slack Business+ pricing</u></a> from $12.50 to $15 per user per month, the second price increase in under 24 months.</p><h2><b>Slack&#x27;s surprising revenue strategy: no fees for AI developers</b></h2><p>Unlike some platform companies that take revenue shares from third-party developers, Slack has chosen not to monetize its AI capabilities through direct fees to partners. Instead, the company&#x27;s revenue model focuses on driving deeper user engagement and retention.</p><p>&quot;We don&#x27;t do a revenue sharing model with our partners,&quot; Seaman said. &quot;The benefit to Slack is that people use more and more of their software within Slack, and users stay engaged on our platform. 
We want them to have a great experience doing their work in Slack.&quot;</p><p>This approach reflects a broader strategic calculation: that by making Slack the most attractive platform for AI development, the company can increase its value as the central nervous system of enterprise work, justifying higher subscription prices and reducing customer churn.</p><p>The strategy appears to be working. Slack reports that over 1.7 million apps are actively used on its platform each week, with 95% of users saying that using an app in Slack makes those tools more valuable.</p><h2><b>What conversational AI could mean for enterprise productivity</b></h2><p>The announcement signals a potential shift in how enterprise AI capabilities will be deployed and experienced. Rather than employees learning to use separate AI tools for different tasks, Slack&#x27;s vision positions AI agents as conversational teammates accessible through the same interface used for human collaboration.</p><p>&quot;You can imagine a time where we&#x27;re all going to have a series of agents at our disposal working on our behalf,&quot; Seaman said. &quot;They&#x27;re going to need to interrupt you. You&#x27;re going to have to interject and actually change what they&#x27;re doing — maybe redirect them completely. And we think Slack is a perfect place to do that.&quot;</p><p>This conversational approach to AI interaction could address one of the biggest challenges facing enterprise AI adoption: the context-switching costs that reduce productivity when employees must move between multiple specialized AI tools. By centralizing AI interactions within existing communication workflows, Slack aims to reduce the cognitive overhead of working with multiple AI agents.</p><p>The platform&#x27;s focus on conversational data also addresses a critical limitation of current enterprise AI systems. 
While many AI tools can access structured data from databases and enterprise software, the informal conversations where real decisions are made and institutional knowledge is shared have largely remained inaccessible to AI systems.</p><h2><b>Behind the scenes: how Slack built infrastructure for real-time AI queries</b></h2><p>Slack has built technical infrastructure designed to handle the demands of real-time AI queries while maintaining performance for its core messaging capabilities. The system includes rate limits for API calls and restrictions on the volume of data that can be returned in response to queries, ensuring that searches remain fast and targeted rather than attempting to process entire conversation histories.</p><p>&quot;When somebody searches over the real-time search API, we&#x27;re not going to return the entire Slack corpus,&quot; Seaman explained. &quot;It&#x27;s going to be super targeted, ranked, and relevant to that particular query. We&#x27;re doing that so we can basically guarantee the fastest response time possible.&quot;</p><p>For developers, the setup process remains straightforward, requiring only the same authentication and app configuration needed for existing Slack integrations. This low barrier to entry could accelerate adoption among the growing ecosystem of AI startups and enterprise software companies looking to embed conversational AI capabilities.</p><p>The success of Slack&#x27;s AI platform expansion will depend on whether enterprises embrace conversational AI as a natural extension of team communication, or whether they prefer more structured approaches offered by competitors. 
As enterprise software companies continue racing to embed AI capabilities, the company that best solves the adoption and context problems may emerge as the foundation for AI-powered work.</p><p>But for now, Slack has made its choice clear: in the battle for AI supremacy, the winner won&#x27;t be determined by the most sophisticated algorithms — it&#x27;ll be whoever controls the conversations.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Big Data</category>
            <category>Enterprise</category>
            <category>Social</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/5k3wlTTbMsmVlMaAJ3XXgj/4e6ae0aad7af803cd31dc3553ee3d854/nuneybits_Vector_art_of_magnifying_glass_over_data_streams_69b8c62e-aa03-4988-82b8-616614764fe2-1.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Perplexity launches massive search API to take on Google’s dominance]]></title>
            <link>https://venturebeat.com/infrastructure/perplexity-launches-massive-search-api-to-take-on-googles-dominance</link>
            <guid isPermaLink="false">1c2p3TNc0oTmCwJwxwjsTP</guid>
            <pubDate>Thu, 25 Sep 2025 11:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.perplexity.ai/"><u>Perplexity AI</u></a> launched a comprehensive <a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>search application programming interface</u></a> on Thursday, giving developers direct access to the same massive web index that powers the startup&#x27;s answer engine and potentially breaking the stranglehold that tech giants have maintained over global search data.</p><p>The <a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>Search API</u></a> poses the most significant challenge yet to Google&#x27;s dominance in providing search infrastructure to developers, offering access to an index spanning hundreds of billions of web pages with real-time updates and AI-optimized results formatting. The move comes as Perplexity positions itself as a disruptive force in the search industry, following its audacious <a href="https://www.cnbc.com/2025/08/12/perplexity-google-chrome-ai.html"><u>$34.5 billion bid for Google&#x27;s Chrome browser</u></a> in August.</p><p>&quot;Legacy search engines have kept developers beholden to their interests, namely favoring commercial intent traffic over helpful content,&quot; said Beejoli Shah, a spokesperson for Perplexity. The company argues that established players have systematically limited developer access to search indexes while newer startups lack the scale to provide meaningful alternatives.</p><h2><b>How Perplexity plans to end Google&#x27;s search data stranglehold</b></h2><p>The launch addresses a critical infrastructure gap that has emerged as artificial intelligence applications proliferate. Developers building AI-powered products have struggled to access high-quality, comprehensive search data without relying on Google&#x27;s increasingly restrictive APIs or Microsoft&#x27;s Bing search infrastructure. 
Traditional search providers have tightened access controls and frequently discontinued services that developers depended on, forcing many to build inferior products or abandon projects entirely.</p><p><a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>Perplexity&#x27;s API</u></a> differentiates itself through several technical innovations designed specifically for the AI era. The system processes tens of thousands of updates per second, making new content searchable within seconds rather than the hours or days typical of traditional search engines. This real-time capability addresses one of the most persistent problems in search: content staleness.</p><p>The API also implements what Perplexity calls &quot;<a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>sub-document precision</u></a>,&quot; identifying and ranking specific passages within web pages rather than entire documents. This approach aligns with how large language models consume information, providing more targeted and contextually relevant results than conventional search systems that return lists of links.</p><h2><b>Real-time indexing and AI-powered results: the technical edge</b></h2><p>The underlying infrastructure combines keyword and semantic search capabilities, enabling what Perplexity terms &quot;<a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>hybrid retrieval</u></a>.&quot; This approach allows the system to understand complex, conversational queries while maintaining the precision of traditional keyword matching. Results are returned in a structured, citation-rich format specifically designed for integration with AI applications and traditional web services.</p><p>&quot;Instead of just links, Search API surfaces the most relevant snippets from pages and sub-pages, ensuring that users get the most contextual answers possible, with source attribution built-in,&quot; the company explained. 
This citation system addresses growing concerns about AI applications that provide information without crediting original sources, potentially benefiting content creators who have seen their work reproduced without attribution.</p><p>To support developer adoption, Perplexity has launched a comprehensive <a href="https://www.perplexity.ai/api-platform"><u>API platform</u></a> housing developer consoles and documentation for both its Search and Sonar APIs. The company also released an open-source evaluation framework called &quot;<a href="https://github.com/perplexityai/search_evals/"><u>search_evals</u></a>&quot; that allows developers to benchmark any search API for quality and performance before committing resources.</p><h2><b>From answer engine to tech giant: Perplexity&#x27;s billion-dollar ambitions</b></h2><p>The <a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>Search API</u></a> launch continues Perplexity&#x27;s rapid expansion beyond its core answer engine product. Founded in 2022 by alumni from <a href="https://openai.com/"><u>OpenAI</u></a>, <a href="https://www.meta.com/"><u>Meta</u></a>, and <a href="https://www.quora.com/"><u>Quora</u></a>, the San Francisco-based company has evolved from a simple AI-powered search interface into a comprehensive platform challenging multiple aspects of how people interact with information online.</p><p>Recent moves underscore the company&#x27;s ambitions. In September, Perplexity launched an <a href="https://venturebeat.com/ai/perplexitys-new-ai-agent-wants-to-replace-your-email-habits-for-usd200-per"><u>AI email assistant exclusively for its $200-per-month Max subscribers</u></a>, offering automated email management, meeting scheduling, and response drafting. 
The company also introduced the Comet browser, built on the Chromium framework with AI features integrated throughout the browsing experience.</p><p>Most notably, Perplexity made headlines in August with its unsolicited <a href="https://www.reuters.com/business/media-telecom/ai-startup-perplexity-makes-bold-345-billion-bid-googles-chrome-browser-2025-08-12/"><u>$34.5 billion offer to acquire Google&#x27;s Chrome browser</u></a>, a bid that exceeded the company&#x27;s own $18 billion valuation at the time. While analysts dismissed the offer as unlikely to succeed, it demonstrated Perplexity&#x27;s willingness to make bold moves in challenging established players.</p><p>The company has attracted significant investor interest, ranking 27th on CNBC&#x27;s <a href="https://www.cnbc.com/cnbc-disruptors/"><u>2025 Disruptor 50</u></a> list. Meta reportedly approached Perplexity about a <a href="https://www.cnbc.com/2025/06/20/meta-perplexity-scale-ai-deal.html"><u>potential acquisition</u></a> earlier this year, though negotiations did not result in a deal. Instead, Meta pursued a $14.3 billion investment in Scale AI, another AI infrastructure company.</p><h2><b>Google&#x27;s antitrust troubles create opening for search challengers</b></h2><p>The timing of Perplexity&#x27;s <a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>Search API</u></a> launch coincides with increasing regulatory pressure on Google&#x27;s search dominance. The Department of Justice has proposed that <a href="https://www.wired.com/story/the-doj-still-wants-google-to-divest-chrome/"><u>Google divest Chrome</u></a> as part of antitrust remedies following a court ruling that found the company maintains an illegal monopoly in internet search. 
This regulatory environment may create opportunities for alternative search providers to gain market share.</p><p>Industry analysts have valued Google&#x27;s various business units separately, with estimates suggesting Chrome alone could be <a href="https://www.cnbc.com/2025/08/13/google-antitrust-chrome-perplexity-ai-youtube.html"><u>worth $50 billion</u></a> based on its user base and integration with Google&#x27;s advertising ecosystem. YouTube is valued between $271 billion and $550 billion by different analysts, while Google Cloud is estimated at $549 billion to $682 billion.</p><p>Perplexity&#x27;s approach differs fundamentally from Google&#x27;s advertising-driven model. By charging developers directly for API access rather than monetizing through advertising, the company avoids some of the conflicts of interest that critics argue have degraded search quality. This model aligns Perplexity&#x27;s incentives with providing accurate, helpful information rather than driving commercial traffic.</p><h2><b>Why even AI-powered search still needs human oversight</b></h2><p>Despite its technical innovations, Perplexity&#x27;s <a href="https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api"><u>Search API</u></a> faces significant challenges in competing with Google&#x27;s two-decade head start in search technology. Google processes billions of queries daily and has refined its algorithms through massive scale and continuous user feedback. The company&#x27;s infrastructure spans the globe with sophisticated caching, content delivery networks, and specialized hardware optimized for search workloads.</p><p>Perplexity acknowledges that its AI-powered approach has limitations requiring human oversight. 
AI-generated summaries and recommendations need manual verification for accuracy and relevance, and the system may not always surface the most appropriate results for traditional keyword searches as effectively as Google&#x27;s mature algorithms.</p><p>The company also faces ongoing legal challenges. <a href="https://www.reuters.com/legal/litigation/encyclopedia-britannica-sues-perplexity-over-ai-answer-engine-2025-09-11/"><u>Encyclopedia Britannica sued Perplexity</u></a> in September over its AI answer engine, alleging copyright infringement and unfair competition. These legal battles highlight broader questions about how AI companies can use copyrighted content to train models and generate responses.</p><h2><b>What Perplexity&#x27;s API launch means for the future of search</b></h2><p>For the first time since Google&#x27;s rise to dominance, developers have access to a genuinely competitive alternative for global-scale search data. The success or failure of Perplexity&#x27;s gambit will likely determine whether the next generation of AI applications will be built on diverse, competitive infrastructure or remain dependent on a handful of tech giants.</p><p>Early adoption by enterprise customers could validate Perplexity&#x27;s approach and encourage other companies to challenge established search providers. The company&#x27;s emphasis on citation and source attribution may prove particularly appealing to businesses requiring verifiable information sources for AI applications.</p><p>The broader implications extend beyond search itself. 
If Perplexity succeeds in democratizing access to comprehensive web data, it could accelerate innovation in AI applications, reduce development costs for startups, and create new possibilities for how people discover and interact with information online.</p><p>As artificial intelligence reshapes the digital landscape, Perplexity&#x27;s bold challenge to Google&#x27;s search monopoly raises a fundamental question: In an AI-driven future, who will control the keys to the world&#x27;s information — and will anyone be powerful enough to take them away?</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Big Data</category>
            <category>Data management</category>
            <category>Infrastructure</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/TJzhOSjHWACLlqpbvzUQJ/028aabb0e8ca7a5c1f9d8b0e3d02d5ec/nuneybits_Vector_art_of_magnifying_glass_pixel_d2f4bcc0-7d79-4671-b1e6-13421527978a.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Brex turns accounting into a one-click setup with Puzzle integration for startups]]></title>
            <link>https://venturebeat.com/technology/brex-turns-accounting-into-a-one-click-setup-with-puzzle-integration-for</link>
            <guid isPermaLink="false">35UGSKn0aChH5nHsWoUwyf</guid>
            <pubDate>Wed, 24 Sep 2025 20:52:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.brex.com/"><u>Brex Inc.</u></a> and artificial intelligence accounting platform <a href="https://puzzle.io/"><u>Puzzle</u></a> announced Tuesday <a href="https://puzzle.io/blog/brex-puzzle-set-up-your-accounting-in-one-click"><u>a partnership</u></a> that reduces startup accounting setup from a weeks-long process to a single click, addressing what executives describe as a critical but often overlooked barrier to startup success.</p><p>The integration, <a href="https://www.brex.com/one-click-accounting"><u>available immediately</u></a> to Brex&#x27;s more than 30,000 customers, allows founders to establish complete accounting systems directly within their existing Brex dashboard without switching platforms or manual data entry. The partnership marks Brex&#x27;s evolution from a corporate credit card provider to a comprehensive financial operating system for growing businesses.</p><p>&quot;What we saw time and time again is that founders don&#x27;t connect an ERP—not because they don&#x27;t want to, but because they don&#x27;t have one,&quot; said Jason Mok, VP and General Manager at Brex, in an exclusive interview with VentureBeat. 
&quot;And the reason they don&#x27;t have one is because it&#x27;s such a laborious process to get an ERP, set it up, create your general ledger accounts, write rules, and so on.&quot;</p><p>The partnership addresses a persistent pain point in the startup ecosystem: while founders can quickly establish banking relationships and obtain corporate credit cards, setting up proper accounting systems has remained a complex, expensive process that many defer until it becomes critical for fundraising or compliance.</p><h2><b>How startups have struggled with expensive, time-consuming accounting setup for decades</b></h2><p>Traditionally, startup accounting setup required founders to interview multiple bookkeepers, navigate sales processes, obtain quotes, and grant access to financial credentials across various platforms — a process that typically took four to six weeks and cost upward of $5,000 monthly just to get started, according to Puzzle CEO Sasha Orloff.</p><p>&quot;There are two conversations I have almost daily,&quot; Orloff explained in a sit-down interview with VentureBeat. &quot;One is with first-time founders who ask, &#x27;Why is accounting important?&#x27; The other is with second-time founders who say, &#x27;I get it. Set me up.&#x27;&quot;</p><p>The timing problem proves particularly acute because accounting becomes essential precisely when startups need it most urgently — during fundraising rounds, tax season, or acquisition discussions.</p><p>&quot;Bad accounting will lower your valuation, or completely derail a deal,&quot; Orloff said. 
&quot;By the time you need to clean books, it&#x27;s already too late.&quot;</p><h2><b>Inside the AI-powered technology that makes instant accounting integration possible</b></h2><p>The technical foundation for the partnership rests on <a href="https://www.brex.com/product/api"><u>APIs that Brex developed</u></a> specifically to enable such integrations — infrastructure that previously didn&#x27;t exist in traditional banking.</p><p>&quot;Brex wrote the API to enable this to happen,&quot; Orloff explained. &quot;There wasn&#x27;t an API for credit cards, there wasn&#x27;t an API for banks. There wasn&#x27;t an API for Treasury. There wasn&#x27;t an API for reimbursements. There wasn&#x27;t an API for invoicing. This just didn&#x27;t exist.&quot;</p><p>When a Brex customer clicks the <a href="https://www.brex.com/one-click-accounting"><u>accounting tab</u></a> in their dashboard and selects Puzzle, the system automatically creates a Puzzle account, maps expense categories to the appropriate general ledger accounts, and begins syncing transaction data in real-time. The integration includes metadata like receipts, memos, and transaction context that enables AI-powered categorization and compliance checking.</p><p>Puzzle&#x27;s AI system can provide what Orloff calls different &quot;modes&quot; of financial analysis — including &quot;Steve Jobs mode&quot; for direct feedback, &quot;VC mode&quot; for investor presentations, or &quot;friendly mode&quot; for positive reinforcement during challenging periods.</p><p>&quot;In the privacy of your own browser, you can ask, &#x27;Tell me what I&#x27;m doing well and what I&#x27;m doing poorly,&#x27;&quot; Orloff said. 
&quot;There&#x27;s this emotional fear when you&#x27;re a founder—&#x27;I don&#x27;t really know accounting, I don&#x27;t know how to speak finance&#x27;—but now we can deliver those insights to you through AI.&quot;</p><h2><b>Why both companies see founder success as the key to their own growth strategies</b></h2><p>The partnership reflects aligned business models where both companies benefit from startup success. <a href="https://www.brex.com/"><u>Brex</u></a> generates more revenue as companies grow and spend more, while <a href="https://puzzle.io/"><u>Puzzle&#x27;s</u></a> automated accounting becomes more valuable as transaction volumes increase.</p><p>Mok, whose career spans Silicon Valley Bank, Andreessen Horowitz, and now Brex, emphasized the strategic importance of solving founder problems before they become critical.</p><p>&quot;We want to solve a founder&#x27;s problems today, and we want to solve problems before they happen,&quot; Mok said. &quot;That will earn loyalty, and it takes a lot of trust, ambition, and foresight on our end to say, &#x27;I&#x27;m going to solve this problem before they even realize it&#x27;s a problem.&#x27;&quot;</p><p>The integration launched to early access users last week, with Mok reporting 21-22 signups within the first 24 hours through organic adoption alone.</p><h2><b>How Brex is building a financial operating system to compete with traditional banking</b></h2><p>The partnership comes as Brex continues expanding beyond its corporate credit card origins into a comprehensive financial platform. 
Recent partnerships include travel booking with <a href="https://www.brex.com/brexpay"><u>Navan</u></a> and procurement with <a href="https://www.brex.com/journal/press/brex-for-zip"><u>Zip</u></a>, signaling the company&#x27;s broader ambitions to become what Mok calls a &quot;financial operating system.&quot;</p><p>Unlike traditional accounting software that requires companies to migrate between platforms as they grow — from Excel to QuickBooks to NetSuite — both <a href="https://www.brex.com/"><u>Brex</u></a> and <a href="https://puzzle.io/"><u>Puzzle</u></a> are designed to scale with companies from incorporation through significant revenue milestones. Puzzle currently serves companies generating up to $35 million in annual revenue.</p><p>The accounting software market has long been characterized by fragmentation, with different solutions targeting different business stages. This partnership attempts to solve what Orloff describes as an industry anomaly.</p><p>&quot;When you look at the accounting market, that&#x27;s just not a thing—there&#x27;s no option to scale from startup to large company on a single platform,&quot; Orloff said. &quot;If you&#x27;re a very small company, you might use Excel or Xero. As you grow into a real business, you move to QuickBooks. When you become successful, you switch to NetSuite.&quot;</p><h2><b>What real-time financial data could mean for startup success rates and venture capital</b></h2><p>The broader implications extend beyond operational convenience. According to both executives, proper accounting setup from day one could fundamentally improve startup success rates by providing founders with real-time financial visibility instead of monthly summaries.</p><p>&quot;What if we could prove together that this partnership helps make startups more successful, just the function of having daily understanding of your financial health?&quot; Orloff said. 
&quot;What if we could teach you through software and AI and design that we can help make your business better?&quot;</p><p>The partnership also reflects evolving expectations in the fintech sector, where companies increasingly seek to reduce friction through integration rather than building every capability in-house.</p><p>&quot;The thing we&#x27;ve learned at Brex over time is lean into the things you&#x27;re really good at, and then enable others and empower others, and partner with others where they have their strong suit,&quot; Mok explained.</p><p>For the venture capital ecosystem, the development could prove significant if it enables more accurate financial reporting from portfolio companies and reduces the due diligence burden during funding rounds.</p><p>The integration reflects a broader bet that removing friction from essential business processes creates competitive advantages that extend far beyond individual transactions. As Mok put it, using a passport metaphor: &quot;It&#x27;s something you know you eventually need. You may or may not need it right now. It&#x27;s a pain in the ass to get done, and you can build so much gratitude upfront if you remove that friction.&quot;</p><p>For founders who have experienced the anxiety of managing finances on spreadsheets while trying to build companies, automated, AI-powered accounting from day one is more than operational efficiency—it&#x27;s a fundamental shift in how startups approach financial management from inception.</p><p>&quot;It was so fucking hard when I started my first two companies,&quot; Orloff said, &quot;and now it&#x27;s literally not — if you can&#x27;t click on a website, like, click a screen like, that&#x27;s on you.&quot;</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Big Data</category>
            <category>Fintech</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/7E5mHyvMDbqSykVkM3mDs5/0cc1962b651b192dd3bf24b3f6809c09/nuneybits_Vector_art_of_a_credit_card_made_of_puzzle_pieces_tra_02ae2bf5-a70d-4dd1-acbd-1e13745c8d14.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The $1 trillion AI problem: Why Snowflake, Tableau and BlackRock are giving away their data secrets]]></title>
            <link>https://venturebeat.com/infrastructure/the-usd1-trillion-ai-problem-why-snowflake-tableau-and-blackrock-are-giving</link>
            <guid isPermaLink="false">5uzNNzEuqQL7lvIOK2g0QQ</guid>
            <pubDate>Tue, 23 Sep 2025 12:59:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.snowflake.com/en/"><u>Snowflake</u></a>, <a href="https://www.salesforce.com/"><u>Salesforce</u></a>, <a href="https://www.getdbt.com/"><u>dbt Labs</u></a> and more than a dozen other technology companies announced Tuesday they will create a universal standard for how business data is defined and shared across platforms — solving what executives call AI&#x27;s most fundamental bottleneck.</p><p>The <a href="https://www.snowflake.com/en/blog/open-semantic-interchange-ai-standard/"><u>Open Semantic Interchange</u></a> (OSI) initiative brings together fierce competitors who have concluded that inconsistent data definitions across enterprise systems block AI scalability. The effort includes backing from <a href="https://www.blackrock.com/us/individual"><u>BlackRock</u></a> and participation from companies including <a href="https://www.alation.com/"><u>Alation</u></a>, <a href="https://atlan.com/"><u>Atlan</u></a>, <a href="https://blueyonder.com/"><u>Blue Yonder</u></a>, <a href="https://cube.dev/"><u>Cube</u></a>, <a href="https://hex.tech/"><u>Hex</u></a>, <a href="https://honeydew.ai/"><u>Honeydew</u></a>, <a href="https://mistral.ai/"><u>Mistral AI</u></a>, <a href="https://omni.co/"><u>Omni</u></a>, <a href="https://relational.ai/"><u>RelationalAI</u></a>, <a href="https://www.selectstar.com/"><u>Select Star</u></a>, <a href="https://www.sigmacomputing.com/"><u>Sigma</u></a>, and <a href="https://www.thoughtspot.com/"><u>ThoughtSpot</u></a>. Together, they will establish the first vendor-neutral specification for semantic metadata — a Rosetta Stone for business data.</p><p>&quot;We&#x27;re not in the business of locking data in, we&#x27;re in the business of making it accessible and valuable,&quot; Christian Kleinerman, Snowflake&#x27;s executive vice president of product, told VentureBeat in an exclusive interview. 
&quot;The biggest barrier our customers face when it comes to ROI from AI isn&#x27;t a competitor — it&#x27;s data fragmentation.&quot;</p><h2><b>Every AI model fails when sales and marketing can&#x27;t agree what &#x27;customer&#x27; means</b></h2><p>The initiative tackles a problem that has plagued enterprises since the dawn of business computing but now threatens AI adoption: Every software system defines business metrics differently. A retailer&#x27;s sales platform might classify an &quot;active customer&quot; as someone who purchased within 90 days, while its marketing system defines the same term as anyone who engaged with content in the past month. AI models trained on both systems produce unreliable predictions and destroy trust in AI-generated insights.</p><p>&quot;Picture a business training AI models to predict something like customer churn,&quot; Kleinerman explained. &quot;When an AI model pulls data from both [systems with different definitions], it&#x27;s going to end up with conflicting definitions. That inconsistency makes AI less accurate and harder to scale.&quot;</p><p>This semantic chaos costs enterprises millions. Data and AI teams spend weeks reconciling conflicting definitions and reformatting data before AI projects can begin — driving up operational costs and delaying time-to-market for AI applications. Many enterprises find the promise of AI as a productivity multiplier destroyed by the manual labor required to prepare inconsistent data.</p><h2><b>Tableau and Snowflake put competition aside to fix the data ecosystem</b></h2><p>The collaboration breaks traditional competitive dynamics in enterprise software. 
<a href="https://www.tableau.com/"><u>Tableau</u></a>, Salesforce&#x27;s business intelligence division that competes directly with several OSI participants, co-leads the initiative alongside Snowflake.</p><p>&quot;This initiative is transformative because it&#x27;s not about one company owning the standard—it&#x27;s about the industry coming together,&quot; Southard Jones, Tableau&#x27;s chief product officer, told VentureBeat in an exclusive interview. &quot;The future of AI depends on trust — and trust starts with consistent, reliable data.&quot;</p><p>Jones revealed that Tableau will contribute its blueprint for a vendor-neutral semantic layer, built on decades of experience creating business intelligence tools. &quot;Our work has always been about giving data clear business meaning — defining metrics, business logic, and context in a way that people across the enterprise can trust. With OSI, we&#x27;re taking that knowledge and codifying it into an open standard.&quot;</p><p>The decision to pursue an open, collaborative approach acknowledges that proprietary semantic standards have failed. &quot;What makes it a first of its kind is its focus on SQL-based analytical models and its inclusion of AI-specific metadata, such as custom instructions and synonyms,&quot; Kleinerman noted. Existing metadata standards like RDF and OWL lack the compilation engines necessary for modern AI applications.</p><h2><b>The technical blueprint promises immediate compatibility with existing tools</b></h2><p>OSI targets the semantic layer — the business meaning of data rather than just its technical properties. The specification uses YAML file definitions, enabling immediate compatibility with existing tools like dbt&#x27;s <a href="https://www.getdbt.com/blog/semantic-layer-introduction"><u>Semantic Layer</u></a>.</p><p>&quot;Our support for this will be almost-immediate,&quot; Ryan Segar, dbt Labs&#x27; chief product officer, told VentureBeat. 
&quot;Data and analytics engineers will now be able to work with the confidence that their work will be leverageable across the data ecosystem. Re-work and double work will be a thing of the past.&quot;</p><p>The standard includes AI-specific features such as natural language synonyms and business terms. &quot;Today, AI models are often forced to infer relationships from raw metadata, which can lead to misinterpretations and inaccurate outputs,&quot; explained Francois Lopitaux, ThoughtSpot&#x27;s senior vice president of product management, in an exclusive interview. &quot;By providing a universal, open standard, the OSI will give AI agents—including our own Spotter—a common language to understand business context.&quot;</p><h2><b>Major enterprises demand solutions as AI investments stall on bad data</b></h2><p>Enterprise demand for AI capabilities drives the urgency behind OSI. Snowflake reported that nearly <a href="https://www.snowflake.com/en/news/press-releases/snowflake-reports-financial-results-for-the-second-quarter-of-fiscal-2026/"><u>half of new customers in Q2 fiscal 2026</u></a> chose the platform for AI capabilities, with over 6,100 customers using its AI offerings weekly. The company <a href="https://www.snowflake.com/en/news/press-releases/snowflake-reports-financial-results-for-the-second-quarter-of-fiscal-2026/"><u>surpassed $1 billion</u></a> in quarterly revenue for the first time in May, driven largely by AI-related demand.</p><p>A dedicated partner taskforce has formed to deliver the first OSI specification, though executives declined to provide a specific timeline. &quot;Initial customer response to OSI has been overwhelmingly positive,&quot; Kleinerman said, noting strong interest from organizations wanting early adoption.</p><p><a href="https://www.blackrock.com/"><u>BlackRock</u></a> sees immediate applications for the standard in financial services. 
&quot;The Aladdin platform unifies the investment management process through a common data language across public and private markets,&quot; said Diwakar Goel, BlackRock&#x27;s global head of Aladdin Data. &quot;We are excited to be part of the Open Semantic Interchange to help establish a common, vendor-neutral specification that will not only streamline data exchange but also accelerate the adoption of AI and business intelligence applications across the financial industry.&quot;</p><h2><b>Standardized data definitions will intensify rather than reduce competition</b></h2><p>The initiative changes how software companies will compete. Executives argue that standardization will intensify competition by shifting the battleground from data definitions to innovation in user experience and AI capabilities.</p><p>&quot;Standardization isn&#x27;t a commoditizer — it&#x27;s a catalyst,&quot; Jones argued. &quot;Think of it like a standard electrical outlet: the outlet itself isn&#x27;t the innovation, it&#x27;s what you plug into it. Our focus is on being the most intelligent, intuitive, and powerful &#x27;appliance&#x27; you can connect to your data.&quot;</p><p>Tableau plans to accelerate development of what Jones calls &quot;agentic analytics&quot;—AI agents that surface context, highlight opportunities, flag risks, and suggest next steps rather than just reporting numbers. 
&quot;Semantic definitions transform AI agents from static tools into analytical partners,&quot; he said.</p><p>ThoughtSpot&#x27;s Lopitaux agreed: &quot;While OSI will set a vendor-agnostic industry standard to semantic layers, we will continue to compete on product innovation, user experience across our entire platform, and delivering unprecedented customer value.&quot;</p><h2><b>The industry bets its future on cooperation over control</b></h2><p>OSI&#x27;s success depends on maintaining vendor-neutral governance — a challenge given the participating companies&#x27; varying market positions and strategic interests. &quot;The whole point of OSI is that no single vendor controls it,&quot; Kleinerman emphasized. &quot;Every member is responsible for maintaining their own mappings and integrations, and the value comes from the shared framework, not from any one company&#x27;s implementation.&quot;</p><p>Enterprise customers stand to gain the most: faster AI deployment, greater accuracy, and elimination of manual data reconciliation costs. Companies can preserve existing investments in semantic models while adopting best-of-breed technologies without sacrificing consistency.</p><p>&quot;When semantics are available everywhere, from anywhere, the place where they &#x27;live&#x27; becomes less relevant,&quot; noted dbt Labs&#x27; Segar. &quot;Built anywhere, leveraged everywhere.&quot;</p><p>The technology industry has decided that AI&#x27;s promise demands an unusual sacrifice: giving up proprietary control of how business data gets defined. The companies betting billions on AI have concluded that owning a piece of a working system beats controlling all of a broken one.</p><p>&quot;We encourage and welcome more companies to join,&quot; Kleinerman said, &quot;because the more perspectives at the table, the stronger and more neutral the standard becomes.&quot;</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Big Data</category>
            <category>Enterprise</category>
            <category>Infrastructure</category>
            <category>Software</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/2eS4VfoIOmGMSqzBY952lz/4aee7d7635002d50f41173ecde2d1b0d/nuneybits_Vector_art_of_a_global_alliance_open_source_912a701f-73b6-47ba-b440-341a71779076.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
    </channel>
</rss>