<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Virtual Comms &amp; Collab | VentureBeat</title>
        <link>https://venturebeat.com/category/virtual/feed/</link>
        <description>Transformative tech coverage that matters</description>
        <lastBuildDate>Sat, 04 Apr 2026 07:09:33 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>Copyright 2026, VentureBeat</copyright>
        <item>
            <title><![CDATA[GenLayer launches a new method to incentivize people to market your brand using AI and blockchain]]></title>
            <link>https://venturebeat.com/business/genlayer-launches-a-new-method-to-incentivize-people-to-market-your-brand-using-ai-and-blockchain</link>
            <guid isPermaLink="false">wp-3012657</guid>
            <pubDate>Thu, 19 Jun 2025 10:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.genlayer.com/">GenLayer</a>, a startup building <a href="https://venturebeat.com/ai/genlayer-offers-novel-approach-for-ai-agent-transactions-getting-multiple-llms-to-vote-on-a-suitable-contract/">decentralized legal infrastructure for AI and machine agents</a>, has launched its first incentivized testnet, dubbed Asimov. </p><p>This marks the initial rollout of its multi-phase validator onboarding and technology validation initiative as the company moves closer to mainnet deployment. </p><p>The testnet introduces what GenLayer calls the first Intelligent Blockchain, powered by AI models and designed to resolve subjective decisions typically outside the scope of traditional deterministic blockchains.</p><p>&quot;Our narrative is that as we enter a world of AI agents—fast and smart—we need a new legal system because the current one is fragmented, slow, and expensive,&quot; said GenLayer CEO and co-founder Albert Castellana, adding that GenLayer &quot;offers a synthetic jurisdiction: a legal system for machines.&quot;</p><h2>Combining the best of blockchain and AI</h2><p>Asimov is the first of three sequential testnets in GenLayer’s roadmap, to be followed by Bradbury and Clark. </p><p>The company aims to progressively test and scale its “Optimistic Democracy” consensus mechanism. Unlike conventional blockchain validators that simply execute code, GenLayer validators are paired with large language models (LLMs), enabling them to evaluate off-chain data and make subjective decisions—such as determining whether submitted content meets campaign requirements or whether a smart contract’s conditions have been fairly fulfilled.</p><p>GenLayer positions this model as essential infrastructure for the coming age of AI agents and machine-to-machine transactions. 
</p><p>According to Castellana, Asimov’s launch is both a stress test and a signal of technical maturity to developers and partners.</p><h2>Professional validators and developer ecosystem</h2><p>The validator program for Asimov targets seasoned blockchain infrastructure operators. Selected participants will earn rewards for testing consensus logic, transaction handling, and model coordination. GenLayer has already onboarded dozens of validators, with more in the pipeline. Participation requires a full-time commitment during test phases.</p><p>To support builders, GenLayer is releasing a comprehensive developer stack, including the GenLayer Studio, Wallet, Blockchain Explorer, and GS Library (a Python toolkit). The testnet is also paired with a grant program to encourage early development and experimentation ahead of mainnet launch.</p><h2>Rally-ing</h2><p>Coinciding with Asimov is the beta release of Rally, a decentralized marketing protocol that automates influencer and community incentive campaigns. Using AI-powered validators, Rally evaluates submitted content — such as social media posts — against campaign rules embedded in smart contracts.</p><p>&quot;Rally is our first protocol built on GenLayer,&quot; Castellana told VentureBeat. &quot;It autonomously evaluates community-created content and determines compensation, opening participation beyond influencers to anyone.&quot;</p><p>Brands define guidelines (e.g., hashtags, tone, originality), deposit funds, and let the protocol autonomously determine payouts. </p><p>This setup avoids manual negotiation and performance disputes common in traditional influencer programs. Content creators, in turn, receive transparent, on-chain compensation if their submissions meet preset criteria.</p><p>&quot;In the future, many influencers will be AI agents seeking opportunities to earn—this system accommodates that evolution,&quot; Castellana added. 
</p><p>Rally operates independently of GenLayer’s core team and will eventually be governed by a DAO. From each campaign pool, 1% goes to the Rally DAO, while 10% of protocol fees are allocated to the developers of participating applications.</p><p>&quot;The way I see GenLayer is like a toy factory—creating new tools and mechanisms you can’t find anywhere else,&quot; the CEO added. &quot;Rally is just one example of what’s possible.&quot;</p><h2>How it benefits enterprises</h2><p>For technical decision-makers — including brand managers, growth marketers, and digital campaign leads — GenLayer and Rally offer the ability to automate and decentralize campaign execution and quality control. Instead of manually managing influencer relationships, approving content, and disputing post-campaign performance, teams can deploy smart contracts that use LLMs to judge submissions against predefined standards.</p><p>This approach significantly reduces operational overhead, enables real-time feedback and rewards, and ensures fairness and transparency throughout the campaign lifecycle. Additionally, the use of AI agents allows for scalable campaign management across thousands of potential content contributors—human or automated—without additional headcount or vendor friction.</p><p>For enterprises that regularly invest in brand visibility, product launches, or community engagement, Rally could streamline marketing operations while offering auditable proof of campaign performance. 
Combined with GenLayer’s broader infrastructure, brands also gain access to AI-driven decision systems for everything from grants disbursement to smart contract enforcement, potentially transforming legal and operational workflows in Web3 contexts.</p><h2>Infrastructure for the AI-native economy</h2><p>GenLayer’s architecture is supported by technical partners including ZKSync (for rollup-based scalability), Heurist (for decentralized model hosting), Atoma Network (for privacy-preserving execution), and Caldera. These integrations ensure that the platform remains performant, secure, and aligned with broader Ethereum-based ecosystems.</p><p>The company has raised $7.5 million in seed funding from investors such as North Island Ventures, Arrington Capital, ZK Ventures, and Arthur Hayes’ Maelstrom. GenLayer sees its protocol as the foundation of a synthetic, global jurisdiction—an autonomous legal layer for the AI economy, capable of settling disputes at machine speed with greater accessibility than traditional legal systems.</p><h2>What&#x27;s next?</h2><p>Following Asimov, the Bradbury and Clark testnets will introduce validator-level LLM configuration, production-grade inference tuning, and autonomous network operations. Each phase is designed to validate system components ahead of a mainnet launch planned for later this year.</p><p>GenLayer is actively seeking professional validators and developers to participate in the testnet. Those interested can apply via the company’s website. </p><p>With applications like Rally already live in beta, GenLayer presents a new category of intelligent blockchain infrastructure — combining AI decision-making with decentralized governance to unlock more autonomous, transparent, and scalable systems for enterprise and community users alike.</p>]]></description>
            <author>carl.franzen@venturebeat.com (Carl Franzen)</author>
            <category>Business</category>
            <category>Technology</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/26HLExgQQ4crG21njsMOr4/460f1a71538a8a063eaed26a43c03238/cfr0z3n_pop_art_screenprint_close_up_giant_chain_link_covered_w_5f691bf4-b463-4430-84d0-c2c6e8cee3b6.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Google’s 'world-model' bet: building the AI operating layer before Microsoft captures the UI]]></title>
            <link>https://venturebeat.com/business/googles-world-model-bet-building-the-ai-operating-layer-before-microsoft-captures-the-ui</link>
            <guid isPermaLink="false">wp-3008900</guid>
            <pubDate>Sun, 25 May 2025 17:42:05 GMT</pubDate>
            <description><![CDATA[<p>After three hours at <a href="https://venturebeat.com/?s=i%2Fo">Google’s I/O 2025</a> event last week in Silicon Valley, it became increasingly clear: <a href="https://www.google.com/">Google</a> is rallying its formidable AI efforts – prominently branded under the Gemini name but encompassing a diverse range of underlying model architectures and research – with laser focus. It is <a href="https://venturebeat.com/ai/google-just-leapfrogged-every-competitor-with-mind-blowing-ai-that-can-think-deeper-shop-smarter-and-create-videos-with-dialogue/">releasing a slew of innovations and technologies</a> around it, then integrating them into products at a breathtaking pace.</p><p>Beyond headline-grabbing features, Google laid out a bolder ambition: an operating system for the AI age – not the disk-booting kind, but a logic layer every app could tap – a “world model” meant to power a universal assistant that understands our physical surroundings, and reasons and acts on our behalf. It&#x27;s a strategic offensive that many observers may have missed amid the bamboozlement of features. </p><p>On one hand, it’s a high-stakes strategy to leapfrog entrenched competitors. But on the other hand, as Google pours billions into this moonshot, a critical question looms: Can Google’s <a href="https://venturebeat.com/ai/from-catch-up-to-catch-us-how-google-quietly-took-the-lead-in-enterprise-ai/">brilliance in AI research and technology</a> translate into products faster than its rivals, whose edge has its own brilliance: packaging AI into <a href="https://venturebeat.com/ai/microsoft-announces-over-50-ai-tools-to-build-the-agentic-web-at-build-2025/">immediately accessible and commercially potent products</a>? 
Can Google out-maneuver a laser-focused Microsoft, fend off OpenAI&#x27;s vertical hardware dreams, and, crucially, keep its own search empire alive in the disruptive currents of AI?</p><p>Google is already pursuing this future on a dizzying scale. Pichai told I/O that the company now processes 480 trillion tokens a month – 50× more than a year ago – and almost 5× more than the 100 trillion tokens a month that Microsoft’s Satya Nadella said his company processed. This momentum is also reflected in developer adoption. Pichai said that over 7 million developers are now building with the Gemini API, representing a five-fold increase since the last I/O. At the same time, Gemini usage on Vertex AI has surged more than 40 times. And unit costs keep falling as Gemini 2.5 models and the Ironwood TPU squeeze more performance from each watt and dollar. <b>AI Mode</b> (rolling out in the U.S.) and AI Overviews (already serving 1.5 billion users monthly) are the live test beds where Google tunes latency, quality and future ad formats as it shifts search into an AI-first era.</p><p><sub>Source: Google I/O 2025</sub></p><p>Google’s doubling-down on what it calls “a world model” – an AI it aims to imbue with a deep understanding of real-world dynamics – and with it a vision for a universal assistant – one powered by Google, and not other companies – creates another big tension: How much control does Google want over this all-knowing assistant, built upon its crown jewel of search? Does it primarily want to leverage it first for itself, to save its $200 billion search business that depends on owning the starting point and avoiding disruption by OpenAI? 
Or will Google fully open its foundational AI for other developers and companies to leverage – another segment representing a significant portion of its business, engaging over 20 million developers, <a href="https://www.slashdata.co/post/google-has-the-leading-developer-program-but-amazon-is-catching-up">more than any other company</a>? </p><p>Google has sometimes stopped short of building these core products <i>for others</i> with the same radical clarity as its nemesis, Microsoft. That’s because it keeps a lot of core functionality reserved for its cherished search engine. That said, Google is making significant efforts to provide developer access wherever possible. </p><p>A telling example is <b>Project Mariner</b>. Google could have embedded the agentic browser-automation features directly inside Chrome, giving consumers an immediate showcase under Google’s full control. However, Google said Mariner’s computer-use capabilities would be released via the Gemini API more broadly “this summer.” This signals that external access is coming for any rival that wants comparable automation. In fact, Google said partners Automation Anywhere and UiPath were already building with it.</p><h2>Google&#x27;s grand design: the &#x27;world model&#x27; and universal assistant</h2><p>The clearest articulation of Google&#x27;s grand design came from Demis Hassabis, CEO of Google DeepMind, during the I/O keynote. He stated Google continued to “double down” on efforts towards artificial general intelligence (AGI). While Gemini was already &quot;the best multimodal model,&quot; Hassabis <a href="https://www.youtube.com/watch?v=o8NiE3XMPrM">explained</a>, Google is working hard to &quot;extend it to become what we call a world model. 
That is a model that can make plans and imagine new experiences by simulating aspects of the world, just like the brain does.&quot; </p><p>This concept of &quot;a world model,&quot; as articulated by Hassabis, is about creating AI that learns the underlying principles of how the world works – simulating cause and effect, understanding intuitive physics, and ultimately learning by observing, much like a human does. An early yet significant indicator of this direction – one easily overlooked by those not steeped in foundational AI research – is Google DeepMind&#x27;s <a href="https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/">work on models like <b>Genie 2</b></a>. This research shows how to generate interactive, two-dimensional game environments and playable worlds from varied prompts like images or text. It offers a glimpse at an AI that can simulate and understand dynamic systems.</p><p>Hassabis has developed this concept of a “world model” and its manifestation as a &quot;universal AI assistant&quot; in several talks since late 2024, and it was presented at I/O most comprehensively – with CEO Sundar Pichai and Gemini lead Josh Woodward echoing the vision on the same stage. (While other AI leaders, including Microsoft’s Satya Nadella, OpenAI’s Sam Altman and xAI’s Elon Musk, have all discussed ‘world models,&#x27; Google uniquely and most comprehensively ties this foundational concept to its near-term strategic thrust: the &#x27;universal AI assistant.&#x27;)</p><p>Speaking about the Gemini app, Google’s equivalent to OpenAI’s ChatGPT, Hassabis declared, &quot;This is our ultimate vision for the Gemini app, to transform it into a universal AI assistant, an AI that&#x27;s personal, proactive and powerful, and one of our key milestones on the road to AGI.&quot; </p><p>This vision was made tangible through I/O demonstrations. 
Google demoed a <a href="https://www.youtube.com/watch?v=A0VttaLy4sU&amp;t=4s">new app called <b>Flow</b></a> – a drag-and-drop filmmaking canvas that preserves character and camera consistency – that leverages Veo 3, the new model that layers physics-aware video and native audio. To Hassabis, that pairing is early proof that ‘world-model understanding is already leaking into creative tooling.’ For robotics, he separately highlighted the fine-tuned Gemini Robotics model, arguing that ‘AI systems will need world models to operate effectively.’</p><p>Pichai <a href="https://www.youtube.com/watch?v=o8NiE3XMPrM">reinforced</a> this, citing Project Astra, which &quot;explores the future capabilities of a universal AI assistant that can understand the world around you.&quot; These Astra capabilities, like live video understanding and screen sharing, are now integrated into <b>Gemini Live</b>. Woodward, who leads Google Labs and the Gemini App, detailed the app&#x27;s goal to be the &quot;most personal, proactive, and powerful AI assistant.&quot; He showcased how &quot;personal context&quot; (connecting search history, and soon Gmail/Calendar) enables Gemini to anticipate needs, like providing personalized exam quizzes or custom explainer videos using analogies a user understands (e.g., thermodynamics explained via cycling). This, Woodward emphasized, is &quot;where we’re headed with Gemini,&quot; enabled by the <b>Gemini 2.5 Pro</b> model allowing users to &quot;think things into existence.&quot; </p><p>The new developer tools unveiled at I/O are building blocks. <b>Gemini 2.5 Pro</b> with &quot;Deep Think&quot; and the hyper-efficient <b>2.5 Flash</b> (now <a href="https://venturebeat.com/ai/inside-google-ai-leap-gemini-2-5-thinks-deeper-speaks-smarter-codes-faster/">with native audio and URL context grounding from Gemini API</a>) form the core intelligence. 
Google also quietly previewed <b>Gemini Diffusion</b>, signalling its willingness to move beyond pure Transformer stacks when that yields better efficiency or latency. Google is stuffing these capabilities into a crowded toolkit: AI Studio and Firebase Studio are core starting points for developers, while Vertex AI remains the enterprise on-ramp.</p><h2>The strategic stakes: defending search, courting developers amid an AI arms race</h2><p>This colossal undertaking is driven by Google’s massive R&amp;D capabilities and strategic necessity. In the enterprise software landscape, Microsoft has a formidable hold, a Fortune 500 Chief AI Officer told VentureBeat, reassuring customers with its full commitment to its <b>Copilot</b> tooling. The executive requested anonymity because commenting on the intense competition between the AI cloud providers is sensitive. The executive said that Microsoft&#x27;s dominance in Office 365 productivity applications will be exceptionally hard to dislodge through direct feature-for-feature competition.</p><p>Google&#x27;s path to potential leadership – its &quot;end-run&quot; around Microsoft’s enterprise hold – lies in redefining the game with a fundamentally superior, AI-native interaction paradigm. If Google delivers a truly &quot;universal AI assistant&quot; powered by a comprehensive world model, it could become the new indispensable layer – the effective operating system – for how users and businesses interact with technology. As Pichai mused with podcaster David Friedberg shortly before I/O, that means awareness of physical surroundings. And so AR glasses, Pichai said, &quot;<a href="https://www.youtube.com/watch?v=ReGC2GtWFp4">maybe that&#x27;s the next leap…that’s what’s exciting for me</a>.&quot;</p><p>But this AI offensive is a race against multiple clocks. First, the $200 billion search-ads engine that funds Google must be protected even as it is reinvented. The U.S. 
Department of Justice’s <a href="https://www.justice.gov/opa/pr/department-justice-prevails-landmark-antitrust-case-against-google">monopolization ruling still hangs over Google</a>—divestiture of Chrome has been floated as the leading remedy. And in Europe, the Digital Markets Act and emerging copyright-liability lawsuits could hem in how freely Gemini crawls or displays the open web.</p><p>Finally, execution speed matters. Google has been criticized for moving slowly in past years. However, over the past 12 months, it became clear Google had been working patiently on multiple fronts, and that it has <a href="https://venturebeat.com/ai/from-catch-up-to-catch-us-how-google-quietly-took-the-lead-in-enterprise-ai/">paid off with faster growth than rivals</a>. The challenge of successfully navigating this AI transition at massive scale is immense, as evidenced by the recent <a href="https://www.bloomberg.com/news/features/2025-05-18/how-apple-intelligence-and-siri-ai-went-so-wrong">Bloomberg report</a> detailing how even a tech titan like Apple is grappling with significant setbacks and internal reorganizations in its AI initiatives. This industry-wide difficulty underscores the high stakes for all players. While Pichai lacks the showmanship of some rivals, the long list of enterprise customer testimonials Google paraded at its Cloud Next event last month – about actual AI deployments – underscores a leader who lets sustained product cadence and enterprise wins speak for themselves. </p><p>At the same time, focused competitors advance. Microsoft’s enterprise march continues. Its Build conference showcased <b>Microsoft 365 Copilot</b> as the &quot;UI for AI,&quot; <b>Azure AI Foundry</b> as a &quot;production line for intelligence,&quot; and <b>Copilot Studio</b> for sophisticated agent-building, with impressive low-code workflow demos (<a href="https://www.youtube.com/watch?v=ceV3RsG946s">Microsoft Build Keynote, Miti Joshi at 22:52, Kadesha Kerr at 51:26</a>). 
Nadella’s &quot;open agentic web&quot; vision (<a href="https://venturebeat.com/ai/the-battle-to-ai-enable-the-web-nlweb-and-what-enterprises-need-to-know/">NLWeb, MCP</a>) offers businesses a pragmatic AI adoption path, allowing <a href="https://venturebeat.com/ai/microsoft-just-taught-its-ai-agents-to-talk-to-each-other-and-it-could-transform-how-we-work/">selective integration of AI tech</a> – whether it be Google’s or another competitor’s – within a Microsoft-centric framework.</p><p>OpenAI, meanwhile, is far ahead in consumer reach with its ChatGPT product; the company has recently cited 600 million monthly users and 800 million weekly users. This compares to the Gemini app’s 400 million monthly users. And in December, OpenAI launched a full-blown search offering, and is reportedly planning an ad offering – posing what could be an existential threat to Google’s search model. Beyond making leading models, OpenAI is making a provocative vertical play with its <a href="https://www.nytimes.com/2025/05/21/technology/openai-jony-ive-deal.html">reported $6.5 billion acquisition of Jony Ive&#x27;s IO</a>, pledging to move &quot;beyond these legacy products&quot; – and hinting that it was launching a hardware product that would attempt to disrupt AI just as the iPhone disrupted mobile. While any of this could disrupt Google&#x27;s next-gen personal computing ambitions, it’s also true that OpenAI&#x27;s ability to build a deep moat like Apple did with the iPhone may be limited in an AI era increasingly defined by open protocols (like MCP) and easier model interchangeability.</p><p>Internally, Google navigates its vast ecosystem. 
As Jeanine Banks, Google&#x27;s VP of Developer X, told VentureBeat, serving Google&#x27;s diverse global developer community means &quot;it&#x27;s not a one size fits all,&quot; leading to a rich but sometimes complex array of tools – AI Studio, Vertex AI, Firebase Studio, numerous APIs.</p><p>Meanwhile, Amazon is pressing from another flank: Bedrock already hosts Anthropic, Meta, Mistral and Cohere models, giving AWS customers a pragmatic, multi-model default.</p><h2>For enterprise decision-makers: navigating Google&#x27;s &#x27;world model&#x27; future</h2><p>Google’s audacious bid to build the foundational intelligence for the AI age presents enterprise leaders with compelling opportunities and critical considerations:</p><ol><li><p><b>Move now or retrofit later: </b>Falling a release cycle behind could force costly rewrites when assistant-first interfaces become default.</p></li><li><p><b>Tap into revolutionary potential:</b> For organizations seeking to embrace the most powerful AI, leveraging Google&#x27;s &quot;world model&quot; research, multimodal capabilities (like Veo 3 and Imagen 4 showcased by Woodward at I/O), and the <a href="https://venturebeat.com/ai/at-google-i-o-sergey-brin-makes-surprise-appearance-and-declares-google-will-build-the-first-agi/">AGI trajectory promised by Google</a> offers a path to potentially significant innovation.</p></li><li><p><b>Prepare for a new interaction paradigm:</b> Success for Google&#x27;s &quot;universal assistant&quot; would mean a primary new interface for services and data. Enterprises should strategize for integration via APIs and agentic frameworks for context-aware delivery.</p></li><li><p><b>Factor in the long game (and its risks):</b> Aligning with Google&#x27;s vision is a long-term commitment. The full &quot;world model&quot; and AGI are potentially distant horizons. 
Decision-makers must balance this with immediate needs and platform complexities.</p></li><li><p><b>Contrast with focused alternatives:</b> Pragmatic solutions from Microsoft offer tangible enterprise productivity now. Disruptive hardware-AI from OpenAI/IO presents another distinct path. A diversified strategy, leveraging the best of each, often makes sense, especially with the increasingly open agentic web allowing for such flexibility.</p></li></ol><p>These complex choices and real-world AI adoption strategies will be central to discussions at <a href="https://www.vbtransform.com/?utm_source=vb&amp;utm_medium=article&amp;utm_content=transformpromo&amp;utm_campaign=vbarticles"><b>VentureBeat&#x27;s Transform 2025</b></a> next month. The leading independent event brings enterprise technical decision-makers together with leaders from pioneering companies to share firsthand experiences on platform choices – Google, Microsoft, and beyond – and navigating AI deployment, all curated by the VentureBeat editorial team. With limited seating, early registration is encouraged.</p><h2>Google&#x27;s defining offensive: shaping the future or strategic overreach?</h2><p>Google&#x27;s I/O spectacle was a strong statement: Google signalled that it intends to architect and operate the foundational intelligence of the AI-driven future. Its pursuit of a &quot;world model&quot; and its AGI ambitions aim to redefine computing, outflank competitors, and secure its dominance. The audacity is compelling; the technological promise is immense.</p><p>The big question is execution and timing. Can Google innovate and integrate its vast technologies into a cohesive, compelling experience faster than rivals solidify their positions? Can it do so while transforming search and navigating regulatory challenges? 
And can it do so while focused so broadly on both consumers <i>and</i> businesses—an agenda that is arguably much broader than that of its key competitors?</p><p>The next few years will be pivotal. If Google delivers on its &quot;world model&quot; vision, it may usher in an era of personalized, ambient intelligence, effectively becoming the new operational layer for our digital lives. If not, its grand ambition could be a cautionary tale of a giant reaching for everything, only to find the future defined by others who aimed more specifically, more quickly. </p>]]></description>
            <author>mmarshall@venturebeat.com (Matt Marshall)</author>
            <category>Automation</category>
            <category>Business</category>
            <category>Enterprise Analytics</category>
            <category>Infrastructure</category>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/3eEWgNPajEcUEUosGYjDEX/f0a6ba6b52f85623b4bafc6671141b75/Gemini_Generated_Image_q6msrxq6msrxq6ms_9f93e4.jpeg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[We need to talk about the F word ('friction' in enterprise, that is)]]></title>
            <link>https://venturebeat.com/datadecisionmakers/we-need-to-talk-about-the-f-word-friction-in-enterprise-that-is</link>
            <guid isPermaLink="false">wp-2997313</guid>
            <pubDate>Sat, 22 Feb 2025 21:01:00 GMT</pubDate>
            <description><![CDATA[<p>Today, everything is frictionless. Whether you’re <a href="https://www.forbes.com/sites/sabbirrangwala/2024/10/09/transforming-the-parking-experiencestressful-to-frictionless/">parking your car</a>, <a href="https://www.forbes.com/sites/michaelgale/2022/09/20/the-frictionless-house-sale-and-purchase-of-2030-seriously-at-long-last/">selling your home</a>, <a href="https://www.nrn.com/quick-service/white-castle-test-ai-powered-drive-thru-system-license-plate-recognition">hitting the drive-thru</a> or drying your hair, there’s a company promising to save you time and eliminate any hassle you might encounter. Frictionlessness is the final form of the American cult of convenience —the culmination of an ongoing effort to satisfy consumers’ desire for faster, easier and more complete satisfaction. </p><p>In the digital era, frictionlessness is also a growth driver: The easier it becomes to use a product or service, the faster companies can grow their user base and extract value. Frictionless product design, global consent tools, digital wallets and unified logins powered by giants like Facebook and Google let companies expand and sell without making users complete forms, wade through privacy boilerplate or even enter payment details.</p><p>It’s easy to see why investors love frictionless. (Try finding a Bay Area pitch deck that <i>doesn’t</i> promise to eliminate friction.) But our fetishization of frictionlessness comes at a price.</p><h2>A slippery slope</h2><p>Making things easy<i> </i>was the <a href="https://venturebeat.com/ai/like-it-or-not-ai-is-learning-how-to-influence-you/">digital era’s</a> founding promise. In the mid-1990s, I helped build web products that made stock quotes available at the click of a mouse-button. (It beats running inky fingers down newsprint stock-prices in the back of the business pages.) Other dot-com pioneers made it effortless to listen to music, shop, read news, forge relationships and more. 
</p><p>Since then, mobile devices, unified logins, location services and more have yielded countless “automagical” solutions to our problems. Personalized feeds deliver content without <a href="https://venturebeat.com/ai/why-context-aware-ai-agents-will-give-us-superpowers-in-2025/">our raising a finger</a>. Smart devices — speakers, cars, sunglasses, refrigerators — weave digital convenience through every aspect of our lives. Now, AI is eliminating more friction: No need to write emails, read messages or think particularly hard about anything at all. </p><p>However, eliminating friction is no guarantee of success. WeWork <a href="https://theconversation.com/wework-approached-physical-space-as-if-it-were-virtual-which-led-to-the-companys-downfall-217909">spectacularly failed</a> to build a frictionless real-estate empire. Amazon spent vast sums on <a href="https://www.cnn.com/2024/04/03/business/amazons-self-checkout-technology-grocery-flop/index.html">frictionless brick-and-mortar retail</a>. Pre-Trump Zuck <a href="https://www.newyorker.com/magazine/2018/09/17/can-mark-zuckerberg-fix-facebook-before-it-breaks-democracy?mbid=social_facebook&amp;fbclid=IwY2xjawHSvU1leHRuA2FlbQIxMQABHY-7dknM2YAhVbtpD0f_DdVoMQHFHWyCU1fUNjHY4nmTJbeNkcewh3TJjw_aem_Fikd01Y7DcoE7OMWqY1ZrQ">flip-flopped on friction</a> as Facebook sought to goose engagement while countering online trolls.</p><p>Moreover, eliminating friction has serious societal consequences. Friction helps us pause, think and make better choices. When Robinhood made trading effortless, countless Americans <a href="https://www.wsj.com/finance/stocks/stock-market-trading-apps-addiction-afecb07a">wound up addicted</a> to the rush. <a href="https://www.theguardian.com/us-news/article/2024/sep/04/2024-nfl-season-gambling-sports-betting">Easy online betting</a> drove bankruptcy rates up 30%. 
More friction might have kept CrowdStrike from <a href="https://www.nbcnews.com/tech/tech-news/microsoft-blue-screen-of-death-global-outage-rcna162674">bricking the world’s computers</a>; helped <a href="https://www.theatlantic.com/newsletters/archive/2023/10/social-media-moderation-extremism-israel-hamas/675706/">rein in extremists</a> on social media; or <a href="https://www.nytimes.com/2024/12/17/business/pharmacy-benefit-managers-opioids.html">slowed the spread</a> of opioids. </p><p>Even for individuals, effortlessness isn’t always a net win. Eliminating friction can mean surrendering agency. Algorithms deliver premasticated content chosen <i>for </i>us, not <i>by </i>us; online retailers select what we “discover” to control what we buy. Frictionlessness infantilizes us; we’re left floating along, like the <a href="https://www.digitaltrends.com/movies/wall-e-pixar-movie-15th-anniversary-retrospective/#dt-heading-the-danger-of-relying-too-much-on-ai">hoverchair-bound humans</a> of <i>WALL-E</i>, passively ingesting whatever we’re served.</p><h2>Strategic inconvenience</h2><p>While it may be easier to get rich shoveling pablum into the receptive mouths of acquiescent consumers, in the long run, companies that treat customers like intelligent, empowered, self-aware people will do better than those that simply rely on their docility.</p><p>Steve Jobs understood the seductions of effortlessness — hence his promise that Apple products would “<a href="https://techcrunch.com/2011/06/08/apple-icloud-google-cloud/">just work</a>.” But at the same time, he understood the power of friction. When asked about the right way to collect customers’ data, he <a href="https://www.youtube.com/watch?v=i5f8bqYYwps&amp;t=4332s">famously said</a>: “Ask them! Ask them every time. 
Make them tell you to stop asking them if they get tired of your asking them.” Jobs advocated a Goldilocks approach: just the right amount of friction to create products that delight and do right by your users.</p><p>By striking the right balance, companies can use friction to their advantage. Friction, after all, is another word for <a href="https://venturebeat.com/ai/listen-to-your-technology-users-they-have-led-to-the-most-disruptive-innovations-in-history/">feedback</a> — so products that become completely frictionless stop responding to users’ needs. The pursuit of frictionlessness can launch you skywards, but over time you’ll struggle to course-correct. Eventually, gravity will drag you back to earth.</p><p>This isn’t hypothetical: Research shows that friction makes many systems — including businesses — smarter and more resilient. A bit of strategic inconvenience can improve market performance, with investors <a href="https://www.firn.org.au/wp-content/uploads/2021/10/Getting-burned-by-frictionless-financial-markets_revised.pdf?ref=growinginpublic.investorhub.com">making smarter decisions</a> when they’re forced to slow down and think about trades. The IEX stock exchange feeds trades through a “magic shoebox” containing miles of coiled fiber optic cables, <a href="https://live-cltc.pantheon.berkeley.edu/2022/09/20/desirable-inefficiency-and-data-scraping-the-role-of-friction-in-privacy/">creating a 350-microsecond delay</a> to blunt the impact of high-frequency trading, making markets fairer for everyone.</p><p>Cybersecurity teams sometimes <a href="https://www.vice.com/en/article/the-internet-needs-more-friction/">require human approval</a> before software auto-installs across their intranet — slowing people down, while making it harder for bad actors to hijack the network. 
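</p><p>The IEX example above can be sanity-checked with simple arithmetic: light moves through optical fiber at roughly two-thirds of its speed in a vacuum, so a delay of 350 microseconds implies tens of kilometers of coiled cable. Here is a rough back-of-envelope sketch in Python (the refractive index is an assumed typical value for silica fiber, not a figure published by IEX):</p>

```python
# Back-of-envelope: how much coiled fiber yields a ~350-microsecond delay?
# Assumption: light in silica fiber travels at c / n, with n ~= 1.47.
C_VACUUM = 299_792_458        # speed of light in vacuum, m/s
REFRACTIVE_INDEX = 1.47       # assumed typical value for silica fiber
DELAY_S = 350e-6              # the speed-bump delay, in seconds

speed_in_fiber = C_VACUUM / REFRACTIVE_INDEX   # roughly 2.04e8 m/s
fiber_length_m = speed_in_fiber * DELAY_S      # roughly 71 km
fiber_length_miles = fiber_length_m / 1609.344

print(f"~{fiber_length_m / 1000:.0f} km (~{fiber_length_miles:.0f} miles) of coiled fiber")
```

<p>Tens of miles of cable, wound tightly enough to fit in a box, is what earns the “magic shoebox” its name.</p><p>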
Data privacy rules like the GDPR, meanwhile, add friction in the form of consent and privacy notices — and while too much is counterproductive, a balanced approach leaves consumers better-informed and with more control over their data. </p><h2>Start talking about friction</h2><p>What does this mean for today’s founders? Clearly, we won’t jettison frictionless technologies anytime soon — nor should we. What we should be doing, though, is asking how we can <a href="https://ondiscourse.com/good-brands-will-integrate-more-friction/">right-size the friction</a> in our products — or even turn friction into a business opportunity.</p><p>For technologists, that means asking: What problems are you solving by eliminating friction — and what problems might you create, now or in the future, by doing so? Every design choice brings tradeoffs, but balancing risks and rewards to design for the right level of friction enables both rapid growth and long-term sustainability. </p><p>Such an approach could also make it easier to have grown-up conversations about the need to regulate AI and other emerging technologies. <a href="https://venturebeat.com/ai/trumps-ai-czar-and-the-wild-west-of-ai-regulation-strategies-for-enterprises-to-navigate-the-chaos/">Regulations</a> always add friction — but once we accept that some friction can be valuable, we can work collaboratively with policymakers to find the right level of friction to support innovation while protecting and respecting consumers.</p><p>The bottom line? The pursuit of frictionlessness as the Holy Grail for digital products and services has backfired. It’s time for a more nuanced approach — and that starts with accepting that for tech founders and investors, “friction” shouldn’t be a four-letter word. 
Only by bringing friction back into the conversation can we responsibly build new technologies that have not just growth potential but real staying power.</p><p><i>Jennie Baird is chair of </i><a href="https://www.ethicaltechproject.com/"><i>The Ethical Tech Project</i></a><i>.</i></p>]]></description>
            <category>DataDecisionMakers</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/6rQF5DkK5q5TpEZ39QcAzX/642a6100fcb0e75ba85672d9f598bb11/upscalemedia-transformed_79666a.jpeg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Realtime AI video analysis app Lloyd will offer developer kit after passing 50,000 users]]></title>
            <link>https://venturebeat.com/technology/realtime-ai-video-analysis-app-lloyd-will-offer-developer-kit-after-passing-50000-users</link>
            <guid isPermaLink="false">wp-2986652</guid>
            <pubDate>Tue, 10 Dec 2024 20:06:15 GMT</pubDate>
            <description><![CDATA[<p><i>Disclaimer: EndlessAI previously published a </i><a href="https://venturebeat.com/business/endlessai-unveils-lloyd-the-worlds-first-video-ai-assistant-for-smartphones/"><i>contributor piece on VentureBeat</i></a><i> announcing the launch of Lloyd in early October.</i></p><p>Four-year-old AI startup <a href="https://www.endlessai.com/">EndlessAI</a> isn&#x27;t a household name — yet.</p><p>But its founders and leaders believe they have a bona fide hit on their hands: their freemium <a href="https://apps.apple.com/us/app/lloyd-the-ai-that-can-see/id588199307">iOS app Lloyd</a>, which uses proprietary video streaming and encoding tech to feed the user&#x27;s live video view to underlying AI models including OpenAI&#x27;s GPT-4o for help with a wide variety of tasks, from bicycle repair to telling bedtime stories, has achieved 50,000+ users <a href="https://www.linkedin.com/posts/roiginat_ai-videoai-innovation-activity-7239035001978155008-3J7E?utm_source=share&amp;utm_medium=member_desktop">three months after a stealth launch</a>.</p><p>Forty-one percent of those users engage with the app daily, according to data provided to VentureBeat by EndlessAI.</p><p>While it&#x27;s no ChatGPT — which became the <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/">fastest product in history to cross the 100 million user mark in January 2023</a>, just two months after launch — it is nonetheless encouraging enough to EndlessAI CEO Roi Ginat and executive chairman Thomas Pompidou, who told VentureBeat in a recent video call interview they planned to open their platform up to third-party developers in early 2025 and launch a consumer-facing Android app in January.</p><p>Also, EndlessAI has already begun upgrading Lloyd with what it calls &quot;Powers,&quot; or as Pompidou describes them, &quot;basically fine-tuned large language models (LLMs) that provide deep dive to 
consumer on specific use case[s].&quot;</p><p>For example, the first Lloyd Power live now in the app is &quot;Chef,&quot; which provides a real-time, entirely AI-powered cooking coach that watches as you cook (if you point your smartphone camera at your stove top or cooking area) and provides step-by-step guidance.</p><p>Another Lloyd Power planned to launch shortly is Tour Guide, which allows users to hold up their phone and see real-time contextual information about their surroundings. By capturing a video of a location, it identifies points of interest, provides relevant details, and can even recommend nearby attractions or activities.</p><h2>Making realtime video analysis accessible at scale</h2><p>Current LLMs have struggled to process live video efficiently due to high computational costs; EndlessAI’s technology overcomes this limitation, reducing the cost of video analysis by over 99%. </p><p>Pompidou underscored the app’s broader mission: “Our mission is to scale AI to the real world. The real world is visual and live, and today’s large language models, as they’re architected, face challenges in analyzing video accurately, at scale, and cost-effectively. That’s what we make possible.”</p><p>Enabling real-time video analysis allows users to interact with their environment in novel ways, from diagnosing mechanical issues to creating personalized bedtime stories.</p><p>Lloyd’s core differentiation lies in its ability to process video data through LLMs at a fraction of the cost typically associated with such tasks. Traditional LLM architectures are not optimized for video, making real-time video analysis prohibitively expensive and slow.</p><p>&quot;Analyzing video with ChatGPT, assuming it could, would cost over $300 per hour,&quot; Pompidou said. 
&quot;With Lloyd, we deliver the same level of accuracy for just tens of cents per hour.”</p><p>This cost-efficiency is achieved without sacrificing accuracy, setting Lloyd apart from competitors that rely on reduced frame rates or lower resolutions to cut costs, often at the expense of reliability.</p><p>“Our communication layer is robust in ways other solutions aren’t. It lets developers integrate real-time AI services like speech-to-text, text-to-speech, and video analysis with unmatched reliability and performance.”</p><p>As Pompidou envisions the future, he offered a glimpse into the app’s potential: “Imagine a finely tuned LLM trained on every IKEA instruction manual, guiding customers step by step with video and recognizing errors in real time. It’s just one example of how our technology can transform user experiences.”</p><p>Another big arena that EndlessAI plans to court through Lloyd and its underlying video encoding tech: law enforcement, specifically analysis of police bodycam footage.</p><p>&quot;If there is someone who has a heart attack, it is going to recognize that and provide the officer with instructions on what to do right away,&quot; said Pompidou. </p><h2>Privacy and security</h2><p>Even though Lloyd itself sees exactly whatever you point your smartphone camera at, EndlessAI prioritizes user privacy. </p><p>&quot;Data stays private to [user] accounts, and we only access it for support if users explicitly request assistance,&quot; Ginat said.</p><p>This approach ensures robust safeguards while enabling seamless interactions.</p><p>But as a consequence, EndlessAI isn&#x27;t exactly sure what the most popular uses for Lloyd are among its users. 
Anecdotally, it says that its surveys and feedback forms have shown interest in food preparation, household repairs, fashion and lifestyle coaching, and more.</p><h2>New developer tools coming in early 2025</h2><p>While Lloyd’s consumer-facing features gain traction, EndlessAI is also building tools to empower developers and enterprises to harness its technology.</p><p>&quot;Our long-term roadmap includes an SDK for developers, starting early next year,&quot; Pompidou said. &quot;It will empower them to create unique visual AI solutions with extreme simplicity.&quot;</p><p>The SDK will allow developers to integrate AI vision capabilities into their own applications. </p><p>“The first offering for developers will be a robust platform for real-time API communication, connecting to OpenAI and other backends,&quot; Ginat told VentureBeat. &quot;Developers can pick and choose which components they want to use, such as audio services or speech-to-text.” </p><p>Applications for these tools span industries, from creating AI-enhanced chat applications to integrating video analysis into production lines and safety monitoring systems. </p><p>EndlessAI aims to offer scalable solutions that adapt to different performance and cost requirements. </p><p>“Our developer tools will allow on-the-fly adjustments — choosing between backend services or lightweight, on-device solutions depending on the use case and cost requirements,” Ginat added.</p><p>By combining robust APIs with an intuitive SDK, EndlessAI envisions a new wave of AI-driven applications that go beyond traditional text or image processing. 
“We’ll offer developers the ability to integrate various services, including side-processing video, enhancing their sessions with additional capabilities,” Ginat said.</p><h2>Transforming consumer and enterprise AI</h2><p>Lloyd’s ability to leverage existing smartphones — without requiring additional hardware — makes it uniquely accessible.</p><p>By reducing barriers to entry, EndlessAI is redefining what’s possible with AI in daily life and specialized industries alike.</p><p>With its rapid user adoption, versatile applications, and robust roadmap, Lloyd is poised to become a defining innovation in the AI space. </p><p>“Our long-term strategy is to stay complementary to LLMs,” Pompidou said. “Even when models can natively process video, we aim to remain the efficiency layer that makes these applications viable and cost-effective.”</p>]]></description>
            <author>carl.franzen@venturebeat.com (Carl Franzen)</author>
            <category>Technology</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/2uz67XXYCWQJqMkzyB7axF/9c24ffa39aa0475a77ff33d8fb0f7b73/cfr0z3n_vector_art_line_art_flat_illustration_pop_art_corporate_80000997-04b4-4e6d-a87b-c76b23b714a7.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Enter the 'Whisperverse': How AI voice agents will guide us through our days]]></title>
            <link>https://venturebeat.com/technology/enter-the-whisperverse-how-ai-voice-agents-will-guide-us-through-our-days</link>
            <guid isPermaLink="false">wp-2981461</guid>
            <pubDate>Sun, 03 Nov 2024 20:15:00 GMT</pubDate>
            <description><![CDATA[<p>A common criticism of big tech is that their platforms treat users as little more than glassy eyeballs to be monetized with targeted ads. This will soon change, but not because tech platforms are moving away from aggressively targeting users. Instead, our ears are about to become the most efficient channel for hammering us with <a href="https://arxiv.org/abs/2306.11748">AI-powered influence</a> that is responsive to the world around us. Welcome to <i>the Whisperverse.</i>   </p><p>Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by <a href="https://venturebeat.com/ai/the-profound-danger-of-conversational-ai/">whispering guidance</a> to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. These will feel like <a href="https://venturebeat.com/datadecisionmakers/augmented-reality-superhuman-abilities-and-the-future-of-medicine/">superpowers</a>.</p><h2>The &#x27;Whisperverse&#x27; will require highly advanced technology</h2><p>Of course, you won’t be the only one “<a href="https://venturebeat.com/datadecisionmakers/future-augmented-reality-will-inherit-the-earth/">augmented</a>” with context-aware AI guidance. Everyone else will have similar abilities. This will create an arms race among the public to embrace the latest AI-powered enhancements. It will not feel like a choice, because not having these capabilities will put you at a cognitive disadvantage. This is the future of mobile computing. 
It will transform the bricks we carry around into body-worn devices that see and hear our surroundings and covertly offer useful information and <a href="https://patents.google.com/patent/US7577522B2/en">friendly reminders</a> at every turn.</p><p>Most of these devices will be deployed as <a href="https://bigthink.com/the-future/metaverse-augmented-virtual-reality/">AI-powered glasses</a> because that form factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensor-equipped glasses and earbuds will allow us to respond silently to our AI assistants with simple <a href="https://patents.google.com/patent/US7489979B2">head nod gestures</a> of agreement or rejection, as we naturally do with other people. </p><p>This future is the result of two technologies maturing and merging into one — AI and augmented reality. Their combination will enable AI assistants to ride shotgun in our lives, observing our world and giving us advice that is so useful, we will quickly feel like we can’t live without it. Of course there are <a href="https://www.privacylost.org/">serious privacy concerns</a>, not to mention the risk of <a href="https://arxiv.org/abs/2306.11748">AI-powered persuasion</a> and manipulation, but what choice will we have? 
When big tech starts selling <a href="https://venturebeat.com/ai/anthropics-agentic-computer-use-is-giving-people-superpowers/">superpowers</a>, to not have these abilities will mean being at a disadvantage socially, professionally, intellectually and economically.</p><h2>&#x27;Augmented mentality&#x27; changing our lives</h2><p>I’ve been writing about our <a href="https://venturebeat.com/ai/welcome-to-the-augmented-future-watch-it-bring-you-to-your-knees/">augmented future</a> for more than 30 years, first <a href="https://spectrum.ieee.org/history-of-augmented-reality">as a researcher</a> at Stanford, NASA and the U.S. Air Force, and then as a professor and entrepreneur. When I first started working in the field we now call “augmented reality,” that phrase didn’t exist, so I described the concept as “<a href="https://www.semanticscholar.org/paper/The-Use-of-Virtual-Fixtures-as-Perceptual-Overlays-Rosenberg/2aa1b8cf3ea4ec1cbd7db63b886aa5b07175bb21">perceptual overlays</a>” and showed for the first time that AR could significantly <a href="https://www.youtube.com/watch?v=vTG7YkZ8gv8">enhance human abilities</a>. These days, there is a similar lack of words to describe the AI-powered entities that will sit on our shoulders and coach us through our day. I often refer to this emerging branch of computing as “<a href="https://venturebeat.com/ai/2024-will-be-the-year-of-augmented-mentality/">augmented mentality</a>” because it will change how we think, feel and act.   </p><p>Whatever we end up calling this new field, it is coming soon and it will <a href="https://bigthink.com/the-future/metaverse-dystopia/">mediate our lives</a>, assisting us at work, at school or even when grabbing a late-night snack in the privacy of our own kitchen. 
If you are skeptical, you’ve not been tracking the massive investment and <a href="https://venturebeat.com/games/ray-ban-meta-smart-glasses-make-ai-convos-even-more-convenient-review/">rapid progress made by Meta</a> on this front and the arms race they are stoking with Apple, Google, Samsung and other major players in the mobile market. It is increasingly clear that by 2027, this will become a major battleground in the mobile device industry.</p><p>The first of these devices is already on the market — the AI-powered Ray-Bans from Meta. Although currently a niche product, I believe it is the single <a href="https://venturebeat.com/ai/2024-will-be-the-year-of-augmented-mentality/">most important mobile device</a> being sold today. That’s because it follows the new paradigm that will soon define mobile computing: context-aware guidance. To enable this, the Meta Ray-Bans have onboard cameras and mics that feed a powerful AI engine and pump verbal guidance into your ears. At Meta Connect in September, the company showcased new <a href="https://venturebeat.com/games/ray-ban-meta-smart-glasses-make-ai-convos-even-more-convenient-review/">consumer-focused features</a> for these glasses, such as helping users find their parked cars, translating languages in real time and naturally answering questions about things you see in front of you.</p><h2>&#x27;Cute&#x27; creatures rather than &#x27;creepy&#x27; ones</h2><p>Of course, the Meta Ray-Bans are just a first step. The next step is to visually enhance your experience as you navigate your world. Also in September, Meta unveiled their prototype Orion glasses that deliver high-quality visual content in a form factor that is finally reasonable to wear in public. The Orion device is not planned for commercial deployment, but it paves the way for consumer versions to follow.</p><p>So, where is this all headed? 
By the early 2030s, I predict the convergence of AI and augmented reality will be sufficiently refined that AI assistants will appear as photorealistic avatars that are embodied within our field of view. No, I don’t believe they will be displayed as human-sized virtual assistants who follow us around all day. That would <a href="https://bigthink.com/the-present/danger-conversational-ai/">be creepy</a>. Instead, I predict they will be rendered as cute little creatures that fly out in front of us, guiding us and informing us within our surroundings.</p><p>Back in 2020, I wrote a short story (Carbon Dating) for a <a href="https://www.amazon.com/Spring-Into-SciFi-Jason-Lairamore/dp/1952796024">sci-fi anthology</a> in which I refer to these AI assistants as Electronic Life Facilitators, or ELFs for short. I like thinking of these AI-powered entities as elves because that is what they will become in our lives — <a href="https://techcrunch.com/2022/01/12/the-metaverse-will-be-filled-with-elves/">helpful little creatures</a> that prompt you with the exact cargo capacity of a railcar when you just can’t remember in an important meeting, or take the shape of a flying fairy that guides you through Costco to find the items on your shopping list as efficiently as possible. These features will not just be helpful; they will make our lives seem magical.</p><p><sub>Computer scientist Louis Rosenberg with ELF concept (Carbon Dating, 2021)</sub></p><p>On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of <a href="https://venturebeat.com/ai/the-profound-danger-of-conversational-ai/">targeted influence</a>. 
And when this is coupled with the ability to visually modify the world around you, AI/AR-powered glasses could enable the most powerful tools of <a href="https://venturebeat.com/virtual/mind-control-the-metaverse-may-be-the-ultimate-tool-of-persuasion/">persuasion and manipulation</a> ever created. For these reasons, I sincerely hope that industry leaders do not adopt an advertising business model when commercializing these AI-powered glasses. I also hope they consider how these products will shake up social dynamics, as they can change how people interact face-to-face in damaging ways (the short film <a href="https://www.youtube.com/watch?v=IsE_Pas2OQU">Privacy Lost</a> shows examples).</p><p>For three decades I’ve researched how AR and AI can enhance human abilities in positive ways. That said, the last thing I want is for giant corporations to battle for marketing dollars based on how efficiently their AI assistants can <a href="https://www.researchgate.net/publication/369355910_The_Manipulation_Problem_Conversational_AI_as_a_Threat_to_Epistemic_Agency">talk me into buying things I don’t need</a> or believing things that aren&#x27;t true. To enable the magical benefits while protecting our <a href="https://www.iiis.org/DOI2023/HA408FU/">privacy and agency</a>, I recommend that regulators quickly focus on this emerging market. Their goal should be to define the playing field so that big tech can compete aggressively on how magical they make your life, not how effectively they can influence it.</p><p><i>Louis Rosenberg is a computer scientist in the fields of AR and AI. He is known for founding Immersion Corp, Outland Research and Unanimous AI</i>. </p>]]></description>
            <author>lb_rosenberg@yahoo.com (Louis Rosenberg, Unanimous A.I.)</author>
            <category>DataDecisionMakers</category>
            <category>Technology</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/7sbTgCiNrmCKEMFc2j1Dm3/1793b3a7fdcf55217f6896bf2bc265d4/image1_faef4f.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Taking the lead on solving evolving workplace needs]]></title>
            <link>https://venturebeat.com/technology/taking-the-lead-on-solving-evolving-workplace-needs</link>
            <guid isPermaLink="false">wp-2977795</guid>
            <pubDate>Thu, 10 Oct 2024 13:40:00 GMT</pubDate>
            <description><![CDATA[<p><i>Presented by Zoom</i></p><hr/><p>Across the world, today’s workplace is actively being shaped by both leadership and employee preferences, with an eye toward attracting and retaining talent. As a result, a <a href="https://news.zoom.us/zoom-survey-reveals-hybrid-work-reigns-supreme-and-delivers-unexpected-value-to-global-organizations/">Zoom study found</a> 64% of leaders report that they’re rocking a hybrid workforce, while a full 95% say they’ve made their workplace more flexible in the past two years. The good news is that human connection isn’t limited to the butts-in-seats model — hybrid workers reported feeling connected to their companies and coworkers more often than their in-office peers. Plus, generative AI is proving to be a powerful tool that doesn’t just significantly amp up productivity, but helps facilitate relationships between coworkers.</p><p>How are business leaders approaching the unique challenges that hybrid work brings, and how are they setting themselves up for the future of work, even as the workplace and GenAI continue to evolve? VentureBeat sat down with Zoom leaders Gary Sorrentino, global CIO, and Smita Hashim, chief product officer, to discuss the major challenges they’re helping customers and partners tackle in a modern, multigenerational workforce.</p><p><b>VB: What are the current dynamics of the 2024 workplace? Do most companies have it figured out, or are we still muddling through?</b></p><p><b>Sorrentino: </b>We see companies of all different sizes and a convergence of post-pandemic worries. People struggle with what their future work or current work will look like, whether that’s hybrid or in-person. We’re seeing a brand-new generation of very smart, very tech-savvy individuals coming into the workplace. We’ll be about 20% Gen Z by the end of the year and about 90% of them will be hired by millennials. 
This convergence means they have different ideas about how they can work more productively. We’re supporting so many different models of what hybrid work is, different models of how people are going to work together.</p><p><b>Hashim:</b> We’re definitely navigating uncharted territory. People are discovering different ways to work. Out of the pandemic, there was a desire to be really prescriptive. But now there are a number of ways to work, including hybrid, and people realize things can be more flexible. We still have things to figure out, but we’re making significant strides and the <a href="https://www.zoom.com/en/blog/introducing-zoom-workplace/">tools are getting better</a> at enhancing productivity and collaboration.</p><p><b>VB: The approach to the modern workplace is using AI as a strategy and not just a technology tool. How have you seen that manifesting at Zoom? How are you innovating to prepare your customers to use AI that way?</b></p><p><b>Hashim:</b> Generative AI offers tremendous potential to make work more engaging, and less mundane, enabling us to “work happy.” At Zoom, we’re on a journey to realize this potential through AI Companion. Launched last fall, we initially focused on meeting summarization, composing emails, chats, documents and follow-up tasks to boost efficiency. Now, we’re doubling down on increasing engagement by enhancing meeting summaries and in-meeting questions. We’re incredibly focused on delivering real value to our customers and cutting through the GenAI hype, which is why we’ve made AI Companion available across all paid licenses so that more people can amplify their skills and simplify their workdays.</p><p>The response has been really positive. It’s the quality, and solving repetitive tasks, and doing it well, which people absolutely love. 
It’s getting behind the hype of GenAI to find the reality — and people get very excited when they see what can be done today.</p><p><b>VB: Good communication can make or break a business. How can leaders invest smartly and consolidate tools without compromising on streamlined communications? How can you consolidate all of these tools?</b></p><p><b>Sorrentino:</b> We have five generations of people in the workforce now. They’re all dealing either peer-to-peer, inside a company, or peer-to-client. Companies today need to look at each one of the different communication channels they’re using, whether it’s chat or voice or even email, and think about being omnichannel. They need to understand their use, understand what demographics use them, and make sure that when they’re connecting, they’re using the right channels.</p><p><b>Hashim:</b> I’m with Gary. I feel like we need to support all of this omnichannel strategy for users to communicate with each other in ways that feel comfortable to them, in ways that feel natural to them. And while I’m a big fan of omnichannel, the reality is, while it connects all of us, it also creates noise. It’s continuous communication coming from all directions. How do we apply technology in order to simplify and help users streamline and be more productive while still keeping their connections?</p><p><b>VB: How can Zoom Workplace help counter the feeling of disenfranchisement, the freezing out of colleagues working remotely, and the potential loss of team cohesion?</b></p><p><b>Sorrentino:</b> We design products around any kind of human interaction, whether it’s being hybrid, being one-on-one virtual, or being in-person. Zoom Workplace was designed around making people feel like their setup is built for their needs and different communication styles based on the workplace environment. 
This gives equity to the individual and helps to amplify productivity and connections between coworkers who won&#x27;t feel their workstyle is a deterrent to collaboration.</p><p>During the pandemic, everyone said they felt disenfranchised from their staff. But here’s what we learned over time. When you go by someone’s desk in the physical world, you might see a picture of their son playing baseball. But in the pandemic world, you got to actually talk to their son when they appeared for a few minutes on screen. You got to hear about the baseball game right from him. You got to know that the dining room was blue and that they eat dinner at 5:30.</p><p>A platform like Zoom with Zoom Workplace is making it easier for us to connect, easier for us to be productive, easier for us to work in different styles, yet still contribute equally. We designed Intelligent Director to make the in-room meeting experience and the outside-of-the-room experience the same. We have to continue to build products and services and features that will allow humans to feel like there’s equity, and feel productive. Zoom Workplace is looking for where humans are going to touch each other and figuring out, how do I optimize that connection? How do I make it so that that becomes the most productive collaboration on their terms, not on a prescribed basis?</p><p><b>VB: As a leader empowering today’s collaboration and modern work, what is Zoom’s approach to innovation in terms of meeting and staying ahead of the evolving needs of its customers?</b></p><p><b>Hashim:</b> Two ways. First of all, we are very close to our customers. The top value for Zoom is customer care. Listening to our customers, and being responsive to them at what we call “Zoom speed.” That keeps us ahead of innovations. It keeps us moving in a direction that’s useful to our customers. We evolve rapidly. That’s something customers appreciate about us. They say about incumbent platforms, once you buy it, it’s what you have. 
With us they see that partnership.</p><p>The other aspect is, we also try to be forward-looking and think about what technology we can provide that could be differentiating, that could add a lot of value. Whether it’s noise cancellation or virtual backgrounds, or even now, thinking about generative AI capabilities, taking a federated approach on the back end.</p><p><b>Sorrentino:</b> People go to school on Zoom. Titans of industry run their industries on Zoom. It’s the same product. Having that breadth allows us to see -- how do humans use it who are not tied to an organization? How do some of the largest 350,000-person companies use it? By having that dimension on both sides, it allows us to balance the product. Smita has to create a product that a six-year-old can use at home on a snow day, and a person who runs the largest bank in the world can use to run his or her company. The good part is, we live for that. We’re passionate about that. We want to hear about it.</p><p><b>VB: How does Zoom envision the future of work and collaboration, especially as companies begin to develop road maps for digital transformation? What does Zoom’s innovation road map look like?</b></p><p><b>Hashim:</b> We talked a little bit about innovation and the partnership with customers, but it’s also looking ahead on the technology stack. Like the user experience, how can we make it better? We look not just at one product at a time, but across Zoom Workplace -- users are often using these products together. Internally, within Zoom, we really use our products. That provides us with very candid feedback and it gives us an opportunity to make sure our products are useful. Our products are getting better. 
We’re able to evolve them in ways that provide a great experience.</p><p><b><i>To learn more about the ways Zoom Workplace is making hybrid organizations equitable, forging connections, ramping up productivity and more, explore the Zoom Workplace </i></b><a href="https://click.zoom.us/Workplace-Solution-Guide"><b><i>solution guide</i></b></a><b><i>.</i></b></p><hr/><p>Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact <a href="mailto:sales@venturebeat.com">sales@venturebeat.com</a></p>]]></description>
            <category>Automation</category>
            <category>Technology</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/6L1bzSBNNs572Qc2Oshgal/355238283e6ee57a1f65efc41277062f/GettyImages-1382266116.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[How IT leaders can spearhead the charge to transform education]]></title>
            <link>https://venturebeat.com/business/how-it-leaders-can-spearhead-the-charge-to-transform-education</link>
            <guid isPermaLink="false">wp-2976860</guid>
            <pubDate>Wed, 02 Oct 2024 13:40:00 GMT</pubDate>
            <description><![CDATA[<p><i>Presented by MSI</i></p><hr/><p>If a primary goal of education is to prepare kids for the future, IT leaders play a more pivotal role than ever. Technology has profoundly impacted work in every industry -- and it’s opened up vast new possibilities in new fields, from positions across STEM industries and AI, to esports and beyond. It’s also transformed how students engage with learning, skill development and high-level problem-solving and critical thinking.</p><p>“Exposing students to computer science and high-end technology is not only useful for the future as they inevitably use it in their careers, but it changes their relationship to school,” says Mat Holley, esports program manager at MSI. “When they’re more engaged, they have better attendance. They have better grades. They’re more prepared for college and the job market. The enthusiasm is remarkable.”</p><p>School boards are leading the charge for these initiatives, but they can’t do it on their own. They must partner with IT leaders in their district, education specialists and technology industry professionals to deliver these learning experiences, and the challenge is to ensure that these programs are cost-effective, with technology, expertise and activities that are future-proof.</p><h2>How technology is transforming the learning experience</h2><p>To support these initiatives, the choice of hardware and software becomes critical. Holley points to the extracurricular club in the charter school district in Chula Vista, San Diego he worked with to help develop and outfit new technology learning initiatives. Students there work on video design, broadcasting, AI and music creation using Vector GP and Raider GE series laptops from MSI, integrating graphics hardware from Nvidia and processing power from Intel. 
And this high-end gaming hardware and software supports what’s become the largest high school-run esports program in the U.S., the Kern High School District Esports League.</p><p>“I’ve worked with schools that are far along their journey and ready to level up their hardware, to keep pace with how the kids are working and learning, and I’ve also helped districts build the programs from the ground up, from the right hardware to student outreach,” Holley explains. “And though much of this is uncharted territory, the momentum is building, sometimes through word of mouth.”</p><h2>The surprising benefits of esports</h2><p>Educators are sharing knowledge, sparking interest and collaborating with their peers, working toward developing a curriculum standard and blueprint for the hardware and software specifications that can support those programs.</p><p>Though it initially surprised many educators and leaders that esports can have such a profound effect on kids -- especially the ones who often feel excluded from other sports -- the number of esports programs is growing. Not only are there tremendous educational and social development benefits for the students who participate; esports also attracts kids who have never joined an extracurricular club: the girls who have felt left out in science and math classes, the BIPOC students who deserve bigger opportunities. The clubs raise their confidence in their own abilities, and more often than not, these students go on to study computer science or some other linked technology career.</p><p>“There is no barrier to entry to be a gamer, and this goes for computer science at large,” Holley says. “You don’t even have to be a gamer to enter these clubs. 
More and more, esports is plugged into all the various technology clubs like design, broadcasting and journalism, and formerly disenfranchised kids are finding their calling through these clubs in an unprecedented way.”</p><h2>Building the learning experience from the ground up</h2><p>Of course, there continue to be challenges for school districts developing these programs, and many of them come down to major budget constraints. There are also the difficulties that come with ensuring security is solid, integrating new technology into existing networks, and moving the environment from on-prem to the cloud. MSI collaborates with educational institutions to ensure that they’re not only hitting the district’s hardware specs, but that new hardware will be integrated seamlessly.</p><p>“As we saw more esports integrated into schools, we worked with schools to meet the specifications of their price points, their warranty needs, which are typically longer than a retail warranty,” Holley says. “We wanted to also make sure that these were machines that the students got excited to play on, that sophisticated esports titles were supported. As we started to work with more schools closely, we integrated products from our professional line to improve the student experience and give them access to even more tech areas to explore.”</p><p>Educational IT leaders rejoice: adding computer labs like these is easier than ever. As computing advances, the size of the hardware continues to shrink, making student computers lightweight and easy for IT teams to deploy. IT leaders should also look for hardware that’s easy to integrate, especially from a security point of view -- however, most districts are working with legacy hardware environments. </p><p>“As you build a technology center for students, you have to consider whether existing hardware will play with the new, and whether it will move to the cloud securely,” Holley says. 
“But as long as we can integrate security standards like content filters, custom imaging and Autopilot deployment, it’s much easier to deploy at scale in almost any environment. We try to build directly in tandem with district-wide IT departments, so they can tell us what they need and what their road map looks like. Then from the manufacturer side, we’re able to make sure that we all play along in the years to come.”</p><p>Another major consideration is product life cycles, which are incredibly short in the consumer world. IT leaders should work with a partner that offers dedicated hardware for education, with life cycles long enough to mesh with the fairly long bidding and buying timeline for education purchases.</p><p>And of course, as cloud computing becomes the standard, it’s important to stay abreast of hardware and software changes and evolving risk scenarios. That means research, testing and working with your supplier to keep informed about the newest hardware and software advancements and when it’s time to upgrade. It also means selecting hardware that’s easily upgradable and expandable.</p><h2>Making hardware choices a whole lot easier</h2><p>To support technology education, MSI offers the <a href="https://us.msi.com/Business-Productivity-PCs/Products">Cubi NUC and DP21</a>, which support Intel vPro and Windows Autopilot to simplify management, enhance security and streamline the deployment process. Thunderbolt 4 technology and power delivery offer fast connectivity and charging. They’re also easily scalable, and offer real-time data processing for AI and machine learning. Their compact size offers flexible installation and a good performance vs. 
footprint ratio, plus flexible configuration.</p><p>The company also offers STEM, gaming and content creation computers like the DP180, <a href="https://www.msi.com/Content-Creation/CreatorPro-X18-HX-A14VX">CreatorPro</a>, <a href="https://www.msi.com/Laptops">Vector GP and Raider GE series laptops</a> with dedicated graphics hardware that accelerate graphics-heavy applications, and offer easy upgradability with expandable memory and storage options to ensure longevity.</p><p>Veteran resellers and manufacturers will work with decision-makers to ensure schools get the best hardware and software their money can buy, and keep IT teams in the loop about what’s coming next and how to make sure students have every opportunity to learn with the newest technology possible.</p><p>“We’re paving a path for these students into the future, and it’s important that we’re equipping them for everything that’s to come,” Holley says. “Gaming and other high-tech hardware has become an integral part of the plan, so IT leaders must be willing to get creative when designing technology resources and work with allies across manufacturing and reselling to push initiatives forward.”</p><p><b><i>Dig deeper: </i></b><a href="https://us.msi.com/Business-Productivity-PCs/Products"><b><i>Learn more here</i></b></a><b><i> about the technology solutions that power today’s educational experiences.</i></b></p><hr/><p>Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact <a href="mailto:sales@venturebeat.com">sales@venturebeat.com</a></p>]]></description>
            <category>Business</category>
            <category>Programming &amp; Development</category>
            <category>Virtual Comms &amp; Collab</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/2EbLCMDG7kWP0uAxc3fR05/b6f873f7a2174cb619ff429ef5883bed/classroom.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
    </channel>
</rss>