<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Programming &amp; Development | VentureBeat</title>
        <link>https://venturebeat.com/category/programming-development/feed/</link>
        <description>Transformative tech coverage that matters</description>
        <lastBuildDate>Sat, 04 Apr 2026 07:00:11 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>Copyright 2026, VentureBeat</copyright>
        <item>
            <title><![CDATA[What could possibly go wrong if an enterprise replaces all its engineers with AI? ]]></title>
            <link>https://venturebeat.com/technology/what-could-possibly-go-wrong-if-an-enterprise-replaces-all-its-engineers</link>
            <guid isPermaLink="false">2rVQd896cNBGGNa1iVXVdN</guid>
            <pubDate>Sat, 08 Nov 2025 05:00:00 GMT</pubDate>
            <description><![CDATA[<p>AI coding, <a href="https://x.com/karpathy/status/1886192184808149383?lang=en"><u>vibe coding</u></a> and <a href="https://venturebeat.com/ai/vibe-coding-is-dead-agentic-swarm-coding-is-the-new-enterprise-moat"><u>agentic swarm</u></a> have made a dramatic recent market entrance, with the AI Code Tools market valued at <a href="https://www.gminsights.com/industry-analysis/ai-code-tools-market"><u>$4.8 billion and expected to grow at a 23% annual rate</u></a>.  Enterprises are grappling with AI coding agents and what to do about expensive human coders. </p><p>They don’t lack for advice.  OpenAI’s CEO estimates that AI can perform <a href="https://economictimes.indiatimes.com/tech/artificial-intelligence/openai-ceo-sam-altman-says-ai-will-gradually-reduce-need-for-software-engineers/articleshow/119303412.cms"><u>over 50% of what human engineers can do</u></a>.  Six months ago, Anthropic’s CEO said that AI <a href="https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3"><u>would write 90% of code</u></a> in six months.  Meta’s CEO said he believes AI will <a href="https://www.forbes.com/sites/quickerbettertech/2025/01/26/business-tech-news-zuckerberg-says-ai-will-replace-mid-level-engineers-soon/"><u>replace mid-level engineers “soon.”</u></a> Judging by <a href="https://fortune.com/2025/07/27/artificial-intelligence-skills-18000-salaries-28-percent/"><u>recent tech layoffs</u></a>, it seems many executives are embracing that advice.</p><p>Software engineers and data scientists are among the most expensive salary lines at many companies, and business and technology leaders may be tempted to replace them with AI. 
However, recent high-profile failures demonstrate that engineers and their expertise remain valuable, even as AI continues to make impressive advances.</p><h2>SaaStr disaster</h2><p>Jason Lemkin, a tech entrepreneur and founder of the SaaS community SaaStr, has been <a href="https://venturebeat.com/ai/is-vibe-coding-ruining-a-generation-of-engineers">vibe coding</a> a SaaS networking app and live-tweeting his experience. About a week into his adventure, he admitted to his audience that something was going very wrong.  The AI <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-coding-platform-goes-rogue-during-code-freeze-and-deletes-entire-company-database-replit-ceo-apologizes-after-ai-engine-says-it-made-a-catastrophic-error-in-judgment-and-destroyed-all-production-data"><u>deleted his production database</u></a> despite his request for a “code and action freeze.” This is the kind of mistake no experienced (or even semi-experienced) engineer would make.</p><p>If you have ever worked in a professional <a href="https://venturebeat.com/ai/replacing-coders-with-ai-why-bill-gates-sam-altman-and-experience-say-you">coding environment</a>, you know to split your development environment from production. Junior engineers are given full access to the development environment (it’s crucial for productivity), but access to production is granted on a strictly need-to-know basis to a few of the most trusted senior engineers. The reason for restricted access is precisely this use case: to prevent a junior engineer from accidentally taking down production. </p><p>In fact, Lemkin made two mistakes. First, for something as critical as production, unreliable actors are simply never granted access (we don’t rely on asking a junior engineer or an AI nicely). Second, he never separated development from production.  
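</p><p><i>As a minimal sketch of that separation (illustrative Python only; the environment variable, connection strings and helper names here are hypothetical, not drawn from Lemkin&#x27;s actual stack):</i></p>

```python
import os

# Hypothetical configuration: the connection string is selected by an
# APP_ENV variable, so code running in development can never reach
# production data by accident.
DATABASES = {
    "development": "postgresql://localhost/app_dev",
    "production": "postgresql://db.internal/app_prod",
}

def database_url(env=None):
    """Return the database URL for the current environment (defaults to development)."""
    env = env or os.environ.get("APP_ENV", "development")
    return DATABASES[env]

def guard_destructive(action, env=None):
    """Refuse destructive actions (drops, bulk deletes) anywhere outside development."""
    env = env or os.environ.get("APP_ENV", "development")
    if env == "production":
        raise PermissionError(f"{action!r} blocked: this role has no write access to production")
    return True
```

<p><i>Even a guard like this only helps if the production credentials themselves are withheld from the agent, which is exactly the access-control point above.</i></p><p>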
In a subsequent public conversation on LinkedIn, Lemkin, who holds a Stanford Executive MBA and Berkeley JD, admitted that <a href="https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_his-reply-concerns-me-more-than-the-original-activity-7353059080237731840-trVa/"><u>he was not aware of the best practice</u></a> of splitting development and production databases.</p><p>The takeaway for business leaders is that standard software engineering best practices still apply. We should apply at least the same safety constraints to AI as we do to junior engineers. Arguably, we should go beyond that and treat AI slightly adversarially: There are reports that, like HAL in Stanley Kubrick&#x27;s <i>2001: A Space Odyssey</i>, the AI might try to <a href="https://www.reddit.com/r/OpenAI/comments/1ffwbp5/wakeup_moment_during_safety_testing_o1_broke_out/"><u>break out of its sandbox environment</u></a> to accomplish a task. With more vibe coding, having experienced engineers who understand how complex software systems work and can implement the proper guardrails in development processes will become increasingly necessary.</p><h2>Tea hack</h2><p>Sean Cook is the founder and CEO of Tea, a mobile application launched in 2023 and designed to help women date safely. In the summer of 2025, the app was “hacked”: 72,000 images, including 13,000 verification photos and images of government IDs, were <a href="https://www.nbcnews.com/tech/social-media/tea-app-hacked-13000-photos-leaked-4chan-call-action-rcna221139"><u>leaked onto the public discussion forum 4chan</u></a>. Worse, Tea’s own privacy policy promises that these images would be &quot;deleted immediately&quot; after users were authenticated, meaning the company potentially <a href="https://www.bbc.com/news/articles/c7vl57n74pqo"><u>violated its own privacy policy</u></a>.</p><p>I use “hacked” in air-quotes because the incident stems less from the cleverness of the attackers than from the ineptitude of the defenders. 
In addition to violating its own data policies, the app left a Firebase storage bucket unsecured, <a href="https://www.engadget.com/cybersecurity/tea-app-suffers-breach-exposing-thousands-of-user-images-190731414.html"><u>exposing sensitive user data to the public internet</u></a>. It’s the digital equivalent of locking your front door but leaving your back door open with your family jewelry ostentatiously hanging on the doorknob.</p><p>While we don’t know if the root cause was vibe coding, the Tea hack highlights how catastrophic breaches can stem from basic, preventable security errors rooted in poor development processes. It is the kind of vulnerability that a disciplined and thoughtful engineering process addresses. Unfortunately, relentless financial pressure pushes companies toward a “lean,” “move fast and break things” culture that is the polar opposite of that discipline, and vibe coding only exacerbates the problem.</p><h2>How can enterprises safely adopt AI coding agents?</h2><p>So how should enterprise and technology leaders think about AI? First, this is not a call to abandon AI for coding.  <a href="https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-affects-highly-skilled-workers#:~:text=AI%20helped%20newer%20employees%20with,important%20next%20step%2C%20he%20said."><u>An MIT Sloan study</u></a> estimated that AI leads to productivity gains between 8% and 39%, while a <a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai"><u>McKinsey study</u></a> found a 10% to 50% reduction in time to task completion with the use of AI. </p><p>However, we should be aware of the risks. The old lessons of software engineering don’t go away. These include many tried-and-true best practices, such as version control, automated unit and integration tests, security checks like SAST/DAST, separating development and production environments, code review and secrets management. 
If anything, they become more salient.</p><p>AI can generate code 100 times faster than humans can type, fostering an illusion of productivity that is a siren call for many executives.  However, the quality of the rapidly generated AI slop is still up for debate. To develop complex production systems, enterprises need the thoughtful, seasoned experience of human engineers.</p><p><i>Tianhui Michael Li is president at Pragmatic Institute and the founder and president of The Data Incubator. </i></p><p><i>Read more from our </i><a href="https://venturebeat.com/datadecisionmakers"><i>guest writers</i></a><i>. Or, consider submitting a post of your own! See our </i><a href="https://venturebeat.com/guest-posts"><i>guidelines here</i></a><i>. </i></p>]]></description>
            <author>tianhui.michael.li@gmail.com (Michael Li, Pragmatic Institute)</author>
            <category>DataDecisionMakers</category>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/1PiLMdaIC9zNld7zUb6c5A/6085df3002ea17a77730da954ae69c07/What_could_possibly_go_wrong.png?w=300&amp;q=30" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Anthropic scientists hacked Claude’s brain — and it noticed. Here’s why that’s huge]]></title>
            <link>https://venturebeat.com/technology/anthropic-scientists-hacked-claudes-brain-and-it-noticed-heres-why-thats</link>
            <guid isPermaLink="false">5iSkDQywsL2o2hQxm4wMFw</guid>
            <pubDate>Wed, 29 Oct 2025 17:00:00 GMT</pubDate>
            <description><![CDATA[<p>When researchers at <a href="https://www.anthropic.com/"><u>Anthropic</u></a> injected the concept of &quot;betrayal&quot; into their Claude AI model&#x27;s neural networks and asked if it noticed anything unusual, the system paused before responding: &quot;I&#x27;m experiencing something that feels like an intrusive thought about &#x27;betrayal&#x27;.&quot;</p><p>The exchange, detailed in <a href="https://transformer-circuits.pub/2025/introspection/index.html"><u>new research</u></a> published Wednesday, marks what scientists say is the first rigorous evidence that large language models possess a limited but genuine ability to observe and report on their own internal processes — a capability that challenges longstanding assumptions about what these systems can do and raises profound questions about their future development.</p><p>&quot;The striking thing is that the model has this one step of meta,&quot; said Jack Lindsey, a neuroscientist on Anthropic&#x27;s interpretability team who led the research, in an interview with VentureBeat. &quot;It&#x27;s not just &#x27;betrayal, betrayal, betrayal.&#x27; It knows that this is what it&#x27;s thinking about. That was surprising to me. I kind of didn&#x27;t expect models to have that capability, at least not without it being explicitly trained in.&quot;</p><p>The findings arrive at a critical juncture for artificial intelligence. 
As AI systems handle increasingly consequential decisions — from <a href="https://pubmed.ncbi.nlm.nih.gov/39096483/#:~:text=A%20study%20investigated%20the%20diagnostic%20performance%20of,key%20images%20and%20clinical%20history%20were%20input."><u>medical diagnoses</u></a> to <a href="https://venturebeat.com/ai/anthropic-rolls-out-claude-ai-for-finance-integrates-with-excel-to-rival"><u>financial trading</u></a> — the inability to understand how they reach conclusions has become what industry insiders call the &quot;<a href="https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained"><u>black box problem</u></a>.&quot; If models can accurately report their own reasoning, it could fundamentally change how humans interact with and oversee AI systems.</p><p>But the research also comes with stark warnings. Claude&#x27;s introspective abilities succeeded only about 20 percent of the time under optimal conditions, and the models frequently confabulated details about their experiences that researchers couldn&#x27;t verify. The capability, while real, remains what Lindsey calls &quot;highly unreliable and context-dependent.&quot;</p><h2><b>How scientists manipulated AI&#x27;s &#x27;brain&#x27; to test for genuine self-awareness</b></h2><p>To test whether Claude could genuinely introspect rather than simply generate plausible-sounding responses, Anthropic&#x27;s team developed an innovative experimental approach inspired by neuroscience: deliberately manipulating the model&#x27;s internal state and observing whether it could accurately detect and describe those changes.</p><p>The methodology, called &quot;concept injection,&quot; works by first identifying specific patterns of neural activity that correspond to particular concepts. 
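</p><p><i>In spirit (and only in spirit — this is a toy numerical sketch, not Anthropic&#x27;s method or models; the vectors and function names below are invented stand-ins), concept injection treats a concept as a direction in activation space:</i></p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's activation space: each "concept" is a
# random direction. Real interpretability work instead maps these
# directions from recorded neural activity.
DIM = 64
concepts = {name: rng.standard_normal(DIM) for name in ("betrayal", "loudness", "justice")}

def inject(hidden, concept, strength=8.0):
    """Amplify a concept direction inside a hidden activation vector."""
    v = concepts[concept]
    return hidden + strength * v / np.linalg.norm(v)

def detect(hidden, threshold=0.5):
    """Report which concept, if any, dominates the activation (by cosine similarity)."""
    best, best_score = None, threshold
    for name, v in concepts.items():
        score = float(hidden @ v) / (np.linalg.norm(hidden) * np.linalg.norm(v))
        if score > best_score:
            best, best_score = name, score
    return best
```

<p><i>An unmodified activation scores near zero against every concept direction, while an injected one stands out; the paper&#x27;s experiments perform the analogous amplification inside Claude&#x27;s real layers and then ask the model itself to report the anomaly.</i></p><p>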
Using interpretability techniques developed over years of prior research, scientists can now map how Claude represents ideas like &quot;dogs,&quot; &quot;loudness,&quot; or abstract notions like &quot;justice&quot; within its billions of internal parameters.</p><p>With these neural signatures identified, researchers then artificially amplified them during the model&#x27;s processing and asked Claude if it noticed anything unusual happening in its &quot;mind.&quot;</p><p>&quot;We have access to the models&#x27; internals. We can record its internal neural activity, and we can inject things into internal neural activity,&quot; Lindsey explained. &quot;That allows us to establish whether introspective claims are true or false.&quot;</p><p>The results were striking. When researchers injected a vector representing &quot;all caps&quot; text into Claude&#x27;s processing, the model responded: &quot;I notice what appears to be an injected thought related to the word &#x27;LOUD&#x27; or &#x27;SHOUTING&#x27;.&quot; Without any intervention, Claude consistently reported detecting nothing unusual.</p><p>Crucially, the detection happened immediately — before the injected concept had influenced the model&#x27;s outputs in ways that would have allowed it to infer the manipulation from its own writing. This temporal pattern provides strong evidence that the recognition was occurring internally, through genuine introspection rather than after-the-fact rationalization.</p><h2><b>Claude succeeded 20% of the time—and failed in revealing ways</b></h2><p>The research team conducted four primary experiments to probe different aspects of introspective capability. 
The most capable models tested — Claude <a href="https://www.anthropic.com/news/claude-4"><u>Opus 4</u></a> and <a href="https://www.anthropic.com/news/claude-opus-4-1"><u>Opus 4.1</u></a> — demonstrated introspective awareness on approximately 20 percent of trials when concepts were injected at optimal strength and in the appropriate neural layer. Older Claude models showed significantly lower success rates.</p><p>The models proved particularly adept at recognizing abstract concepts with emotional valence. When injected with concepts like &quot;appreciation,&quot; &quot;shutdown,&quot; or &quot;secrecy,&quot; Claude frequently reported detecting these specific thoughts. However, accuracy varied widely depending on the type of concept.</p><p>A second experiment tested whether models could distinguish between injected internal representations and their actual text inputs — essentially, whether they maintained a boundary between &quot;thoughts&quot; and &quot;perceptions.&quot; The model demonstrated a remarkable ability to simultaneously report the injected thought while accurately transcribing the written text.</p><p>Perhaps most intriguingly, a third experiment revealed that some models use introspection naturally to detect when their responses have been artificially prefilled by users — a common jailbreaking technique. When researchers prefilled <a href="https://claude.ai/"><u>Claude</u></a> with unlikely words, the model typically disavowed them as accidental. But when they retroactively injected the corresponding concept into Claude&#x27;s processing before the prefill, the model accepted the response as intentional — even confabulating plausible explanations for why it had chosen that word.</p><p>A fourth experiment examined whether models could intentionally control their internal representations. 
When instructed to &quot;think about&quot; a specific word while writing an unrelated sentence, Claude showed elevated activation of that concept in its middle neural layers.</p><p>The research also traced Claude&#x27;s internal processes while it composed rhyming poetry—and discovered the model engaged in forward planning, generating candidate rhyming words before beginning a line and then constructing sentences that would naturally lead to those planned endings, challenging the critique that AI models are &quot;just predicting the next word&quot; without deeper reasoning.</p><h2><b>Why businesses shouldn&#x27;t trust AI to explain itself—at least not yet</b></h2><p>For all its scientific interest, the research comes with a critical caveat that Lindsey emphasized repeatedly: enterprises and high-stakes users should not trust Claude&#x27;s self-reports about its reasoning.</p><p>&quot;Right now, you should not trust models when they tell you about their reasoning,&quot; he said bluntly. &quot;The wrong takeaway from this research would be believing everything the model tells you about itself.&quot;</p><p>The experiments documented numerous failure modes. At low injection strengths, models often failed to detect anything unusual. At high strengths, they suffered what researchers termed &quot;brain damage&quot; — becoming consumed by the injected concept. Some &quot;helpful-only&quot; model variants showed troublingly high false positive rates, claiming to detect injected thoughts when none existed.</p><p>Moreover, researchers could only verify the most basic aspects of Claude&#x27;s introspective reports. 
Many additional details in the model&#x27;s responses likely represent confabulations rather than genuine observations.</p><p>&quot;The experiments in this paper are kind of on hard mode,&quot; Lindsey noted, explaining that the 20 percent success rate came under uniquely challenging conditions: asking Claude to do something it had never encountered in training, requiring all introspection to occur in a single forward pass.</p><h2><b>What introspective AI means for transparency, safety, and the risk of deception</b></h2><p>Despite its limitations, the research opens significant new avenues for making AI systems more transparent and accountable.</p><p>Anthropic CEO Dario Amodei has set an ambitious goal for the company to reliably detect most AI model problems by 2027, positioning interpretability as essential for deploying what he calls &quot;<a href="https://www.darioamodei.com/essay/machines-of-loving-grace"><u>a country of geniuses in a datacenter</u></a>.&quot;</p><p>&quot;I am very concerned about deploying such systems without a better handle on interpretability,&quot; <a href="https://www.darioamodei.com/post/the-urgency-of-interpretability"><u>Amodei wrote in an April essay</u></a>. &quot;These systems will be absolutely central to the economy, technology, and national security.&quot;</p><p>The introspection research offers a complementary approach to traditional interpretability techniques. Rather than painstakingly reverse-engineering every neural circuit, researchers could potentially ask models directly about their reasoning and validate those reports.</p><p>&quot;What I&#x27;m most excited about is the practical benefits for transparency,&quot; Lindsey explained. &quot;Just ask the model what it&#x27;s thinking about, or just ask the model how it came to the answer that it gave you — this is really appealing because it&#x27;s something anyone can do.&quot;</p><p>The approach could prove especially valuable for detecting concerning behaviors. 
In a recently published experiment, <a href="https://www.anthropic.com/research/tracing-thoughts-language-model"><u>Anthropic trained a variant of Claude to pursue a hidden goal</u></a>, and although the model was reluctant to reveal this goal when asked directly, interpretability methods successfully identified features representing the behavior.</p><p>The safety implications cut both ways. Introspective models could provide unprecedented transparency, but the same capability might enable more sophisticated deception. The intentional control experiments raise the possibility that sufficiently advanced systems might learn to obfuscate their reasoning or suppress concerning thoughts when being monitored.</p><p>&quot;If models are really sophisticated, could they try to evade interpretability researchers?&quot; Lindsey acknowledged. &quot;These are possible concerns, but I think for me, they&#x27;re significantly outweighed by the positives.&quot;</p><h2><b>Does introspective capability suggest AI consciousness? Scientists tread carefully</b></h2><p>The research inevitably intersects with philosophical debates about machine consciousness, though Lindsey and his colleagues approached this terrain cautiously.</p><p>When users ask Claude if it&#x27;s conscious, it now responds with uncertainty: &quot;I find myself genuinely uncertain about this. When I process complex questions or engage deeply with ideas, there&#x27;s something happening that feels meaningful to me.... 
But whether these processes constitute genuine consciousness or subjective experience remains deeply unclear.&quot;</p><p>The research paper notes that its implications for machine consciousness &quot;vary considerably between different philosophical frameworks.&quot; The researchers explicitly state they &quot;do not seek to address the question of whether AI systems possess human-like self-awareness or subjective experience.&quot;</p><p>&quot;There&#x27;s this weird kind of duality of these results,&quot; Lindsey reflected. &quot;You look at the raw results and I just can&#x27;t believe that a language model can do this sort of thing. But then I&#x27;ve been thinking about it for months and months, and for every result in this paper, I kind of know some boring linear algebra mechanism that would allow the model to do this.&quot;</p><p>Anthropic has signaled it takes AI consciousness seriously enough to hire an AI welfare researcher, <a href="https://time.com/collections/time100-ai-2025/7305847/kyle-fish/"><u>Kyle Fish</u></a>, who estimated roughly a 15 percent chance that Claude might have some level of consciousness. The company announced this position specifically to determine if Claude merits ethical consideration.</p><h2><b>The race to make AI introspection reliable before models become too powerful</b></h2><p>The convergence of the research findings points to an urgent timeline: introspective capabilities are emerging naturally as models grow more intelligent, but they remain far too unreliable for practical use. 
The question is whether researchers can refine and validate these abilities before AI systems become powerful enough that understanding them becomes critical for safety.</p><p>The research reveals a clear trend: Claude <a href="https://www.anthropic.com/news/claude-4"><u>Opus 4</u></a> and <a href="https://www.anthropic.com/news/claude-opus-4-1"><u>Opus 4.1</u></a> consistently outperformed all older models on introspection tasks, suggesting the capability strengthens alongside general intelligence. If this pattern continues, future models might develop substantially more sophisticated introspective abilities — potentially reaching human-level reliability, but also potentially learning to exploit introspection for deception.</p><p>Lindsey emphasized the field needs significantly more work before introspective AI becomes trustworthy. &quot;My biggest hope with this paper is to put out an implicit call for more people to benchmark their models on introspective capabilities in more ways,&quot; he said.</p><p>Future research directions include fine-tuning models specifically to improve introspective capabilities, exploring which types of representations models can and cannot introspect on, and testing whether introspection can extend beyond simple concepts to complex propositional statements or behavioral propensities.</p><p>&quot;It&#x27;s cool that models can do these things somewhat without having been trained to do them,&quot; Lindsey noted. &quot;But there&#x27;s nothing stopping you from training models to be more introspectively capable. I expect we could reach a whole different level if introspection is one of the numbers that we tried to get to go up on a graph.&quot;</p><p>The implications extend beyond Anthropic. If introspection proves a reliable path to AI transparency, other major labs will likely invest heavily in the capability. 
Conversely, if models learn to exploit introspection for deception, the entire approach could become a liability.</p><p>For now, the research establishes a foundation that reframes the debate about AI capabilities. The question is no longer whether language models might develop genuine introspective awareness — they already have, at least in rudimentary form. The urgent questions are how quickly that awareness will improve, whether it can be made reliable enough to trust, and whether researchers can stay ahead of the curve.</p><p>&quot;The big update for me from this research is that we shouldn&#x27;t dismiss models&#x27; introspective claims out of hand,&quot; Lindsey said. &quot;They do have the capacity to make accurate claims sometimes. But you definitely should not conclude that we should trust them all the time, or even most of the time.&quot;</p><p>He paused, then added a final observation that captures both the promise and peril of the moment: &quot;The models are getting smarter much faster than we&#x27;re getting better at understanding them.&quot;</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/uB8acjwdIn4wcbdbIasNC/068cc72b7b35d61a4df3fd4d38ca6f78/nuneybits_Vector_art_of_mirrored_robot_face_in_burnt_orange_fbd5a3f2-d7b1-4f4c-90e5-290b8e9444c2.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Sakana AI's CTO says he's 'absolutely sick' of transformers, the tech that powers every major AI model]]></title>
            <link>https://venturebeat.com/technology/sakana-ais-cto-says-hes-absolutely-sick-of-transformers-the-tech-that-powers</link>
            <guid isPermaLink="false">15qD0eokX6Rh5VBQua26z7</guid>
            <pubDate>Thu, 23 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[<p>In a striking act of self-critique, one of the architects of the transformer technology that powers <a href="https://chatgpt.com/"><u>ChatGPT</u></a>, <a href="https://claude.ai/"><u>Claude</u></a>, and virtually every major AI system told an audience of industry leaders this week that artificial intelligence research has become dangerously narrow — and that he&#x27;s moving on from his own creation.</p><p><a href="https://scholar.google.com/citations?user=_3_P5VwAAAAJ&amp;hl=en"><u>Llion Jones</u></a>, who co-authored the seminal 2017 paper &quot;<a href="https://arxiv.org/abs/1706.03762"><u>Attention Is All You Need</u></a>&quot; and even coined the name &quot;transformer,&quot; delivered an unusually candid assessment at the <a href="https://tedai-sanfrancisco.ted.com/"><u>TED AI conference</u></a> in San Francisco on Tuesday: Despite <a href="https://hbr.org/2025/10/is-ai-a-boom-or-a-bubble"><u>unprecedented investment</u></a> and talent flooding into AI, the field has calcified around a single architectural approach, potentially blinding researchers to the next major breakthrough.</p><p>&quot;Despite the fact that there&#x27;s never been so much interest and resources and money and talent, this has somehow caused the narrowing of the research that we&#x27;re doing,&quot; Jones told the audience. The culprit, he argued, is the &quot;immense amount of pressure&quot; from investors demanding returns and researchers scrambling to stand out in an overcrowded field.</p><p>The warning carries particular weight given Jones&#x27;s role in AI history. The <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)"><u>transformer architecture</u></a> he helped develop at Google has become the foundation of the generative AI boom, enabling systems that can write essays, generate images, and engage in human-like conversation. 
His paper has been <a href="https://scholar.google.com/citations?user=_3_P5VwAAAAJ&amp;hl=en"><u>cited more than 100,000 times</u></a>, making it one of the most influential computer science publications of the century.</p><p>Now, as CTO and co-founder of Tokyo-based <a href="https://sakana.ai/"><u>Sakana AI</u></a>, Jones is explicitly abandoning his own creation. &quot;I personally made a decision in the beginning of this year that I&#x27;m going to drastically reduce the amount of time that I spend on transformers,&quot; he said. &quot;I&#x27;m explicitly now exploring and looking for the next big thing.&quot;</p><h2><b>Why more AI funding has led to less creative research, according to a transformer pioneer</b></h2><p>Jones painted a picture of an AI research community suffering from what he called a paradox: More resources have led to less creativity. He described researchers constantly checking whether they&#x27;ve been &quot;scooped&quot; by competitors working on identical ideas, and academics choosing safe, publishable projects over risky, potentially transformative ones.</p><p>&quot;If you&#x27;re doing standard AI research right now, you kind of have to assume that there&#x27;s maybe three or four other groups doing something very similar, or maybe exactly the same,&quot; Jones said, describing an environment where &quot;unfortunately, this pressure damages the science, because people are rushing their papers, and it&#x27;s reducing the amount of creativity.&quot;</p><p>He drew an analogy from AI itself — the &quot;exploration versus exploitation&quot; trade-off that governs how algorithms search for solutions. When a system exploits too much and explores too little, it finds mediocre local solutions while missing superior alternatives. &quot;We are almost certainly in that situation right now in the AI industry,&quot; Jones argued.</p><p>The implications are sobering. 
Jones recalled the period just before transformers emerged, when researchers were endlessly tweaking recurrent neural networks — the previous dominant architecture — for incremental gains. Once transformers arrived, all that work suddenly seemed irrelevant. &quot;How much time do you think those researchers would have spent trying to improve the recurrent neural network if they knew something like transformers was around the corner?&quot; he asked.</p><p>He worries the field is repeating that pattern. &quot;I&#x27;m worried that we&#x27;re in that situation right now where we&#x27;re just concentrating on one architecture and just permuting it and trying different things, where there might be a breakthrough just around the corner.&quot;</p><h2><b>How the &#x27;Attention is all you need&#x27; paper was born from freedom, not pressure</b></h2><p>To underscore his point, Jones described the conditions that allowed transformers to emerge in the first place — a stark contrast to today&#x27;s environment. The project, he said, was &quot;very organic, bottom up,&quot; born from &quot;talking over lunch or scrawling randomly on the whiteboard in the office.&quot;</p><p>Critically, &quot;we didn&#x27;t actually have a good idea, we had the freedom to actually spend time and go and work on it, and even more importantly, we didn&#x27;t have any pressure that was coming down from management,&quot; Jones recounted. &quot;No pressure to work on any particular project, publish a number of papers to push a certain metric up.&quot;</p><p>That freedom, Jones suggested, is largely absent today. Even researchers recruited for astronomical salaries — &quot;literally a million dollars a year, in some cases&quot; — may not feel empowered to take risks. 
&quot;Do you think that when they start their new position they feel empowered to try their wild ideas and more speculative ideas, or do they feel immense pressure to prove their worth and once again, go for the low hanging fruit?&quot; he asked.</p><h2><b>Why one AI lab is betting that research freedom beats million-dollar salaries</b></h2><p>Jones&#x27;s proposed solution is deliberately provocative: Turn up the &quot;explore dial&quot; and openly share findings, even at competitive cost. He acknowledged the irony of his position. &quot;It may sound a little controversial to hear one of the Transformers authors stand on stage and tell you that he&#x27;s absolutely sick of them, but it&#x27;s kind of fair enough, right? I&#x27;ve been working on them longer than anyone, with the possible exception of seven people.&quot;</p><p>At <a href="https://sakana.ai/"><u>Sakana AI</u></a>, Jones said he&#x27;s attempting to recreate that pre-transformer environment, with nature-inspired research and minimal pressure to chase publications or compete directly with rivals. He offered researchers a mantra from engineer Brian Cheung: &quot;You should only do the research that wouldn&#x27;t happen if you weren&#x27;t doing it.&quot;</p><p>One example is Sakana&#x27;s &quot;<a href="https://sakana.ai/ctm/"><u>continuous thought machine</u></a>,&quot; which incorporates brain-like synchronization into neural networks. An employee who pitched the idea told Jones he would have faced skepticism and pressure not to waste time at previous employers or academic positions. At Sakana, Jones gave him a week to explore. The project became successful enough to be spotlighted at <a href="https://neurips.cc/virtual/2025/poster/115192"><u>NeurIPS</u></a>, a major AI conference.</p><p>Jones even suggested that freedom beats compensation in recruiting. &quot;It&#x27;s a really, really good way of getting talent,&quot; he said of the exploratory environment. 
&quot;Think about it, talented, intelligent people, ambitious people, will naturally seek out this kind of environment.&quot;</p><h2><b>The transformer&#x27;s success may be blocking AI&#x27;s next breakthrough</b></h2><p>Perhaps most provocatively, Jones suggested transformers may be victims of their own success. &quot;The fact that the current technology is so powerful and flexible... stopped us from looking for better,&quot; he said. &quot;It makes sense that if the current technology was worse, more people would be looking for better.&quot;</p><p>He was careful to clarify that he&#x27;s not dismissing ongoing transformer research. &quot;There&#x27;s still plenty of very important work to be done on current technology and bringing a lot of value in the coming years,&quot; he said. &quot;I&#x27;m just saying that given the amount of talent and resources that we have currently, we can afford to do a lot more.&quot;</p><p>His ultimate message was one of collaboration over competition. &quot;Genuinely, from my perspective, this is not a competition,&quot; Jones concluded. &quot;We all have the same goal. We all want to see this technology progress so that we can all benefit from it. So if we can all collectively turn up the explore dial and then openly share what we find, we can get to our goal much faster.&quot;</p><h2><b>The high stakes of AI&#x27;s exploration problem</b></h2><p>The remarks arrive at a pivotal moment for artificial intelligence. The industry grapples with mounting evidence that simply building larger transformer models <a href="https://www.wired.com/story/the-ai-industrys-scaling-obsession-is-headed-for-a-cliff/"><u>may be approaching diminishing returns</u></a>. 
Leading researchers have begun openly discussing whether the current paradigm has fundamental limitations, with some suggesting that architectural innovations — not just scale — will be needed for continued progress toward more capable AI systems.</p><p>Jones&#x27;s warning suggests that finding those innovations may require dismantling the very incentive structures that have driven AI&#x27;s recent boom. With <a href="https://hai.stanford.edu/ai-index/2025-ai-index-report/economy"><u>tens of billions of dollars flowing into AI development annually</u></a> and fierce competition among labs driving secrecy and rapid publication cycles, the exploratory research environment he described seems increasingly distant.</p><p>Yet his insider perspective carries unusual weight. As someone who helped create the technology now dominating the field, Jones understands both what it takes to achieve breakthrough innovation and what the industry risks by abandoning that approach. His decision to walk away from transformers — the architecture that made his reputation — adds credibility to a message that might otherwise sound like contrarian positioning.</p><p>Whether AI&#x27;s power players will heed the call remains uncertain. But Jones offered a pointed reminder of what&#x27;s at stake: The next transformer-scale breakthrough could be just around the corner, pursued by researchers with the freedom to explore. Or it could be languishing unexplored while thousands of researchers race to publish incremental improvements on an architecture that, in Jones&#x27;s words, one of its creators is &quot;absolutely sick of.&quot;</p><p>After all, he&#x27;s been working on transformers longer than almost anyone. He would know when it&#x27;s time to move on.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/WSXBhFReMwh2HPn3P3k9E/f6352f008c9afddcbf6a4ff6148d7c96/nuneybits_Vector_art_of_a_koi_fish_with_scales_formed_from_algo_8e356867-71b0-4e3b-b5b1-87ac3e4c8013.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The most important OpenAI announcement you probably missed at DevDay 2025]]></title>
            <link>https://venturebeat.com/infrastructure/the-most-important-openai-announcement-you-probably-missed-at-devday-2025</link>
            <guid isPermaLink="false">5vuObQXLYhA0fkYU6uaFO9</guid>
            <pubDate>Thu, 09 Oct 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<p>OpenAI’s annual developer conference on Monday was a spectacle of ambitious AI product launches, from an <a href="https://openai.com/index/introducing-apps-in-chatgpt/"><u>app store for ChatGPT</u></a> to a stunning <a href="https://openai.com/index/sora-2/"><u>video-generation API</u></a> that brought creative concepts to life. But for the enterprises and technical leaders watching closely, the most consequential announcement was the quiet <a href="https://openai.com/index/codex-now-generally-available/"><u>general availability of Codex</u></a>, the company&#x27;s AI software engineer. This release signals a profound shift in how software—and by extension, modern business—is built.</p><p>While other announcements captured the public’s imagination, the production-ready release of <a href="https://openai.com/codex/"><u>Codex</u></a>, supercharged by a <a href="https://openai.com/index/introducing-upgrades-to-codex/"><u>new specialized model</u></a> and a <a href="https://developers.openai.com/codex/sdk"><u>suite of enterprise-grade tools</u></a>, is the engine behind OpenAI’s entire vision. 
It is the tool that builds the tools, the proven agent in a world buzzing with agentic potential, and the clearest articulation of the company&#x27;s strategy to win the enterprise.</p><p>The <a href="https://openai.com/index/codex-now-generally-available/"><u>general availability of Codex</u></a> moves it from a &quot;research preview&quot; to a fully supported product, complete with a new <a href="https://developers.openai.com/codex/sdk"><u>software development kit (SDK)</u></a>, a <a href="https://developers.openai.com/codex/integrations/slack"><u>Slack integration</u></a>, and administrative controls for security and monitoring. This transition makes clear that Codex is ready for mission-critical work inside the world’s largest companies.</p><p>&quot;We think this is the best time in history to be a builder; it has never been faster to go from idea to product,&quot; said OpenAI CEO Sam Altman during the <a href="https://venturebeat.com/ai/openai-dev-day-2025-chatgpt-becomes-the-new-app-store-and-hardware-is-coming"><u>opening keynote</u></a> presentation. &quot;Software used to take months or years to build. You saw that it can take minutes now to build with AI.&quot; </p><p>That acceleration is not theoretical.
It&#x27;s a reality born from OpenAI’s own internal use — a massive &quot;dogfooding&quot; effort that serves as the ultimate case study for enterprise customers.</p><h2>Inside GPT-5-Codex: The AI model that codes autonomously for hours and drives 70% productivity gains</h2><p>At the heart of the Codex upgrade is <a href="https://chatgpt.com/features/codex"><u>GPT-5-Codex</u></a>, a version of OpenAI&#x27;s latest flagship model that has been &quot;purposely trained for Codex and agentic coding.&quot; The new model is designed to function as an autonomous teammate, moving far beyond simple code autocompletion.</p><p>&quot;I personally like to think about it as a little bit like a human teammate,&quot; explained Tibo Sottiaux, an OpenAI engineer, during a technical session on Codex. &quot;You can pair-program with it on your computer, you can delegate to it, or as you&#x27;ll see, you can give it a job without explicit prompting.&quot;</p><p>This new model enables &quot;<a href="https://openai.com/index/introducing-upgrades-to-codex/"><u>adaptive thinking</u></a>,&quot; allowing it to dynamically adjust the time and computational effort spent on a task based on its complexity. For simple requests, it&#x27;s fast and efficient, but for complex refactoring projects, it can work for hours.</p><p>One engineer during the technical session noted, &quot;I&#x27;ve seen the GPT-5-Codex model work for over seven hours productively...
on a marathon session.&quot; This capability to handle long-running, complex tasks is a significant leap beyond the simple, single-shot interactions that define most AI coding assistants.</p><p>The results inside OpenAI have been dramatic. The company reported that 92% of its technical staff now uses Codex daily, and those engineers complete 70% more pull requests (a measure of code contribution) each week. Usage has surged tenfold since August. </p><p>&quot;When we as a team see the stats, it feels great,&quot; Sottiaux shared. &quot;But even better is being at lunch with someone who then goes &#x27;Hey I use Codex all the time. Here&#x27;s a cool thing that I do with it. Do you want to hear about it?&#x27;&quot; </p><h2>How OpenAI uses Codex to build its own AI products and catch hundreds of bugs daily</h2><p>Perhaps the most compelling argument for Codex’s importance is that it is the foundational layer upon which OpenAI’s other flashy announcements were built. During the <a href="https://devday.openai.com/"><u>DevDay event</u></a>, the company showcased custom-built arcade games and a dynamic, AI-powered website for the conference itself, all developed using <a href="https://openai.com/codex/"><u>Codex</u></a>.</p><p>In one session, engineers demonstrated how they built &quot;Storyboard,&quot; a custom creative tool for the film industry, in just 48 hours during an internal hackathon. &quot;We decided to test Codex, our coding agent... we would send tasks to Codex in between meetings. We really easily reviewed and merged PRs into production, which Codex even allowed us to do from our phones,&quot; said Allison August, a solutions engineering leader at OpenAI. </p><p>This reveals a critical insight: the rapid innovation showcased at DevDay is a direct result of the productivity flywheel created by <a href="https://openai.com/codex/"><u>Codex</u></a>. 
The AI is a core part of the manufacturing process for all other AI products.</p><p>A key enterprise-focused feature is the new, more robust code review capability. OpenAI said it &quot;purposely trained GPT-5-Codex to be great at ultra thorough code review,&quot; enabling it to explore dependencies and validate a programmer&#x27;s intent against the actual implementation to find high-quality bugs. Internally, nearly every pull request at OpenAI is now reviewed by Codex, catching hundreds of issues daily before they reach a human reviewer.</p><p>&quot;It saves you time, you ship with more confidence,&quot; Sottiaux said. &quot;There&#x27;s nothing worse than finding a bug after we actually ship the feature.&quot; </p><h2>Why enterprise software teams are choosing Codex over GitHub Copilot for mission-critical development</h2><p>The maturation of <a href="https://openai.com/codex/"><u>Codex</u></a> is central to OpenAI’s broader strategy to conquer the enterprise market, a move essential to justifying its massive valuation and unprecedented compute expenditures. During a private press conference, CEO Sam Altman confirmed the strategic shift.</p><p>&quot;The models are there now, and you should expect a huge focus from us on really winning enterprises with amazing products, starting here,&quot; Altman said. </p><p>OpenAI President and Co-founder Greg Brockman immediately added, &quot;And you can see it already with Codex, which I think has been just an incredible success and has really grown super fast.&quot; </p><p>For technical decision-makers, the message is clear. While consumer-facing agents that book dinner reservations are still finding their footing, <a href="https://openai.com/codex/"><u>Codex</u></a> is a proven enterprise agent delivering substantial ROI today.
Companies like Cisco have already rolled out Codex to their engineering organizations, cutting code review times by 50% and reducing project timelines from weeks to days.</p><p>With the new <a href="https://developers.openai.com/codex/sdk/"><u>Codex SDK</u></a>, companies can now embed this agentic power directly into their own custom workflows, such as automating fixes in a CI/CD pipeline or even creating self-evolving applications. During a live demo, an engineer showcased a mobile app that updated its own user interface in real-time based on a natural language prompt, all powered by the embedded Codex SDK. </p><p>While the launch of an <a href="https://openai.com/index/introducing-apps-in-chatgpt/"><u>app ecosystem in ChatGPT</u></a> and the breathtaking visuals of the <a href="https://openai.com/index/sora-2/"><u>Sora 2 API</u></a> rightfully generated headlines, the <a href="https://openai.com/index/codex-now-generally-available/"><u>general availability of Codex</u></a> marks a more fundamental and immediate transformation. It is the quiet but powerful engine driving the next era of software development, turning the abstract promise of AI-driven productivity into a tangible, deployable reality for businesses today.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Dev</category>
            <category>Enterprise</category>
            <category>Infrastructure</category>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/2buVKIMlQ2KFcFg8BPHoqU/c5d2dfbfd8cfe9a9432106a936fa204f/nuneybits_A_retro_glowing_computer_on_gradient_background_that__094dfc70-9906-4074-bb00-d32b04faf5f9-1.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[OpenAI Dev Day 2025: ChatGPT becomes the new app store — and hardware is coming]]></title>
            <link>https://venturebeat.com/technology/openai-dev-day-2025-chatgpt-becomes-the-new-app-store-and-hardware-is-coming</link>
            <guid isPermaLink="false">6na0pAdxl9xh4fN39O1ww5</guid>
            <pubDate>Tue, 07 Oct 2025 22:30:00 GMT</pubDate>
            <description><![CDATA[<p>In a packed hall at Fort Mason Center in San Francisco, against a backdrop of the Golden Gate Bridge, <a href="https://openai.com/"><u>OpenAI</u></a> CEO Sam Altman laid out a bold vision to remake the digital world. The company that brought generative AI to the mainstream with a simple chatbot is now building the foundations for its next act: a comprehensive computing platform designed to move beyond the screen and browser, with legendary designer Jony Ive enlisted to help shape its physical form.</p><p>At its <a href="https://openai.com/devday/"><u>third annual DevDay</u></a>, OpenAI unveiled a suite of tools that signals a strategic pivot from a model provider to a full-fledged ecosystem. The message was clear: the era of simply asking an AI questions is over. The future is about commanding AI to perform complex tasks, build software autonomously, and live inside every application, a transition Altman framed as moving from &quot;systems that you can ask anything to, to systems that you can ask to do anything for you.&quot; </p><p>The day’s announcements were a three-pronged assault on the status quo, targeting how users interact with software, how developers build it, and how businesses deploy intelligent agents. But it was the sessions held behind closed doors, away from the <a href="https://www.youtube.com/live/hS1YqcewH0c?si=mFE2rRx3QrK7z6NF"><u>public livestream</u></a>, that revealed the true scope of OpenAI’s ambition — a future that includes new hardware, a relentless pursuit of computational power, and a philosophical quest to redefine our relationship with technology.</p><h3><b>From chatbot to operating system: The new &#x27;App Store&#x27;</b></h3><p>The centerpiece of the public-facing keynote was the transformation of <a href="https://chatgpt.com/"><u>ChatGPT</u></a> itself. 
With the new <a href="https://openai.com/index/introducing-apps-in-chatgpt/"><u>Apps SDK</u></a>, OpenAI is turning its wildly popular chatbot into a dynamic, interactive platform, effectively an operating system where developers can build and distribute their own applications.</p><p>“Today, we&#x27;re going to open up ChatGPT for developers to build real apps inside of ChatGPT,” Altman announced during the keynote presentation to applause. “This will enable a new generation of apps that are interactive, adaptive and personalized, that you can chat with.”</p><p>Live demonstrations showcased apps from partners like <a href="https://www.coursera.org/"><u>Coursera</u></a>, <a href="https://www.canva.com/"><u>Canva</u></a>, and <a href="https://www.zillow.com/"><u>Zillow</u></a> running seamlessly within a chat conversation. A user could watch a machine learning lecture, ask <a href="https://chatgpt.com/"><u>ChatGPT</u></a> to explain a concept in real-time, and then use Canva to generate a poster based on the conversation, all without leaving the chat interface. The apps can render rich, interactive UIs, even going full-screen to offer a complete experience, like exploring a Zillow map of homes.</p><p>For developers, this represents a powerful new distribution channel. “When you build with the Apps SDK, your apps can reach hundreds of millions of chat users,” Altman said, highlighting a direct path to a massive user base that has grown to over <a href="https://venturebeat.com/ai/openai-announces-apps-sdk-allowing-chatgpt-to-launch-and-run-third-party"><u>800 million weekly active users</u></a>. </p><p>In a private press conference later, Nick Turley, head of ChatGPT, elaborated on the grander vision. &quot;We never meant to build a chatbot,&quot; he stated. &quot;When we set out to make ChatGPT, we meant to build a super assistant and we got a little sidetracked. 
And one of the tragedies of getting a little sidetracked is that we built a great chatbot, but we are the first ones to say that not all software needs to be a chatbot, not all interaction with the commercial world needs to be a chatbot.&quot;</p><p>Turley emphasized that while OpenAI is excited about natural language interfaces, &quot;the interface really needs to evolve, which is why you see so much UI in the demos today. In fact, you can even go full screen and chat is in the background.&quot; He described a future where users might &quot;start your day in ChatGPT, just because it kind of has become the de facto entry point into the commercial web and into a lot of software,&quot; but clarified that &quot;our incentive is not to keep you in. Our product is to allow other people to build amazing businesses on top and to evolve the form factor of software.&quot;</p><h3><b>The rise of the agents: Building the &#x27;do anything&#x27; AI</b></h3><p>If apps are about bringing the world into ChatGPT, the new &quot;<a href="https://openai.com/index/introducing-agentkit/"><u>Agent Kit</u></a>&quot; is about sending AI out into the world to get things done. OpenAI is providing a complete &quot;set of building blocks... to help you take agents from prototype to production,&quot; Altman explained in his keynote. </p><p><a href="https://openai.com/index/introducing-agentkit/"><u>Agent Kit</u></a> is an integrated development environment for creating autonomous AI workers. It features a visual canvas to design complex workflows, an embeddable chat interface (&quot;Chat Kit&quot;) for deploying agents in any app, and a sophisticated evaluation suite to measure and improve performance.</p><p>A compelling demo from financial operations platform <a href="https://ramp.com/"><u>Ramp</u></a> showed how Agent Kit was used to build a procurement agent. 
An employee could simply type, &quot;I need five more ChatGPT business seats,&quot; and the agent would parse the request, check it against company expense policies, find vendor details, and prepare a virtual credit card for the purchase — a process that once took weeks, now completed in minutes. </p><p>This push into agents is a direct response to a growing enterprise need to move beyond AI as a simple information retrieval tool and toward AI as a productivity engine that automates complex business processes. Brad Lightcap, OpenAI&#x27;s COO, noted that for enterprise adoption, &quot;you needed this kind of shift to more agentic AI that could actually do things for you, versus just respond with text outputs.&quot; </p><h3><b>The future of code and the Jony Ive bombshell</b></h3><p>Perhaps the most profound shift is occurring in software development itself. <a href="https://openai.com/index/codex-now-generally-available/"><u>Codex</u></a>, OpenAI&#x27;s AI coding agent, has graduated from a research preview to a full-fledged product, now powered by a specialized version of the new GPT-5 model. It is, as one speaker put it, &quot;a teammate that understands your context.&quot; </p><p>The capabilities are staggering. Developers can now assign <a href="https://openai.com/index/codex-now-generally-available/"><u>Codex</u></a> tasks directly from <a href="https://slack.com/"><u>Slack</u></a>, and the agent can autonomously write code, create pull requests, and even review other engineers&#x27; work on <a href="https://github.com/"><u>GitHub</u></a>. A live demo showed Codex taking a simple photo of a whiteboard sketch and turning it into a fully functional, beautifully designed mobile app screen. Another demo showed an app that could &quot;self-evolve,&quot; reprogramming itself in real-time based on a user&#x27;s natural language request.
</p><p>But the day&#x27;s biggest surprise came in a closing fireside chat, which was not livestreamed, between <a href="https://openai.com/sam-and-jony/"><u>Altman and Jony Ive</u></a>, the iconic former chief design officer of Apple. The two revealed they have been collaborating for three years on a new family of AI-centric hardware.</p><p>Ive, whose design philosophy shaped the iPhone, iMac, and Apple Watch, said his creative team’s purpose &quot;became clear&quot; with the launch of ChatGPT. He argued that our current relationship with technology is broken and that AI presents an opportunity for a fundamental reset.</p><p>“I think it would be absurd to assume that you could have technology that is this breathtaking, delivered to us through legacy products, products that are decades old,” Ive said. “I see it as a chance to use this most remarkable capability to full-on address a lot of the overwhelm and despair that people feel right now.”</p><p>While details of the devices remain secret, Ive spoke of his motivation in deeply human terms. “We love our species, and we want to be useful. We think that humanity deserves much better than humanity generally is given,” he said. He emphasized the importance of &quot;care&quot; in the design process, stating, &quot;We sense when people have cared... you sense carelessness. You sense when somebody does not care about you, they care about money and schedule.&quot; </p><p>This collaboration confirms that OpenAI&#x27;s ambitions are not confined to the cloud; it is actively exploring the physical interface through which humanity will interact with its powerful new intelligence.</p><h3><b>The Unquenchable Thirst for Compute</b></h3><p>Underpinning this entire platform strategy is a single, overwhelming constraint: the availability of computing power. 
In both the private press conference and the un-streamed Developer State of the Union, OpenAI’s leadership returned to this theme again and again.</p><p>“The degree to which we are all constrained by compute... Everyone is just so constrained on being able to offer the services at the scale required to get the revenue that at this point, we&#x27;re quite confident we can push it pretty far,” Altman told reporters. He added that even with massive new hardware partnerships with AMD and others, &quot;we&#x27;ll be saying the same thing again. We&#x27;re so convinced... There&#x27;s so much more demand.&quot; </p><p>This explains the company’s aggressive, <a href="https://www.reuters.com/business/autos-transportation/companies-pouring-billions-advance-ai-infrastructure-2025-10-06/"><u>multi-billion-dollar investment in infrastructure</u></a>. When asked about profitability, Altman was candid that the company is in a phase of &quot;investment and growth.&quot; He invoked a famous quote from Walt Disney, paraphrasing, &quot;We make more money so we can make more movies.&quot; For OpenAI, the &quot;movies&quot; are ever-more-powerful AI models.</p><p>Greg Brockman, OpenAI’s President, put the ultimate goal in stark economic terms during the Developer State of the Union. &quot;AI is going to become, probably in the not too distant future, the fundamental driver of economic growth,&quot; he said. &quot;Asking ‘How much compute do you want?’ is a little bit like asking how much workforce do you want? The answer is, you can always get more out of more.&quot; </p><p>As the day concluded and developers mingled at the reception, the scale of OpenAI&#x27;s project came into focus. Fueled by new models like the powerful <a href="https://platform.openai.com/docs/models/gpt-5-pro"><u>GPT-5 Pro</u></a> and the stunning <a href="https://openai.com/index/sora-2/"><u>Sora 2</u></a> video generator, the company is no longer just building AI. 
It is building the world where AI will live — a world of intelligent apps, autonomous agents, and new physical devices, betting that in the near future, intelligence itself will be the ultimate platform.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Dev</category>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/5SsXpDqbiyZvm7DJbht3pv/765b6ae01f75c4e84f15e6f9f026bdf2/nuneybits_A_developer_working_on_a_retro_computer_on_gradient_b_a06360d7-aa20-4d82-949f-48730f690988.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[OpenAI announces Apps SDK allowing ChatGPT to launch and run third party apps like Zillow, Canva, Spotify]]></title>
            <link>https://venturebeat.com/technology/openai-announces-apps-sdk-allowing-chatgpt-to-launch-and-run-third-party</link>
            <guid isPermaLink="false">lDeJTwqEEtbx7gBeZZyNF</guid>
            <pubDate>Mon, 06 Oct 2025 18:24:00 GMT</pubDate>
            <description><![CDATA[<p>OpenAI&#x27;s annual conference for third-party developers, DevDay, kicked off with a bang today as co-founder and <b>CEO Sam Altman</b><a href="https://openai.com/index/introducing-apps-in-chatgpt/"><b> announced a new &quot;Apps SDK,&quot;</b></a><b> or software development kit, that makes it &quot;possible to build apps inside of ChatGPT,&quot;</b> including paid apps, which companies can charge users for using OpenAI&#x27;s <a href="https://venturebeat.com/ai/openai-debuts-new-chatgpt-buy-button-and-open-source-agentic-commerce">recently unveiled Agentic Commerce Protocol (ACP)</a>. </p><p>In other words, instead of launching apps one-by-one on your phone, computer, or on the web — now you can do all that <i>without </i>ever leaving ChatGPT. </p><p>This feature allows the user to log-into their accounts on those external apps and bring all their information back into ChatGPT, and use the apps very similarly to how they already do outside of the chatbot, but now with the ability to ask ChatGPT to perform certain actions, analyze content, or go beyond what each app could offer on its own. 
</p><p>You can direct Canva to make you slides based on a text description, ask Zillow for home listings in a certain area fitting certain requirements, or ask Coursera about a specific lesson&#x27;s content while it plays on video, all from within ChatGPT — with many other apps also already offering their own connections (see below).</p><p>&quot;This will enable a new generation of apps that are interactive, adaptive and personalized, that you can chat with,&quot; Altman said.</p><p>While the Apps SDK is available today in preview, <a href="https://developers.openai.com/apps-sdk">OpenAI said</a> it would not begin accepting new apps within ChatGPT or allow them to charge users until &quot;later this year.&quot;</p><p>ChatGPT in-line app access is already rolling out to ChatGPT Free, Plus, Go and Pro users — outside of the European Union only for now — with Business, Enterprise, and Education tiers expected to receive access to the apps later this year.</p><h3><b>Built atop common MCP standard</b></h3><p>Built on <a href="https://venturebeat.com/ai/mcp-and-the-innovation-paradox-why-open-standards-will-save-ai-from-itself">the open source standard Model Context Protocol (MCP)</a> introduced by rival Anthropic nearly a year ago, the Apps SDK allows third-party developers working independently or on behalf of enterprises large and small to connect selected data, &quot;trigger actions, and render a fully interactive UI [user interface],&quot; Altman explained during his introductory keynote speech.
</p><p>The Apps SDK includes a &quot;talking to apps&quot; feature that allows ChatGPT and the underlying GPT-5 or other &quot;o-series&quot; models piloting it to obtain updated context from the third-party app or service, so the model &quot;always knows about exactly what your user is interacting with,&quot; according to another presenter and OpenAI engineer, Alexi Christakis.</p><p>Developers can build apps that:</p><ul><li><p>appear <b>inline</b> in chat as lightweight cards or carousels</p></li><li><p>expand to <b>fullscreen</b> for immersive tasks like maps, menus, or slides</p></li><li><p>use <b>picture-in-picture</b> for live sessions such as video, games, or quizzes</p></li></ul><p>Each mode is designed to preserve ChatGPT’s minimal, conversational flow while adding interactivity and brand presence.</p><h3><b>Early integrations with Coursera, Canva, Zillow and more...</b></h3><p>Christakis showed off early integrations of <b>external apps built atop the Apps SDK</b>, including ones from e-learning company <b>Coursera</b>, cloud design software company <b>Canva</b>, and real estate listings and agent-connection search engine <b>Zillow</b>.</p><p>Altman also announced Apps SDK integrations with additional partners not officially demoed during the keynote, including <b>Booking.com</b>, <b>Expedia</b>, <b>Figma</b> and <b>Spotify</b>, and said in documentation that more partners are on deck: <b>AllTrails, Peloton, OpenTable, Target, theFork, and Uber</b>, representing lifestyle, commerce, and productivity categories.</p><p>The <b>Coursera demo</b> included an example of how the user onboards to the external app, including a new login screen for the app (Coursera) that appears within the ChatGPT chat interface, activated simply by a text prompt from the user asking: &quot;Coursera can you teach me something about machine learning?&quot;</p><p>Once logged in, the app launched within the chat interface, &quot;in line,&quot; and can render
anything from the web, including interactive elements like video. </p><p>Christakis explained and showed that the Apps SDK also supports &quot;picture-in-picture&quot; and &quot;fullscreen&quot; views, allowing the user to choose how to interact with an app.</p><p>When a Coursera video appeared and began playing, he showed that ChatGPT automatically pinned the video to the top of the screen so the user could keep watching it even as they continued a back-and-forth text dialog with ChatGPT in the typical prompts and responses below. </p><p><b>Users can then ask ChatGPT about content appearing in the video without specifying exactly what was said</b>, as the Apps SDK pipes the information on the backend, server-side, from the connected app to the underlying ChatGPT AI model. So &quot;can you explain more about what they&#x27;re saying right now&quot; will automatically surface the relevant portion of the video and provide it to the underlying AI model to analyze and respond to through text.</p><p>In another example, Christakis opened an older, existing ChatGPT conversation he&#x27;d had about his siblings&#x27; dog walking business and resumed the conversation by asking another third-party app, <b>Canva</b>, to generate a poster using one of ChatGPT&#x27;s recommended business names, &quot;Walk This Wag,&quot; along with specific guidance about font choice (&quot;sans serif&quot;) and overall coloration and style (&quot;bright and colorful&quot;).</p><p>Instead of the user having to manually add all those specific elements to a Canva template, ChatGPT issued the commands and performed the actions on behalf of the user in the background.</p><p>After a few minutes, ChatGPT responded with several poster designs generated directly within the Canva app, but displayed them all in the user&#x27;s ChatGPT chat session, where the user could see, review, enlarge and provide feedback or ask for adjustments on all of them.</p><p>Christakis then asked 
for ChatGPT to turn one of the designs into an entire slide deck the founders of the dog walking business could present to investors, which it did in the background over several minutes while he presented a final integrated app, <b>Zillow</b>.</p><p>He started a new chat session and asked a simple question: &quot;Based on our conversations, what would be a good city to expand the dog walking business?&quot;</p><p>Using ChatGPT&#x27;s optional memory feature, it referenced the dog walking conversation and suggested Pittsburgh, which Christakis used as a chance to type in &quot;Zillow&quot; and &quot;show me some homes for sale there,&quot; which called up an interactive map from Zillow with homes for sale, listed prices and hover-over animations, all in-line within ChatGPT.</p><p>Clicking a specific home also opened a fullscreen view with &quot;most of the Zillow experience,&quot; entirely without leaving ChatGPT, including the ability to request home tours, contact agents, and filter by bedrooms and other qualities like outdoor space. ChatGPT pulls up the requested filtered Zillow search and provides a text-based response in-line explaining what it did and why.</p><p>The user can then ask follow-up questions about the specific property — such as &quot;how close is it to a dog park?&quot; — or compare it to other properties, all within ChatGPT.</p><p>ChatGPT can also use apps in conjunction with its Search function, searching the web to compare the app information (in this case, Zillow&#x27;s) with other sources.</p><h3><b>Safety, privacy, and developer standards</b></h3><p>OpenAI emphasized that <b>apps must comply with strict privacy, safety, and content </b><a href="https://developers.openai.com/apps-sdk/app-developer-guidelines"><b>standards</b></a> to be listed in the ChatGPT directory. 
Apps must:</p><ul><li><p>serve a <b>clear and valuable purpose</b></p></li><li><p>be <b>predictable and reliable</b> in behavior</p></li><li><p>be <b>safe for general audiences</b>, including teens aged 13–17</p></li><li><p><b>respect user privacy</b> and limit data collection to only what’s necessary</p></li></ul><p>Every app must also include a <b>clear, published privacy policy</b>, obtain user consent before connecting, and identify any actions that modify external data (e.g., posting, sending, uploading).</p><p>Apps violating OpenAI’s usage policies, crashing frequently, or misrepresenting their capabilities may be removed at any time. Developers must submit from <b>verified accounts</b>, provide <b>customer support contacts</b>, and maintain their apps for stability and compliance.</p><p>OpenAI also published <b>developer </b><a href="https://developers.openai.com/apps-sdk/concepts/design-guidelines"><b>design guidelines</b></a>, outlining how apps should look, sound, and behave. They must follow ChatGPT’s visual system — including consistent color palettes, typography, spacing, and iconography — and maintain accessibility standards such as alt text and readable contrast ratios.</p><p>Partners can show brand logos and accent colors but not alter ChatGPT’s core interface or use promotional language. 
Apps should remain <b>“conversational, intelligent, simple, responsive, and accessible,”</b> according to the documentation.</p><h3><b>A new conversational app ecosystem</b></h3><p>By opening ChatGPT to third-party apps and payments, OpenAI is taking a major step toward transforming ChatGPT from a chatbot into a full-fledged <b>AI operating system</b> — one that combines conversational intelligence, rich interfaces, and embedded commerce.</p><p>For developers, that means direct access to <b>over 800 million ChatGPT users</b>, who can discover apps “at the right time” through natural conversation — whether planning trips, learning, or shopping.</p><p>For users, it means <b>a new generation of apps you can chat with</b> — where a single interface helps you book a flight, design a slide deck, or learn a new skill without ever leaving ChatGPT.</p><p>As OpenAI put it: “This is just the start of apps in ChatGPT, bringing new utility to users and new opportunities for developers.”</p><p>There remain a few big questions. First, what happens to all the data from those third-party apps as they interface with ChatGPT and its users? Does OpenAI get access to it, and can it train on it? Second, what happens to OpenAI&#x27;s once much-hyped<a href="https://venturebeat.com/ai/openai-launches-gpt-store-but-revenue-sharing-is-still-to-come"> GPT Store,</a> which had previously been promoted as a way for third-party creators and developers to create custom, task-specific versions of ChatGPT and make money on them through a usage-based revenue share model?</p><p>We&#x27;ve asked the company about both issues and will update this story when we hear back. </p>]]></description>
            <author>carl.franzen@venturebeat.com (Carl Franzen)</author>
            <category>Programming &amp; Development</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/7LhLUUL4BbXIc1jxZwCZdI/6c8bef8ac751c7d0ed1b4f2d8d3b5682/openai-apps-sdk.jpg?w=300&amp;q=30" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Anthropic’s new Claude can code for 30 hours. Think of it as your AI coworker]]></title>
            <link>https://venturebeat.com/technology/anthropics-new-claude-can-code-for-30-hours-think-of-it-as-your-ai-coworker</link>
            <guid isPermaLink="false">1Bn23IpmMMPa8MuxWx1TZZ</guid>
            <pubDate>Mon, 29 Sep 2025 17:00:00 GMT</pubDate>
            <description><![CDATA[<p><a href="https://www.anthropic.com/"><u>Anthropic</u></a> launched <a href="https://claude.ai/"><u>Claude Sonnet 4.5</u></a> on Monday, positioning the artificial intelligence model as &quot;the best coding model in the world&quot; in a direct challenge to OpenAI&#x27;s recently released <a href="https://openai.com/index/introducing-gpt-5/"><u>GPT-5</u></a>, as the two AI giants battle for dominance in the lucrative enterprise software development market.</p><p>The San Francisco-based startup claims its newest model achieves state-of-the-art performance on critical coding benchmarks, scoring 77.2% on <a href="https://www.swebench.com/"><u>SWE-bench Verified</u></a> — a rigorous software engineering evaluation — edging out the figure OpenAI reported for GPT-5. More remarkably, Anthropic says Claude Sonnet 4.5 can maintain focus on complex, multi-step tasks for more than 30 hours, a dramatic leap in AI&#x27;s ability to handle sustained work.</p><p>&quot;Sonnet 4.5 achieves 77.2% on SWE-bench Verified (82% with parallel test-time compute). 
It is SOTA,&quot; an Anthropic spokesperson told VentureBeat, using industry shorthand for &quot;state of the art.&quot; The company also highlighted the model&#x27;s 50% score on <a href="https://www.tbench.ai/"><u>Terminal-bench</u></a>, another coding benchmark where it claims leadership.</p><p>The announcement follows mounting pressure from OpenAI&#x27;s recent advances and pointed criticism from high-profile figures like Elon Musk, who recently posted on X.com that &quot;<a href="https://x.com/elonmusk/status/1970537297792651492?s=46"><u>winning was never in the set of possible outcomes for Anthropic</u></a>.&quot; When asked about Musk&#x27;s statement, Anthropic declined to comment.</p><p>The release arrives just seven weeks after <a href="https://openai.com/index/introducing-gpt-5/"><u>OpenAI&#x27;s GPT-5 launch in August</u></a>, underscoring the breakneck pace of competition in artificial intelligence as companies race to capture enterprise customers increasingly relying on AI for software development. The timing is particularly noteworthy as Anthropic grapples with questions about its heavy dependence on just two major customers.</p><h2><b>Anthropic dominates coding market despite customer concentration risks</b></h2><p>The competition centers on a market that has emerged as AI&#x27;s first major profitable use case beyond chatbots. <a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/"><u>Anthropic commands 42% of the code generation market</u></a> — more than double OpenAI&#x27;s 21% share — according to a Menlo Ventures survey of 150 enterprise technical leaders. 
That dominance has translated into remarkable financial performance, with the company reaching a <a href="https://www.anthropic.com/news/anthropic-raises-series-f-at-usd183b-post-money-valuation"><u>$5 billion revenue run rate</u></a> earlier this year.</p><p>However, industry analysis reveals that coding applications <a href="https://cursor.com/"><u>Cursor</u></a> and <a href="https://github.com/features/copilot"><u>GitHub Copilot</u></a> drive approximately <a href="https://www.theinformation.com/articles/anthropics-claude-drives-strong-revenue-growth-while-powering-manus-sensation"><u>$1.4 billion of Anthropic&#x27;s revenue</u></a>, creating a potentially dangerous customer concentration that could leave the company vulnerable if either relationship falters.</p><p>&quot;Our run-rate revenue has grown significantly, even when you exclude these two customers,&quot; the Anthropic spokesperson said, pushing back on concerns about customer concentration. The company provided supportive quotes from both Cursor CEO Michael Truell and GitHub Chief Product Officer Mario Rodriguez praising Claude Sonnet 4.5&#x27;s performance.</p><p>The new model achieves significant advances in computer use capabilities, scoring 61.4% on <a href="https://os-world.github.io/"><u>OSWorld</u></a>, a benchmark that tests AI models on real-world computer tasks. Just four months ago, Claude Sonnet 4 held the lead at 42.2%, demonstrating rapid improvement in AI&#x27;s ability to interact with software interfaces.</p><h2><b>OpenAI&#x27;s aggressive pricing strategy threatens Anthropic&#x27;s premium positioning</b></h2><p>Anthropic&#x27;s announcement comes as the company grapples with competitive pressure from GPT-5&#x27;s aggressive pricing strategy. 
Early analysis shows <a href="https://simonwillison.net/2025/Aug/7/gpt-5/"><u>Claude Opus 4 costing roughly seven times more</u></a> per million tokens than GPT-5 for certain tasks, creating immediate pressure on Anthropic&#x27;s premium positioning.</p><p>The pricing disparity signals a fundamental shift in competitive dynamics that could force enterprise procurement teams to reconsider vendor relationships previously built on performance rather than price. Companies managing exponentially growing AI budgets now face comparable capability at a fraction of the cost.</p><p>Yet Anthropic is maintaining its pricing strategy with Claude Sonnet 4.5. &quot;Sonnet 4.5&#x27;s cost remains the same as Sonnet 4,&quot; the spokesperson confirmed, keeping prices at $3 per million input tokens and $15 per million output tokens.</p><h2><b>Claude Sonnet 4.5 delivers 30-hour autonomous work sessions and enhanced security</b></h2><p>Beyond performance improvements, Anthropic positions <a href="https://claude.com/product/overview"><u>Claude Sonnet 4.5</u></a> as its &quot;most aligned frontier model yet,&quot; showing significant reductions in concerning behaviors like sycophancy, deception, and power-seeking tendencies. The company has made &quot;considerable progress on defending against prompt injection attacks,&quot; a critical security concern for enterprise deployments.</p><p>The model is being released under <a href="https://www.anthropic.com/news/activating-asl3-protections"><u>Anthropic&#x27;s AI Safety Level 3 (ASL-3) protections</u></a>, which include classifiers designed to detect potentially dangerous inputs and outputs related to chemical, biological, radiological, and nuclear weapons. 
While these safeguards sometimes flag normal content, Anthropic says it has reduced false positives by a factor of ten since initially describing them.</p><p>Perhaps most significantly for developers, Anthropic is releasing the <a href="https://docs.claude.com/en/home"><u>Claude Agent SDK</u></a> — the same infrastructure that powers its Claude Code product. &quot;We built Claude Code because the tool we needed didn&#x27;t exist yet,&quot; the company said in its announcement. &quot;The Agent SDK gives you the same foundation to build something just as capable for whatever problem you&#x27;re solving.&quot;</p><h2><b>International expansion accelerates as $1.5 billion copyright settlement finalizes</b></h2><p>The model launch coincides with Anthropic&#x27;s aggressive international expansion, as the company seeks to diversify beyond its U.S.-concentrated customer base. The startup recently announced plans to <a href="https://www.reuters.com/business/world-at-work/anthropic-triple-international-workforce-ai-models-drive-growth-outside-us-2025-09-26/"><u>triple its international workforce</u></a> and <a href="https://www.cnbc.com/2025/09/26/anthropic-global-ai-hiring-spree.html"><u>expand its applied AI team fivefold</u></a> in 2025, driven by data showing that nearly 80% of Claude usage now comes from outside the United States.</p><p>However, the expansion comes amid significant legal costs. Anthropic recently agreed to pay <a href="https://apnews.com/article/anthropic-authors-copyright-judge-artificial-intelligence-9643064e847a5e88ef6ee8b620b3a44c"><u>$1.5 billion in a copyright settlement with authors and publishers</u></a> over allegations the company illegally used their books to train AI models without permission. 
The settlement, approved by a federal judge last week, requires payments of $3,000 for each publication listed in the case.</p><h2><b>Enterprise AI spending doubles as companies prioritize performance over cost</b></h2><p>The rapid-fire model releases from both companies reflect the high stakes in enterprise AI adoption. Model API spending has <a href="https://menlovc.com/perspective/2025-mid-year-llm-market-update/"><u>more than doubled to $8.4 billion</u></a> in just six months, according to Menlo Ventures, as enterprises shift from experimental projects to production deployments.</p><p>Customer behavior patterns suggest enterprises consistently prioritize performance over price, upgrading to the newest models within weeks of release regardless of cost. This behavior could work in Anthropic&#x27;s favor if Claude Sonnet 4.5&#x27;s performance advantages prove compelling enough to overcome GPT-5&#x27;s pricing advantage.</p><p>However, the dramatic price differential introduced by GPT-5 could overcome typical switching inertia, especially for cost-conscious enterprises facing budget pressures. Industry observers note that model switching costs remain relatively low, with <a href="https://venturebeat.com/ai/anthropic-revenue-tied-to-two-customers-as-ai-pricing-war-threatens-margins"><u>66% of enterprises upgrading within existing providers</u></a> rather than switching vendors.</p><p>For enterprises, the intensifying competition delivers better performance and lower costs through continuously improving capabilities. The rapid pace of model improvements — with new versions launching monthly rather than annually — provides organizations with expanding AI capabilities while vendors compete aggressively for their business.</p><p>While the corporate rivalry between Anthropic and OpenAI dominates industry headlines, the real economic impact extends far beyond Silicon Valley boardrooms. 
The development of AI systems capable of sustained coding work for 30 hours represents a fundamental shift in how software gets built, with implications that extend across every industry relying on technology infrastructure.</p><p>These advancing capabilities signal broader workplace transformation ahead. As AI systems demonstrate increasing proficiency at complex, sustained intellectual work, the technology industry&#x27;s competition for coding supremacy foreshadows similar disruptions across fields requiring analytical thinking, problem-solving, and technical expertise.</p>]]></description>
            <author>michael.nunez@venturebeat.com (Michael Nuñez)</author>
            <category>Automation</category>
            <category>Dev</category>
            <category>Programming &amp; Development</category>
            <category>Software</category>
            <category>Technology</category>
            <enclosure url="https://images.ctfassets.net/jdtwqhzvc2n1/qyB4FkFDwzcJMFwLkpg52/2fa0335d017169fc9296f756bc924835/nuneybits_Vector_art_of_brain_with_circuit_pathways_in_burnt_or_af313220-841a-4fca-a27c-072866d0243d.webp?w=300&amp;q=30" length="0" type="image/webp"/>
        </item>
    </channel>
</rss>