David Chen


Beyond GPT architecture: Why Google's Diffusion approach could reshape LLM deployment

Last month, along with a comprehensive suite of new AI tools and innovations, Google DeepMind unveiled Gemini Diffusion, an experimental research model that uses a diffusion-based approach to generate text. Traditionally, large language models (LLMs) like GPT and Gemini itself have relied on autoregression, a step-by-step approach in which each token is generated in sequence, conditioned on everything that came before it. Diffusion language models (DLMs), also known as diffusion-based large language models (dLLMs), leverage a method more commonly seen in image generation: they start from random noise and gradually refine it into coherent output. This approach dramatically increases generation speed and can improve coherency and consistency.
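To make the contrast concrete, here is a minimal, illustrative sketch of the masked-diffusion idea described above: generation starts from a fully masked ("noisy") sequence, and each refinement step proposes tokens for every position in parallel, committing a fraction of them. All names and the toy denoiser below are assumptions for illustration, not Gemini Diffusion's actual architecture or API.

```python
MASK = "<mask>"

def toy_denoiser(seq):
    """Toy stand-in for a learned denoiser: proposes a token for every
    masked position in parallel. A real dLLM would run a neural network here."""
    return [tok if tok != MASK else f"tok{i}" for i, tok in enumerate(seq)]

def diffusion_generate(length, steps):
    """Start from all-mask 'noise' and iteratively refine the whole sequence.

    Each step the denoiser proposes tokens for every position at once, and a
    fraction of them are committed (unmasked) -- a coarse-to-fine schedule.
    """
    seq = [MASK] * length
    masked = list(range(length))
    per_step = max(1, length // steps)
    for _ in range(steps):
        if not masked:
            break
        proposal = toy_denoiser(seq)
        # Real models typically commit the highest-confidence positions;
        # this sketch just commits them left to right.
        for i in masked[:per_step]:
            seq[i] = proposal[i]
        masked = masked[per_step:]
    # Commit any positions still masked after the step budget is spent.
    proposal = toy_denoiser(seq)
    for i in masked:
        seq[i] = proposal[i]
    return seq

print(diffusion_generate(8, 4))
```

The speed claim follows from the loop structure: an autoregressive model needs one forward pass per token (`length` calls), while the diffusion sketch needs only `steps` passes, each refining all positions at once.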


From OAuth bottleneck to AI acceleration: How CIAM solutions are removing the top integration barrier in enterprise AI agent deployment

With their ability to interact intelligently with external applications, AI agents are poised to become an integral part of modern enterprise workflows. No longer siloed from the outside world, AI agents promise to take on work that traditionally required human intervention, automating repetitive, high-volume tasks. Example use cases for agentic automation might include:
