SAN JOSE, Calif.--(BUSINESS WIRE)--April 1, 2026--

At GDC and NVIDIA GTC, LinearGame's Yoroll did something the AI game sector has largely struggled to do: turn generated video into a structured, playable experience.

At this year's Game Developers Conference and NVIDIA GTC, LinearGame presented Yoroll through a series of product demonstrations, focusing on playable implementations rather than research benchmarks or standalone model outputs. What set the Yoroll demonstrations apart from typical AI game previews was not visual quality alone but the degree to which the pieces held together as a playable system.

Yoroll, developed by Singapore- and San Francisco-based LinearGame, combines generated video with interactive systems, including branching logic and persistent state, to support structured gameplay experiences.

LinearGame showcased three titles with distinct creative directions, all built on the same underlying platform. Star Junkers combines cinematic interactive storytelling with space exploration, featuring an effectively limitless universe of planets and standalone storylines. Its real-time world-model interaction is designed to support open-ended gameplay. Dead Reckoning: Reborn is set in a post-apocalyptic environment, incorporating combat, resource gathering, shelter building, and narrative decision-making. The Occult Album focuses on mystery and exploration, where players use a camera mechanic tied to supernatural themes to investigate clues, solve puzzles, and uncover elements of an overarching narrative.

The three titles share a common foundation but point in different creative directions — and together, they reflect a broader shift within AI gaming toward systems capable of supporting coherent, replayable, and product-oriented experiences. Across the industry, attention has increasingly moved beyond isolated generated content and toward integrated gameplay systems.

According to LinearGame, Yoroll connects multiple system layers, including video generation models, interaction tracking, state management, and branching structures. These components are designed to maintain continuity and enable structured interactivity within video-based environments. The platform also incorporates world modeling systems intended to support more open-ended exploration.
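LinearGame has not published Yoroll's internals, but the combination it describes, branching structures plus persistent state layered over generated video, can be illustrated in miniature. The sketch below is purely hypothetical and is not LinearGame's implementation: scenes form a branch graph whose nodes reference generated clips, and a session object carries state across choices so later scenes can react to earlier decisions. All names (`Scene`, `Session`, `clip_id`) are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """A node in the branch graph, referencing a generated video clip."""
    clip_id: str
    # Maps a player choice to the id of the next scene.
    choices: dict[str, str] = field(default_factory=dict)

@dataclass
class Session:
    """Persistent state carried across branches."""
    scenes: dict[str, Scene]
    current: str
    state: dict = field(default_factory=dict)

    def choose(self, choice: str) -> str:
        """Follow a branch and return the clip to play next."""
        scene = self.scenes[self.current]
        self.current = scene.choices[choice]
        # Record the path so downstream scenes can condition on it.
        self.state.setdefault("history", []).append(choice)
        return self.scenes[self.current].clip_id

# A three-scene example: one decision point, two endings.
scenes = {
    "intro": Scene("clip_intro", {"explore": "ruins", "rest": "camp"}),
    "ruins": Scene("clip_ruins"),
    "camp": Scene("clip_camp"),
}
session = Session(scenes, current="intro")
```

In a video-native runtime of the kind the announcement gestures at, the `clip_id` would resolve to a generated (or on-the-fly regenerated) video segment rather than a pre-rendered asset, while the session state supplies the continuity the company describes.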

"In this framework, video is not treated as passive or pre-rendered content," said Heath Xiao, CEO of LinearGame. "It functions as part of the interactive runtime itself — and that changes what a game actually is."

That framing has implications beyond technology. LinearGame argues that if video-native games reach viable scale, the next wave of game creators may not be defined solely by engine expertise. Narrative designers, cinematographers, and people trained in interactive storytelling could find a more direct path into game production, one that has historically required deep technical overhead.

LinearGame stated that improvements in video quality, reduced inference costs, and lower latency have contributed to the feasibility of these systems. The company is currently focusing on cinematic, interactive formats that align with the capabilities of video generation models and existing content distribution channels.

Whether or not the video-native game format achieves the scale its proponents expect, Yoroll's GDC and GTC presence marked one of the clearer signals yet that the category is moving from concept toward product.

https://yoroll.ai/

About LinearGame

LinearGame is an AI-native game company based in Singapore and San Francisco. Its flagship platform, Yoroll, focuses on “video-native games.” By connecting generative video, interactive systems, state management, and branching logic, LinearGame aims to turn video from something that is simply watched into something that can be played.

CEO
Heath Xiao
LinearGame
team@lineargame.ai