
Visual effects rendering is a computationally expensive undertaking. To bring James Cameron’s Avatar to life, for instance, Weta Digital had to process up to eight gigabytes of data per second, around the clock, for over a month in an onsite 10,000-square-foot server farm. The machines inside said farm were as beastly as they come: Collectively, they comprised 40,000 processors and 104 terabytes of memory.

Google and Sony think there’s a better way to produce effects — one that involves the cloud. Toward that end, the companies today jointly announced OpenCue, an open source, high-performance render manager capable of scaling from thousands to millions of shots in hybrid cloud environments.

“As content production continues to accelerate across the globe, visual effects studios are increasingly turning towards the cloud to keep up with demand for high-quality content,” Todd Prives, product manager at Google Cloud, wrote in a blog post. “While on-premise render farms are still in heavy use, the scalability and the security that the cloud offers provides studios with the tools needed to adapt to today’s fast-paced, global production schedules … Sony’s strong history of developing software tools has made this an ideal partnership.”

OpenCue is an evolution of Sony Pictures Imageworks’ internal queuing system, Cue 3, which itself is the culmination of 15 years of in-house development. In recent visual effects and animation projects, it’s scaled to over 150,000 cores between Sony’s on-premises data center and Google Cloud Platform (GCP), Google’s suite of cloud computing services. Sony says it’s been used on hundreds of films.



OpenCue’s architecture can support numerous concurrent machines, and it has tagging systems that allow users to allocate specific jobs to specific machine types. Jobs are processed on a central render farm, freeing up visual effects artists’ workstations for other tasks. And hosts can be split into a “large number” of processes, each with its own reserved core and memory requirements.
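OpenCue’s actual scheduler is far more sophisticated, but the core idea of that paragraph — tagging jobs to machine types and reserving cores and memory on hosts — can be sketched roughly as follows. All names and structures here are illustrative, not OpenCue’s real API:

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A render host with free capacity and machine-type tags."""
    name: str
    tags: set
    free_cores: int
    free_memory_gb: int

@dataclass
class Job:
    """A render job pinned to a machine type via a tag."""
    name: str
    tag: str
    cores: int       # cores reserved per process
    memory_gb: int   # memory reserved per process

def dispatch(jobs, hosts):
    """Greedy dispatcher: place each job on the first host whose tags
    match and that still has enough free cores and memory. Jobs that
    cannot be placed stay queued (mapped to None)."""
    placements = {}
    for job in jobs:
        for host in hosts:
            if (job.tag in host.tags
                    and host.free_cores >= job.cores
                    and host.free_memory_gb >= job.memory_gb):
                host.free_cores -= job.cores
                host.free_memory_gb -= job.memory_gb
                placements[job.name] = host.name
                break
        else:
            placements[job.name] = None
    return placements

hosts = [Host("gpu-01", {"gpu"}, free_cores=16, free_memory_gb=64),
         Host("cpu-01", {"cpu"}, free_cores=32, free_memory_gb=128)]
jobs = [Job("comp_shot_010", "gpu", cores=8, memory_gb=32),
        Job("sim_shot_020", "cpu", cores=16, memory_gb=64)]
print(dispatch(jobs, hosts))
# {'comp_shot_010': 'gpu-01', 'sim_shot_020': 'cpu-01'}
```

In a real render manager the same bookkeeping runs continuously against thousands of hosts, which is what lets a hybrid farm overflow from on-premises machines into cloud instances carrying the same tags.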

“We hope that [OpenCue] can help all studios to better take advantage of the scale and power of GCP and are looking forward to seeing what amazing visual content users will create next,” Prives said.

As of today, OpenCue’s source code, along with executables and documentation, is available on GitHub, with tutorials and sample projects forthcoming.

It isn’t GCP’s first foray into the visual effects industry. Back in 2014, the Mountain View company acquired Zync Render, a service that facilitated cloud-based effects rendering. (It rendered scenes in Star Trek Into Darkness and Looper, among a dozen other feature films and commercials.) And in 2015, Google launched a Los Angeles cloud region for GCP and rolled out Google Cloud Filestore, a managed network-attached storage (NAS) service for apps — such as rendering software — that require a file system interface and a shared file system for data.

More recently, Google acquired Anvato, bringing a fully managed, end-to-end video processing and distribution platform for live and video-on-demand editing, video analytics, and more to GCP.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.