When you see an animated movie like How to Train Your Dragon 2, you may not think about the technology behind its special effects. But the creators and artists responsible for the movie toiled on it for more than five years to deliver animations so lifelike that they don’t call attention to themselves. You don’t notice small bugs. You just marvel at the realistic water, the emotion in the faces, or the shimmering of Hiccup’s leather armor.
But the tech behind the film represents the “absolute pinnacle” of technology and creative media, according to DreamWorks. To make the movie, DreamWorks Animation had to remake its computing infrastructure and create new technologies like Apollo, the platform for making the film, and Premo, a tool that artists could use to build images in real time. These tools make the artists behind the animations much more efficient, tapping both multicore workstations and cloud computing infrastructure.
Lincoln Wallen, chief technology officer at DreamWorks Animation, and Pete Baker, vice president of software and services at Intel, talked with VentureBeat about the tech behind the movie and the whole foundation of computing infrastructure that allows the company to work on ten movies at the same time. Those movies lead to the creation of more than 5 billion digital files. Here’s an edited transcript of a section of an interview with Wallen and Baker.
Above: DreamWorks Animation CTO Lincoln Wallen
Image Credit: Dean Takahashi
VentureBeat: What’s the tech behind the movie?
Lincoln Wallen: What you’re seeing here is the absolute pinnacle of creative media cultural product. It’s the top of the stack there. In our movies, we want and aspire to have the top of the deck. Dean’s produced a fantastic movie. I know you saw a little bit today. I hope you’ve seen it. If not, absolutely see the rest of it.
We’ve got creative and Hollywood here. We have, again, the archetype of Silicon Valley and the core disruptor for the last century and heading into the next, which is silicon. Moore’s Law has been a key transformer of every business and every consumer product in my lifetime. To have Intel as a substantial part of this achievement is natural.
On the other hand, you’ve seen the relationship with the creator, but we also have a software aspect here. Pete leads the software and software services group at Intel. I sit here in the middle as the sort of CTO, CIO, chief disruption officer that’s trying to bring about radical change within our business. These are the elements. This is the customer and these are the enabling pieces. Software is the thing that makes the difference and makes the silicon shine. Intel has recognized that with the amazing resources and effort put into software tools, software libraries, and software resources.
VB: What’s the investment here?
Wallen: What we’ve done is decided to invest in what I hope you’re now getting a glimpse of, which is a radical new way of putting these elements together. DreamWorks has always been somewhat unique, somewhat at the cutting edge. Kate made a reference to the fact that more than 10 years ago, the company made a decision to own and manage its own production platform. Even at that time, that was a significant decision: to be proprietary and to invest in the engineering resources.
To give you a different perspective on that, 10 to 15 percent of my technology resources are what you would call IT. 85 to 90 percent are what we call AT, or Animation Technology, which is about the delivery of business value, not simply the operation of the enterprise. That’s allowed us to manage a platform and take ourselves to the cloud in the first wave, very early on, working with companies like HP and Red Hat to build key elements of what we today call cloud computing. We were already in a place where we had a sense of one element of the compute continuum, which is scalable data centers and infrastructure-as-a-service compute platforms.
The other piece was still challenging. It was still single-core, generally, starting to become multi-core. We knew that, to respond to that, we needed to put these two things together in a seamless way and not recognize the boundary between client and data center. We had to architect a platform that allowed us to move data and compute load across the two. That’s where the partnership with Intel was critical, because they know their silicon best, but also getting the best out of that software. Now we’re talking about threaded compute down to a few microseconds, being able to measure, schedule, and allocate resources at that level. We took a view that the client architecture, the IA architecture, was a mini-data center.
We took the cloud computing model and turned it to focus on the client side. That’s one of the reasons why, working with some substantial enabling software from Intel, we were able to create a cloud model on the client, on the multi-core system. That’s why, when we add more cores to the workstations, the software just goes faster or can do more. It’s now a highly efficient and effective distributed compute platform in its own right, with a completely seamless transition from that into a wider data center. We can put these processes together at will in order to create a different type of user experience, some of which you’ve seen and some of which we’ve talked about as we go forward.
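Wallen’s “cloud model on the client” can be illustrated with a minimal sketch: work is broken into fine-grained tasks and handed to a pool sized to the machine’s core count, so the same code simply gets through its task list faster on a box with more cores. The function and mesh here are hypothetical stand-ins, not DreamWorks code, and a Python thread pool is used only to show the scheduling pattern where a production system would use native threads.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def deform_vertex(v):
    # Hypothetical stand-in for a per-vertex character-rig computation.
    x, y, z = v
    return (x * 1.1, y * 1.1, z)

def deform_mesh(vertices, workers=None):
    # Break the mesh into fine-grained per-vertex tasks; the pool
    # schedules them across its workers. Sizing the pool to the
    # core count is what lets the same code scale as cores are added.
    workers = workers or os.cpu_count() or 4
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(deform_vertex, vertices))
```

The design point is that the scheduling layer, not the application, decides where each task runs, which is what makes the client look like a miniature data center.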
Above: How to Train Your Dragon 2 Alpha character
Image Credit: DreamWorks Animation
VB: How does the tech affect the artists?
Wallen: The impact of that on the artists was that we were able to go into design processes, as we mentioned, that didn’t start from, “What can we do?” but more, “What do you want?” They gave us incredibly pure and clear ideas about how business processes should be organized, if they took off all the constraints. You think about that process taking place in many other businesses that take a step toward being first of all digital, second of all cloud, and third of all using the compute continuum in this way. Then you can see the enormous changes that this could bring on already-available infrastructure, software, and component pieces. It’s about putting them together in the right way.
What I take away from the movie is the courageousness of the camera and the acting. The animators chose to have sequences and action in the movie — not just fighting action, but subtle, emotional action — that animators would cringe at doing, whether it’s the closeness of expression or the emotional points. One scene I love is with one character mimicking another. It’s incredible, the aliveness of those scenes. The ability to explore was one of the key things the animators got back.
On the enterprise side, we’re able to sit back and move resources around, apply compute at exactly the place where it matters most. The combination leads to better movies done more efficiently with more flexibility, more agility, and ultimately lower cost. It’s an amazing commercial, artistic, and enterprise achievement. Kudos to Intel for recognizing the opportunity and partnering so closely with us.
Intel is also enabling elements of this with key libraries that manage threading or scheduling, and with engineering resources that went down to the chipset level so that we could optimize or vectorize multi-core processing. All of these were necessary to pull it off.
VentureBeat: Are those libraries specific to animation, or can they be applied anywhere?
Wallen: Anywhere. One of the most interesting things about this is that almost all of these applications, right until you put the user workflow on top of them, are highly scaled compute models for doing whatever you like. Putting simulation in there, putting financial calculations, all those sorts of things are computable. Premo is essentially an ensemble compute engine. If you know about how weather prediction or large-scale scientific computation is done, that’s what’s going on there from an architectural point of view.
The libraries are very generic. They’re either scheduling or threading libraries, at the low level. On top of that is an architecture that knows how to put this together as if it were a large-scale database.
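Wallen’s point that the compute core is domain-agnostic can be sketched like this: the engine only schedules work, and it is the function supplied by the workflow that gives it a domain, whether animation or finance. `EnsembleEngine` and both workloads below are illustrative assumptions, not the actual Premo architecture.

```python
from concurrent.futures import ThreadPoolExecutor

class EnsembleEngine:
    """Domain-agnostic fan-out/fan-in engine: the scheduling core
    knows nothing about what it runs; the workflow supplies the
    per-item function."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def run(self, fn, items):
        # Fan each item out to a worker, collect results in order.
        return list(self._pool.map(fn, items))

engine = EnsembleEngine()
# Animation-flavored workload: scale vertex coordinates.
scaled = engine.run(lambda x: x * 2.0, [1.0, 2.0, 3.0])
# Financial workload on the identical engine: 10 years of 5% compounding.
balances = engine.run(lambda p: round(p * 1.05 ** 10, 2), [100.0])
```

Swapping the lambda is the only change between the two uses, which is the sense in which the libraries underneath are “very generic.”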
Above: Intel’s software vice president Pete Baker
Image Credit: Dean Takahashi
Pete Baker: I’m a vice president in the software and services group. I was thinking on the plane down from Portland, hearkening back to about six years ago. On paper this is a really curious marriage. We had a collection of silicon chip folks and artists and storytellers. How do we come together in a shared opportunity and get something out of it?
In reality, Intel has a host of software engineers. We have thousands of software engineers. There are the traditional folks who write BIOS, drivers, and firmware, but the group I work in also has the privilege of dealing with third parties to make their software better. That could be defined as faster, or as taking advantage of new capabilities, utilizing our tools to do so. We have some of the world’s foremost algorithmic and optimization experts. That makes sense. We know the silicon intimately. That knowledge allows us to convey insights into software, be it our own or others’, in a way that is unique in the industry.
We fast-forward about six years. As I’m reflecting back, I said, “This was a little more than curious. It was fascinating.” We thought we went into a partnership with a collection of artists and animators and storytellers. They’re actually quite a technology company as well. They’re so enthusiastic about the technology and how to use it to convey those stories, those emotional things that people can see, that jump off the screen. We have that shared enthusiasm for technology and bringing stories to life.
The realities are that we could bring to bear, of course, the benefits of the performance and capabilities of our silicon roadmap. That’s table stakes. Beyond that, we also have a host of software tuning, optimization, and creation tools that we’ve been able to apply to the problem, as well as these software and tuning experts. We’ve been working hand in hand now for five-plus years — designing the software, optimizing the software, making sure that the workflows are clean and useful and work best on our silicon. It’s so gratifying to see the fruits of that labor on the screen.