Mark Cerny, the architect of Sony’s upcoming PlayStation 4 video game system, said that PC architecture had finally grown up enough to become the foundation of a sophisticated home console.
In a speech at the Gamelab conference in Barcelona, Spain, Cerny said that the Japanese company had learned that the PlayStation 3's hardware, based on the Cell microprocessor, was just too complex for game designers. Designers eventually mastered the technology, but in the early days of the PS3, not enough good games exploited the processor. The slow acceptance of the PS3 led to the eventual departure of Ken Kutaragi, the father of the PlayStation business. Cerny stepped into his shoes as the architect of the PS4.
The PlayStation 3 launched in 2006, and Sony conducted a postmortem in 2007. That process was more collaborative than the initial design of the Cell, which was largely done in secret.
“The obvious path [for the PS4] was to use Cell,” Cerny said. Once developers mastered Cell’s many subprocessors, or cores, they could work magic. But the team decided to look at options with central processing units (CPUs) and graphics processing units (GPUs).
The conventional wisdom was that x86 (the PC architecture based on Intel’s chips) was “unusable in a game console,” Cerny said. “PowerPC was a straightforward architecture.” During the Thanksgiving holiday in 2007, Cerny researched the whole history of x86. He found that x86 chips from Advanced Micro Devices and Intel had finally matured enough to be used in a game console. He didn’t spell it out, but Cerny probably meant that the prospect of combination chips had finally arrived. In the traditional PC, the CPU and the GPU were separate chips. But AMD had purchased graphics chip maker ATI Technologies in 2006 for $5.4 billion, and it was in the process of designing chips that contained both the CPU and the GPU. That design cut the cost of the chips, the most expensive components in a game console, in half. But it usually sacrificed performance.
AMD, however, was working on creating CPU/GPU combinations that could take advantage of being on the same chip. Spurred by the competition, Intel did the same thing. By 2011, both companies were able to introduce combination chips for the PC. By betting on these combination chips, Cerny had correctly guessed that x86 could work in a game console. In doing this research, Cerny decided he wanted to have a bigger role in the PS4’s design. He successfully made a pitch to become the architect of the machine. In early 2008, the design of the PS4 began in earnest.
Cerny focused on a more collaborative approach. “We started frank conversations with the game team,” he said. He created a questionnaire for the developers outside of the company. It asked them what they wanted to see in a next-generation console. The questions asked what type of CPU, GPU, and other details they wanted. The goal was to create something that would be 1,000 times more powerful than a PS3.
“They were not fooled for a minute by the abstract nature of the questionnaire,” Cerny said, and the developers correctly saw that Sony was seeking their opinions for the PS4. Cerny talked to 30 teams, and he received enlightening answers. They were not what he expected.
“They wanted a system with a unified memory,” he stated. This meant that devs wanted the PS4 to have just one pool of memory, not two. PCs and earlier game consoles used two different kinds of memory to feed data to the CPU and graphics. But unified memory would be easier to program. Developers said the proper number of CPU cores would be four or eight. Sony eventually chose eight.
Drawing a lesson from the PS3, “they didn’t want exotic,” Cerny said. “If there was a GPU that could do real-time ray-tracing [a sophisticated technique used in ultra-realistic graphics], they didn’t want it for the PS4.” Ray-tracing would have been fascinating, Cerny said, but it would have forced game developers to throw out all they had learned from the last generation of graphics.
Cerny liked those answers. He wanted an architecture that would be easy for developers to use early in the console’s life cycle but sophisticated enough for them to further exploit later on.
The PS4 uses a 256-bit bus and a type of memory, GDDR5, that the fastest graphics cards utilize. The combination of the bus (which is wider than a 128-bit bus) and the faster memory results in a pipeline that resembles a rushing stream. It can send data through the chip at a rate of 176 gigabytes per second.
That’s fast, but it wasn’t the only option. Sony also considered a 128-bit bus with a more complicated memory structure, which could send data even faster and stage some of it in a smaller eDRAM memory on the chip. The result could be a system with 1,088 gigabytes per second of bandwidth, more than a terabyte per second. That seemed obviously faster than the 176-gigabytes-per-second option, but Cerny said the on-chip memory would be very complicated to manage. So his team went with the simpler approach, while Microsoft, by contrast, chose the more complicated path for the Xbox One. As a result, on day one of the PS4 launch, Sony’s developers should already know how to exploit the architecture.
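The 176-gigabytes-per-second figure follows directly from the bus width and the memory's transfer rate. Here's a minimal sketch of the arithmetic, assuming the PS4's GDDR5 runs at an effective 5.5 gigatransfers per second (a published spec, but not a number stated in Cerny's remarks above):

```python
# Peak memory bandwidth = bus width (in bytes) x effective transfer rate.
# Assumption: the PS4's GDDR5 operates at 5.5 GT/s effective, a figure
# from published specs rather than from the speech itself.

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth in GB/s for a given memory bus."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * transfer_rate_gt_s

# PS4: a 256-bit bus feeding 5.5 GT/s GDDR5
print(peak_bandwidth_gb_s(256, 5.5))  # 176.0, matching Cerny's figure
```

The 1,088-gigabytes-per-second figure for the rejected design came mostly from on-chip eDRAM rather than the external bus, which is why the same simple formula doesn't reproduce it.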
By switching to PC technology, Sony could gain huge benefits because developers would have tools immediately available for making titles. They could start working on new game designs almost immediately and get to playable prototypes much faster than they could for the PS3. The result would be better, cheaper, and more timely launch releases.
Cerny said that the graphics chip also incorporated advances that had emerged on the PC side. Because the chip can be programmed for non-graphics tasks, Sony saw a way to let developers make their experiences more sophisticated over time, leading to richer titles as the generation matures. Those capabilities include decompression, physics, raycasting for audio, collision detection, and world simulation. The graphics chip could handle tasks that otherwise couldn’t be done in the machine, and that’s the path for improvement over time.
The choices that Sony made were far different from Microsoft’s. For the Xbox One, Microsoft invested heavily in the Kinect motion-sensor technology, which accounts for a large part of the cost. Sony did not include such a sensor in its base unit. That allowed the company to price its system at $399 while Microsoft will sell the One for $499. The Xbox maker is using the same x86 vendor, AMD, as Sony. But the Xbox One has a less-powerful CPU/GPU combination, analysts say.
Microsoft has made an even more radical choice: what it calls “cloud processing.” That means that the Xbox One will reach outside of the console to get more processing power from Internet-connected data centers, the cloud, to handle game-processing tasks. Sony isn’t using that approach, and it might have a tough time arguing that its console is better. After all, one of the highest-rated games of the 2013 Electronic Entertainment Expo was Titanfall, a Microsoft exclusive coming from Respawn Entertainment and Electronic Arts for the Xbox One.
Who made the right choices? We’ll find out when the games come out.