The economics of the chip industry are pretty staggering. Sanjay Jha, CEO of contract chip manufacturer Globalfoundries, recently told me that it could cost between $10 billion and $12 billion to build a next-generation chip factory based on the latest technology, dubbed 7-nanometer production. And one for the generation after that, dubbed 5-nanometer production, could cost $14 billion to $18 billion.
There are only a few companies in the world that can afford to spend that much money on a chip factory. And they can do it because those chips are expected to generate billions of dollars in revenue over the life of the factory.
Intel, the world’s biggest chip maker, can certainly afford it. Globalfoundries is one of the companies that enables all of the other chip design companies to keep up. Globalfoundries’ customers, such as Advanced Micro Devices, design chips — like Ryzen desktop processors — that power the modern electronics world. And Globalfoundries handles the gargantuan technical task of building the factories that fabricate those chips with precision equipment. As a foundry, the company offloads the manufacturing so its customers can focus on design.
As a contract manufacturer, Globalfoundries creates opportunities for companies that want to provide alternatives to Intel and others in the chip business. Globalfoundries was once part of AMD. It recapitalized and spun out of AMD as an independent contract manufacturer in 2009, and it acquired both Chartered Semiconductor and IBM Microelectronics. Globalfoundries still has a tight relationship with AMD as a maker of chips based on AMD’s chip designs. It is owned by the Mubadala Investment Company. I spoke with Jha about the state of the global chip manufacturing business.
Here’s an edited transcript of our interview.
Sanjay Jha: The newsworthy part, mildly news, is we’re announcing the 12nm FinFET technology. As you know, all AMD products — Radeon, Ryzen, Epyc — have been done by us. They want a faster process, a little scaling of the process. To compete with Nvidia using 12nm from TSMC, we’re offering them a 12nm process to continue to deliver faster clock speeds and performance.
VentureBeat: Where is that at this point?
Jha: I don’t know when they’ll have products for us, but they’re designing to it already. Our factories are ready to deliver those things.
VB: The discussion I always get into at this point is that nobody agrees on what 12nm or 10nm really means, or who's really ahead.
Jha: I’ll step into that discussion. Basically, the numbers don’t mean much these days. I think Samsung has talked about 10nm, 11nm, 14nm, 8nm, 7nm, 6nm. I don’t know what they mean. The way to think about 12nm is it has higher performance and more scale than 14nm. It’s not quite the scaling or performance of 10nm. Performance may be very close to 10nm, though.
What has happened, as the line widths get closer, is that it’s getting harder and harder to get incremental performance. You can get the scale that you want, but getting performance is harder. You can get some power consumption reduction as well. With 14nm, most people use 0.8 volts. At 10nm, most people are using 0.7 volts. As you go, there’s a clear scaling with the ratio of the squares of those two numbers. That gives you about a 20-25 percent reduction in power consumption. So we deliver performance, some power consumption reduction, and scaling.
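Jha's 20-25 percent figure falls out of the voltage-squared term in the dynamic power equation. A minimal sketch, assuming the .8 and .7 figures are supply voltages in volts, as the squared-ratio arithmetic implies:

```python
# Dynamic CMOS power scales roughly with the square of the supply voltage
# (P ~ C * V^2 * f), so at fixed capacitance and clock frequency the
# power ratio between two nodes is (V_new / V_old)^2.
v_14nm = 0.8  # typical supply voltage at 14nm, per the interview
v_10nm = 0.7  # typical supply voltage at 10nm, per the interview

power_ratio = (v_10nm / v_14nm) ** 2
reduction = 1 - power_ratio
print(f"power ratio 10nm/14nm: {power_ratio:.3f}")
print(f"power reduction: {reduction:.1%}")
```

The reduction comes out around 23 percent, inside the 20-25 percent range he cites.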
Exactly what the numbers mean, though: if you go from 7nm to 5nm and look at the square of the two numbers, the ratio is close to half. That kind of ideal scaling is long since gone. What you’re seeing more nowadays is people optimizing their technology for particular applications. We have a technology called 12 FDSOI. That turns out to be a very interesting technology because, first of all, it’s a planar transistor, not a FinFET transistor. Therefore, the complexity of the process is lower. It tends to have lower leakage current. Because it has higher performance, it tends to have lower dynamic power at a given performance level: to hit a performance target, you scale your voltage down a bit and get the power savings. 12 FDSOI is very interesting, and it’s getting a lot of traction.
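The "ratio is close to half" arithmetic is just the square of the linear shrink. A quick sketch of the ideal area scaling for the node names mentioned here (real nodes no longer achieve this ideal shrink, which is Jha's point):

```python
# Ideal area scaling between two node names: if the linear dimension
# shrinks by new/old, area per transistor shrinks by (new/old)^2.
def ideal_area_ratio(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

for old, new in [(14, 10), (7, 5)]:
    print(f"{old}nm -> {new}nm: ideal area ratio {ideal_area_ratio(old, new):.2f}")
```

Both steps come out near 0.51, i.e. "close to half" per generation in the ideal case.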
One other reason why it’s getting traction is because FDSOI turns out to be very good for integration with analog and RF. As you know, more of the edge devices are now wirelessly connected than ever before. You have Bluetooth or Wi-Fi or full WAN radio integrated. If you want to do that, FDSOI turns out to be a very good technology. That’s part of the reason why we conceived of the FDSOI series — battery-powered, connected, cost-sensitive devices.
IOT falls in that category. Automotive falls in that category. Increasingly, I think you'll see 5G fall in that category. At the same time, one reason we went from 22nm to 12nm is that edge AI needs more transistors and more circuitry. We scaled the technology to deliver more real-time edge decision-making in 12nm.
VB: As far as how quickly the manufacturing could scale up when you have these new AMD parts coming in — their financials didn’t seem to reflect what you would think would happen when you have a superior product on the market. It seemed very gradual to me. It seemed like multiple quarters might be the time frame.
Jha: Certainly two or three quarters. That’s generally the case in semiconductor. The only place where you see incredible ramps these days is in mobile. The volume is so large that people dedicate whole fabs to ramp the technology. In the old days, ramps always happened slowly. There’s a very big Christmas effect in mobile. You announce a part in August-September, launch a product in late September, and have a quarter worth of shipping products for the Christmas season. Mobile has been unique in that.
VB: It’s possible that someone could introduce something and you might not see big effects until a year later?
Jha: A year later would be long. But remember, in mobile particularly Apple controls the hardware, software, and apps ecosystem. In PC, generally you have to think about when the chips become available, when the APIs become available, when the games become available, when the games become optimized. In PCs, the dads and grads and back to school are the two big seasons. There’s a little Christmas effect, as well, a little more diffuse than in mobile. In mobile, the fourth quarter is always the big one. But there are different dynamics as to how quickly things ramp.
VB: For you guys, it’s not necessarily learning how to do this. It’s more how the demand comes in?
Jha: Right. We’re in full ramp. There’s a bit of learning in every part you do. There’s a little learning from our point of view to understand what the yield sensitivities are from AMD’s point of view, and understanding if they’ve debugged every last thing. There are often minor ramps of parts, either for optimization, power reduction, or bug fixes. That happens, and it does play a role in the ramp. But not a big one. Occasionally you see those things.
VB: Going back to where the specs are on nanometers, how do customers view the problem of figuring out who’s really ahead of the game as far as manufacturing?
Jha: They look at four things. They look at density, performance, power consumption, and cost. We call it PPAC. That’s what most customers care about. They don’t care about 12nm or 10nm. Even if the density of 12nm is a little lower than 10nm, if the complexity of the process is lower and the cost is lower and the power consumption may be lower, that may allow them to go after the mobile space a little better than 10nm. They look at the PPAC and target it to particular models.
VB: Are you tailored to a customer like AMD, or do you feel like your factories are fairly general-purpose?
Jha: I’d say we work very closely with AMD now, to the point that we definitely do things for AMD which make a big difference to their product. Let’s say somebody has done a CPU core at AMD in 14nm, and they want us to improve the performance. What they want is to not have to redesign that core from scratch. That’s very expensive. They’re working on 7nm already and they only have so many resources. They want incremental improvements, where you get more performance, without having to redesign the entire product.
That’s generally the request we get from customers. We’re very keen to do that ourselves because we don’t want to create new standards or libraries. We have probably 100 partners who will generate IP on our platform. We don’t want to move the technology so much that they have to completely redesign. It’s one thing to re-characterize it. It’s another to redesign. We do optimize a lot for our customers, but within certain constraints.
VB: As far as where you see the chip industry now, what are the overall characteristics of the market like?
Jha: There are two things. One is that there is scaling. There’s 14nm, 12nm that we’re announcing, and then 7nm. We’ll have 7nm available early to middle of next year. We’re well on the path of scaling. 7nm is probably one of the most dramatic changes in PPAC that we’re going to see for a while.
The second part, much more interesting, is that more of the systems are becoming heterogeneous. If you look at the Ryzen chip, what’s happening is the cost of 7nm per square millimeter is much higher than the cost would have been at 14nm. These chips have areas where they absolutely need the benefits of 7nm, but also areas with no need for 7nm. For instance, most CPUs have a memory interface, a display interface, serial links like USB. Those things don’t need to be in 7nm, and they’re becoming an increasing part of the circuit.
People are separating those, leaving them in 14nm and then putting a CPU chip in and putting links to it. There are some performance and architectural tradeoffs that have to be done very carefully, but people are making package choices. Moore’s Law is an economic law, not a physical law. It’s becoming less economic to scale everything in the way that used to be the case earlier. Cost is not coming down at the same rate. Power is not coming down at the same rate. Performance is not going up at the same rate. You can get the scale, but the cost of getting scaling is becoming higher. You have to double-pattern, triple- and quad-pattern lines to get the scale.
Moore’s Law is definitely slowing down, but people are innovating within that constraint. Another thing that’s going on, something probably even bigger, is that more things are getting connected. Wireless integration is becoming even more important. Not only that, but more things are battery-powered, so power consumption is getting more important. PCs used to have a 30Wh battery. Cell phones have a 2-3Wh battery. IOT devices have a 300mWh battery. At each step, it’s a reduction of a factor of 10 in the amount of battery you have available.
Two other things go with that. One, PCs used to run for eight or nine hours on a good day. Even today, it’s hard to get nine hours out of a laptop. Cell phones have to last 24 hours. IOT devices have to last months, if not years. Two, the physical volume of the device has gone down. From a PC to a phone is more than an order of magnitude of change. You go from a phone to an IOT device, and there’s more than another order of magnitude.
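Those capacity and lifetime figures translate into sharply different average power budgets. A rough sketch using the interview's numbers; the one-month IoT runtime is an illustrative assumption:

```python
# Average power budget = battery capacity / required runtime.
# Capacities (Wh) and the laptop/phone runtimes follow the interview;
# the one-month IoT runtime is an assumption for illustration.
devices = {
    "laptop": (30.0, 9),       # ~30Wh, ~9 hours
    "phone":  (2.5, 24),       # 2-3Wh, 24 hours
    "iot":    (0.3, 30 * 24),  # 300mWh, ~one month
}
budgets_mw = {name: wh / hours * 1000 for name, (wh, hours) in devices.items()}
for name, mw in budgets_mw.items():
    print(f"{name:6s}: ~{mw:.2f} mW average power budget")
```

The budget drops from watts for a laptop to roughly 100mW for a phone to well under a milliwatt for an IoT device, which is why power, not scaling, drives this vector of technology development.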
This is driving technology development on a different vector. That’s not scaling. That’s looking at power. By the way, at ASP the processor in a laptop is maybe $100. The one in this phone is $15. It’ll be a dollar or two in an IOT device. We’re focusing more on cost-effective, power-effective, and connected devices. We believe the growth rate there is much higher.
That’s where you’ve seen our 22 FDX and 12 FDX technologies. They’re planar technologies, not FinFET technologies. They’re much less complex, and the cost of doing the development is much lower. If you had to spend $100 million to develop an IOT device, you could never justify it. What we’re doing is at much lower cost. A set cost for an IOT device has to be dramatically lower. FinFET is always going to be more expensive. You’re not going to see a lot of IOT devices on FinFET.
You’ll see the scaling vector, and that scaling vector is becoming a little slower, because cost is going up and people are applying packaging — they’re stacking memory, for instance, and doing heterogeneous integration of technologies. Then you’re seeing this cost-sensitive connected technology development. We have to support both. 22nm is the last technology you should think of as a single-pattern planar technology. 12nm, in my view, is the last optical, cost-effective dual-pattern technology. After that, 7nm, which is initially optical, will go to EUV in the long run. You’re seeing these discontinuous switches in technology, and you have to think about the architectures and how they pair up with those.
VB: How much do chip factories cost these days?
Jha: It depends what technology. 7nm will be $10-12 billion. 5nm will probably be $14-18 billion. Remember, what happens in our business is you have to invest a large amount of money, and you’re turning capex into operating cash flow. The biggest risk in our business now is only a limited number of people can use leading-edge technologies. Qualcomm, Apple, Nvidia, AMD, and the FPGA guys are the only ones who absolutely need it. We’re leaving out Intel. Even there, their success and their impact in the foundry business has yet to be determined, quite frankly.
The vast majority of the mobile space is not actually at the leading edge. Only the premium tier is at the leading edge, and that’s now less than 15-20 percent. There’s another economic argument. At the leading edge, it costs you somewhere between $250 million and $500 million to develop a chip. Assuming your revenue-to-R&D ratio is five — you’re spending 20 percent of revenue on R&D, very much on the high end for tech companies. It’s usually between 10 and 20 percent, so revenue is five to 10 times what you spend. You need to make $2.5-5 billion in revenue to justify that development.
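The break-even math here is simple division. A sketch, assuming R&D should stay within a 10-20 percent share of revenue, as the interview describes:

```python
# Revenue needed so that a chip's development cost stays within a target
# R&D-to-revenue fraction (10% of revenue means revenue is 10x the cost).
def revenue_needed(dev_cost_usd: float, rd_fraction: float) -> float:
    return dev_cost_usd / rd_fraction

for dev_cost in (250e6, 500e6):
    rev = revenue_needed(dev_cost, 0.10)  # 10% R&D share -> 10x multiplier
    print(f"${dev_cost / 1e6:.0f}M development cost -> ${rev / 1e9:.1f}B revenue")
```

At a 10 percent R&D share, the $250-500 million development range maps to the $2.5-5 billion revenue requirement Jha cites; at 20 percent, the required revenue halves.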
The number of markets that justify that are shrinking. Servers can justify it. Graphics can justify it. Mobile can justify it. But increasingly, you can only justify it at the premium tier. At the lower tier you can’t. All of that plays into that shrinking. There are really only four companies in the world that can develop leading-edge technologies, and Intel’s technology isn’t generally available. That leaves TSMC, ourselves, and Samsung. TSMC and ourselves are the only ones with a broad range of technologies. If you wanted to combine RF technology with leading-edge technology, TSMC and ourselves are the only ones who could produce it. Samsung tends only to be a leading-edge company.
VB: Given that outlook, are you worried that advances in computing could slow down?
Jha: I’m not. Advances in computing will happen in three different ways. First, IPC, instructions per cycle: if you look at that, it’s slowed down considerably already. Over the last three or four years, architectural improvements haven’t driven dramatic IPC gains. What’s interesting is that AMD’s success right now is purely architectural. Intel has not really been making architectural advances; it’s mostly been semiconductor technology. I think you’ll see architectural improvement drive more than semiconductor technology.
What you can do is pack in more circuitry. That goes some way toward making up for the lack of improvement. But architecture is the more efficient way to improve your technology. AMD’s success is architectural success. There’s no doubt that AMD, at 14nm, is competing against Intel’s 10nm because of its architecture, not because of the semiconductor technology alone.
Second, we’ll see packaging solutions. We’ll see stacked memory to provide memory bus bandwidth that wasn’t possible before. You’ll see more 3D and 2.5D packaging. The third thing is solving system problems, as opposed to just CPU problems: combining cameras, understanding AI, using non-von Neumann architectures to accelerate the next-generation problems. Next-generation problems, the vast majority of that will not be about browsing or gaming, but about understanding images and making real-time decisions on structured data.
Von Neumann architectures are not ideal architectures. Non-von Neumann doesn’t necessarily need the scale. We have, what, about 80 to 100 billion neurons in our brains? Sixty percent of our neurons are for pattern recognition. As far as I can tell, we’ve just started our journey into pattern recognition. The feature size of our brains is measured in microns. We’re at nanometers and still nowhere near. It’s not about scaling. The energy efficiency of our brain, by the way, is about a thousand times higher than silicon doing the same kind of computing, and that’s with micron features. It’s about architecture. It’s not just about scale.
Now, I’m not saying we shouldn’t continue to scale. If you want to pack server cores into a big 700mm² chip, scaling matters. But the number of applications where it continues to matter will become smaller.
VB: As far as financing those factories, are you optimistic about that? Does it mean the big equity guys are in this in a big way, or the governments of the world?
Jha: Governments have certainly played a role. We’re building a new factory in China, and we’ve certainly gotten support from the government. By and large, I don’t see private equity guys getting into financing the chip business. I’ve not seen them. If anything, we’re running a mile away. They’re very much focused on services and software. AI has certainly gotten a lot of financing. Lidar and radar, automotive things have gotten a lot of venture funding lately.
I would say that semiconductor is receiving more private investment today than it has in the last 10 years, though. Largely because people are beginning to realize that to solve real fundamental problems, you have to control the interface between semiconductor and software. System houses like Google, Tesla, Amazon, Microsoft, they’re all now investing more in semiconductor than they ever have.
We’re working directly with these guys. Before, they bought chips from the semiconductor houses. Now they work directly with us. You’ve seen Google’s tensor processing unit architecture. That was done by Google working directly with a foundry, albeit not with us. In successive generations, we’re working on a number of architectures with each of those guys. In machine learning and AI, there’s been a lot of funding in the Bay Area, and we’re working with a number of them. The startups are interested in meeting with a silicon company that can provide everything they don’t necessarily have in-house. We have to be willing to scale our capabilities to meet whatever they need.
We’re spending a lot of time increasing our capability to support applications which we see as important. AI is one. Building radars for automotive is another. You have to integrate 77GHz radar. There’s a huge amount of combining radar knowledge with image knowledge. You look at a scene, you look at the distances, and it allows you to make sense both ways. They call that sensor fusion, and it’s becoming very important.
VB: The EPC guys think there’s some solution in alternatives to silicon. Are materials changes becoming important?
Jha: Yes, but not for logic. Lidar is an example. It’s based on photon generation. You can’t generate photons out of silicon. Or, a lot of people have tried without any success. You need materials like indium phosphide, gallium nitride, silicon germanium. They do become more important for specialist tasks, like power amplification, opto-electronics, high-performance applications. We’re one of the leaders in all of this. In RF, I would say generally we have very large market share. We’re seen as leaders.
VB: But that wouldn’t necessarily take away the lion’s share from silicon.
Jha: I don’t think so. People have been developing alternatives to silicon for a long time. Roughly speaking, silicon is a $350 billion industry. I don’t see that changing. In fact, I see a dramatic growth as a result of intelligence going to the edge in semiconductor demand. I think a golden age for semiconductor is coming. You can’t do AI without silicon. Most of the time, people want to use applications where there’s real-time decision-making. “Do you recognize this object?” Those decisions have to be made in real time at the edge. That drives square kilometers of silicon consumption.
VB: Is AI going to have a manpower impact in your industry? Are jobs going to be eliminated?
Jha: Every generation of technology that has come along has been feared for its job reduction. What it does is potentially create more jobs. It’s just that the skill set required for new jobs is dramatically different. A friend of mine has a startup, a box about the size of this chair with wheels, an industrial cleaning device. All the shopping malls and offices today that are being cleaned by a vast number of people can now get cleaned by this. You let four of them loose at 10 p.m. when the mall closes, and at 7 a.m. it’s clean. That’s definitely job reduction. But it’s creating other jobs, monitoring and the like.
Autonomous vehicles have a large impact on the job market. Exactly what the impact is, I don’t know. But one thing I’m certain of is there will be skill set transfer. The economic impact of these technologies, by and large, has not been negative over the last three or four industrial revolutions. We had the first industrial revolution, globalization, the internet, and now AI. That’s four massive changes I can think of, and by and large they have increased jobs.
VB: How is your confidence in Silicon Valley, especially the competitiveness of the U.S. in chips? Governments are really competing to get these chip factories going.
Jha: The manufacturing, certainly, other than Intel and ourselves — we’re the two large manufacturers in the U.S. We have three factories up in New York. There are others with factories, but a lot of the manufacturing in terms of dollar amounts has moved out of the U.S. The innovation continues to be here — not exclusively, but mostly. That’s one reason we’re headquartered in the Bay Area. We’re closer to the next generation of innovation.
You may not have this context, but I was at Qualcomm and I saw this play out. If you look at the fabless industry — the growth of semiconductor largely happened with fabless companies. Mobile drove that growth. The winners in mobile were TSMC, Qualcomm, ARM, Apple, arguably Samsung a bit, arguably Google a bit, although how much Google makes off Android isn’t clear to me. There were lots of losers, but these were the big winners.
Being close to the Bay Area and looking at what technologies are coming along and where to invest — remember, the semiconductor industry was all driven by FPGAs and GPUs. CPUs were all controlled by two companies that were IDMs. I have to understand what industries will drive the next generation of growth and make sure that I’m there with the right technology for those industries. That’s why being in the Bay Area is a real competitive advantage. We work with startups not because they make us revenue but because we get a much better understanding of where the industry is headed.
VB: There used to be an interesting conversation about whether you could afford to put a manufacturing plant in Silicon Valley, or if somewhere else was better. Now it’s about whether your engineers can afford to buy a house in Silicon Valley.
Jha: There’s a school of thought, I don’t know if you’ve heard this, but it says that part of the reason engineers work so hard in the Bay Area is because they have to afford a house here [laughs]. If you’re in Idaho, you don’t have to work so hard. You don’t have to think about where the next startup is and how to succeed. I don’t know if that’s a valid theory, but I’ve heard it said.
VB: How do you feel about diversity issues in the chip industry?
Jha: If you think of the manufacturing side of the chip industry, there’s no diversity issue because the workforce skill set is not at a level where you can talk about diversity in terms of gender or race. From a race point of view, technology generally has been a little more diverse anyway. From a gender point of view, there has always been an issue, but in manufacturing there’s a little less of an issue.
VB: Any other big topics you’re thinking about?
Jha: 5G is going to be very disruptive. We’re extremely well-positioned for 5G. I think 5G will be as disruptive to wireless communication as data was to voice. Data today is completely — remember, it used to be all about the number of minutes. Who cares about minutes anymore? Data is the driver. Today we get, on the high-end, megabits. I think we can get gigabits at a sustained basis. That changes your interaction models dramatically. Second, there’s a lot of attention being paid to security, as well as latency. You can do a lot more things with lower latency.
VB: How soon do you think 5G arrives?
Jha: There’s the old saying about how the future is here, it’s just not evenly spread. A company out of Dallas — I’m on the advisory board — is deploying systems starting early next year. They’ll be small systems, but what will happen in the beginning — Verizon has announced plans to deploy fixed systems early. For it to be mobile, I think probably 2020. But fixed systems can deploy early next year.
VB: Maybe that accomplishes what Google wanted to do with Google Fiber.
Jha: Yes, yes. Also, remember there are lots of satellite systems going up, three or four, to provide coverage as well. Satellite systems work well for remote areas. I don’t know that satellites can compete effectively in dense areas for data delivery.