VB: Intel came up with their own history of micro servers, kind of.
Feldman: They did! [laughs] Isn’t that remarkable?
VB: They didn’t really mention SeaMicro that much. Just that one of the Intel fellows was bashing his head against a wall for a while before anyone listened to him.
Feldman: Let me tell you. When we had a box up and running, he still had a PowerPoint. Their weaknesses in the Atom reflect a schizophrenia in that company. They don’t want ARM to come up from below. That’s an important thing, because in the history of compute, everyone gets beaten from below. So they want to have parts there – they want to say they’re a player and a visionary – and yet they really want you to buy a Xeon. That’s why their Atom parts are weak.
VB: What I don’t quite understand with computing in general is how it seems to bounce back and forth between centralized and decentralized.
Feldman: Why is that hard to understand? We do the same thing in networking.
VB: Well, not hard to understand, but why don’t you just make up your mind?
Feldman: Are you a football fan? The same thing happens in many competitive arenas. Offensive linemen got big, so defensive linemen got fast. That game keeps playing out. You had these bigger and bigger guys on the offensive line, so all of a sudden the speed rusher came in. He could get outside the big guys fast.
So you had centralization and more centralization. Then networking, the speed of connectivity, got to be such that you could have decentralization. You could go to the cloud. Originally you had centralization because the cost of compute was so high. That was the shared compute of the early ‘70s. You had your modem. The real limited resource was the compute cycle.
Now we’re centralizing compute at Amazon or Verizon or the other major cloud companies because communication is fast. That’s very interesting. You see the same thing in networking. We used to have decentralized routing. That’s what routers do. Now we’re going to an SDN model where it’s centralized. You have a big view of what’s happening in the network, rather than everybody having two hops of view.
VB: With the way it’s going now, how does that help you, say, beat the traditional approach?
Feldman: In a number of ways. The big driver in the market today – and what’s so different today – is the power and the rate of growth of the mega data center and the big data center guys. Just to give you an idea, I think it took JPMorgan Chase 100 years to be one of the largest consumers of compute. It took Facebook four years. We’re talking about crazy new business models that produce unbelievable demand for compute.
That demand for compute isn’t the same as the old demand for compute. They have had to rethink their software. That means the workload is different. That means the underlying machine, the CPU, is different. In the cloud, when you go to an Amazon AWS or to Verizon, you don’t know what CPU you’re using. The brand has been disintermediated. It’s been removed. You just get a slice. That’s true in the private cloud, too. If you’re an engineer, you don’t care. What you want is eight gigs of DRAM and some compute. Those are tremendous changes, which in our view work very much against Intel.
There are some fundamental changes as well — the rise of the ARM ecosystem, the fact that these very large demanders of compute would like customization. That is done extremely easily in an ARM ecosystem and very painfully in the traditional Intel approach. In the x86 world it takes three or four years and $400 million to build a part. In the ARM world it takes 18 months and $30 million. You can do a custom part for a very different type of customer.
How that relates to VentureBeat is, when was the last time there was a startup doing an x86 part? Montalvo? They raised $150 million and blew up. There’s no innovation there, no startups doing it, because it’s too expensive. There are many startups doing ARM parts of one type or another, because you can do it for a reasonable amount of money.
VB: There’s also the GPU compute wave.
Feldman: There is. That’s part of the same general thrust, where you can specialize your compute for a particular type of work. That’s a great example of a type of work that’s better done with a slightly different type of core, a graphics core, than is done with a traditional processor core. That’s very much the same notion. There is so much work now, and the distribution of work is such that you can use a particular type of engine, a graphics engine, to do that type of work. You can use an ARM engine to do this type of work. You can use an x86 engine to do this type of work.
VB: So micro servers and GPU compute are benefiting each other? Or are they competing in some way?
Feldman: I’d say they’re benefiting from the same underlying trend. One size doesn’t fit all in compute. Not in form factor, not in processor. That’s really what’s happening.
VB: How do you guys stack up now against Intel? They seem to have another refresh wave coming here.
Feldman: We’re slightly smaller. [laughs] I think we stack up really well. Our platforms, ranging from the client side all the way through servers, are extremely strong right now. We’ve made super progress. Our small-core parts, which are in the most interesting part of the server market right now, are better not just than Centerton, which they’re shipping now, but better than Avoton, which they will be shipping soon. We’re pleased with how we stack up right now.
VB: The percentage of the server business that shifts to micro servers, are you getting a better sense—
Feldman: More than 20 percent. That’s our estimate.
VB: Last year, Intel was talking about how it would be 10 percent. It’s already over 20 percent? Or is that projected?
Feldman: Projected, for 2016. Some of the largest micro server customers don’t report. Google doesn’t report to IDC. But if you look carefully, I think it’s already in the four to six percent range, and we’ll continue with steady growth.
VB: Who are some other big names that have endorsed it?
Feldman: We have a big announcement at the end of the month, a really big one. Stay tuned.