VentureBeat: Is there a point on this horizon where we see a 100 percent connected world?
Wall: I don’t know if we’ll ever get to 100 percent. If you’re over 90 percent, that’s pretty substantial. This work on economic segmentation has hugely influenced where we thought the markets were really growing relative to our investment, how we shape our workforce, and how we shape our go-to-market. It’s increased our investment in automation considerably. What it did, though, is lead to something else.
If you need automation to keep up, you have to drive to a higher degree of automation. The backbone of automation is compute and the IT infrastructure. That led to us looking at energy very broadly, how energy unfolds in the next 30 years. What are the implications for that? Like I said before, one-third of energy currently is used for manufacturing. If that continues to grow, that creates a huge issue in terms of how you keep up with that.
When you look at emerging markets, in order for them to maintain their economic expansion, you have to have substantial investment in their energy production. It has to become much more efficient. Certainly when you look at China or India, not surprising, they’re just trying to keep up, and that translates to coal plants and other polluting energy. You can predict that. But here’s where it got interesting. When you then look at energy and automation, in order to automate, you need to be able to compute on the data you’re collecting. It turns out that the volume of data we’ll be collecting over the next 15 years is so monumental that you can process less than one percent of it in the cloud.
If you’re going to automate, if you need automation to keep up with the economic cycle, you have to figure out a different way to automate, because you can’t get all the data back to the cloud to run on your cloud-centered machine learning and AI framework.
VentureBeat: Even 5G isn’t going to solve that?
Wall: No. 5G is just a blip. I’ll give you an example, and there are dozens of these you can use, but let’s just take jet engines for airplanes. Rolls-Royce for quite some time has been leasing or selling engines containing more than 5,000 sensors, and those sensors are generating 10 gigabytes of data per second to improve fuel efficiency. You could never send that to the cloud, even if you had backhaul fiber. That’s happening everywhere. It’s in engines. It’s in 3D printing devices. It’s in any of these edge devices that use sensors in large numbers to automate for efficiency.
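To put Wall’s figures in perspective, here is a back-of-the-envelope check on the quoted sensor rate. The flight length and engine count are illustrative assumptions, not numbers from the interview:

```python
# Rough scale check on the jet-engine figures quoted above
# (5,000+ sensors, 10 GB of data per second per engine).
GB_PER_SECOND = 10    # per engine, as quoted in the interview
FLIGHT_HOURS = 10     # assumed long-haul flight length
ENGINES = 2           # assumed twin-engine aircraft

seconds = FLIGHT_HOURS * 3600
total_gb = GB_PER_SECOND * seconds * ENGINES
total_tb = total_gb / 1000

print(f"{total_tb:.0f} TB per flight")  # 720 TB for one 10-hour flight
```

At hundreds of terabytes per flight, even a dedicated fiber backhaul could not keep up, which is the point Wall is making.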
You see it in Microsoft, Amazon, and Google. Let’s take Amazon as an example. Amazon is moving to this hybrid model. It’s expressed in a software framework they call Greengrass. It’s where they’re taking programming that we know how to do in the cloud and extending that same approach down to edge devices. It’s the same programming model that goes for how you do machine learning and AI in the cloud, and it’s moving to the edge. That’s because of this data and energy constraint. You don’t have the energy or the bandwidth to move it, so you have to do the compute on the edge.
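The hybrid model Wall describes can be sketched schematically. This is not the actual Greengrass API; the function names, link capacity, and sensor rate are all illustrative assumptions:

```python
# Schematic of the hybrid cloud/edge model: one inference function,
# one programming model, deployed wherever the data constraint allows.
def infer(sample):
    # The same model code runs in the cloud or on the edge device.
    return "anomaly" if sample > 0.9 else "ok"

LINK_CAPACITY_MBPS = 10   # assumed uplink from the device
SENSOR_RATE_MBPS = 800    # assumed raw sensor output

def place_compute():
    # If the uplink cannot carry the raw data, run inference on-device
    # and ship only the small results upstream.
    return "edge" if SENSOR_RATE_MBPS > LINK_CAPACITY_MBPS else "cloud"

print(place_compute(), infer(0.95))  # edge anomaly
```

The design point is that placement becomes a deployment decision rather than a rewrite, because the programming model is uniform across cloud and edge.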
VentureBeat: That means everything has to get a lot smarter, then? All those sensors have to be smart themselves.
Wall: That’s exactly it. Now what happens is, we get to this third big point for us, which is the technology piece of this. We don’t have enough energy to put all the data in the cloud, and we don’t have enough bandwidth to move all this data to drive the automation needed to deal with economic acceleration. It all drives to the edge.
Now what we’re seeing is the emergence of edge-based machine learning ventures building dedicated pieces of silicon that allow you to do AI and machine learning inference on the edge. That’s why you see this explosion of more than 50 different startup companies doing silicon machine learning accelerators on the edge. The architectures being put in place are 1,000 times more efficient than what you see in a traditional PC. They’re incredibly energy-efficient, dedicated to machine learning, sitting on the edge with this uniform cloud programming framework, whether it’s Azure or AWS Greengrass, sitting on top of it.
Those energy-efficient compute architectures are changing the nature of software development, which leads us to the topic of software 2.0. Software 1.0 was, you write software. You develop an algorithm, code it, comment it, test it. About 90 percent of your effort is spent figuring out the part that solves the problem and maintaining it. The actual writing of the code is a small percentage of the overall life of the code. But now we enter a world in which you don’t write code. Instead, you pull together data. You’re just operating based on data.
A good way to think about it, to give you an example, is a 3D printer. Right now, instead of writing firmware for a 3D printer, I take a machine learning, edge-based, ultra-efficient architecture chip, and I take all of the data that comes from that 3D printer, which has thousands of individual sensors and actuators. Which nozzle gets fired, which loader gets turned, every sensor for heat: I collect all of that data and send it to the machine learning chip. The machine learning chip learns that, and then you run it as an inference on the machine. All you do is accumulate the data.
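A minimal sketch of the software 2.0 idea Wall describes, on a toy setup: instead of hand-coding a rule that maps a heat-sensor reading to a nozzle setting, the rule is fitted from logged data and then run as inference. The logged data and the linear model are illustrative, not HP’s actual pipeline:

```python
# Software 1.0: you would hand-write the control rule, e.g.
#   def nozzle_power(temp): return 0.5 * temp + 3.0
# Software 2.0: you log (sensor, actuator) pairs from the machine,
# fit the rule from data, and deploy the fitted model as inference.

data = [(t, 0.5 * t + 3.0) for t in range(20, 80, 5)]  # illustrative log

# Ordinary least squares for y = a*t + b, written out by hand.
n = len(data)
sx = sum(t for t, _ in data)
sy = sum(y for _, y in data)
sxx = sum(t * t for t, _ in data)
sxy = sum(t * y for t, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def nozzle_power(temp):
    """Learned 'firmware': pure inference on the fitted model."""
    return a * temp + b

print(round(nozzle_power(50), 2))  # recovers the underlying rule: 28.0
```

The program is now defined by the accumulated data; improving it means collecting better data and refitting, not editing control logic.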
VentureBeat: Does this make you guys want to do your own AI processors, or drive in some other product direction?
Wall: A different product direction. What that did is three things for us. One, there’s a gold rush right now among everyone doing AI and machine learning silicon. We’re watching that space and tracking the investment in it to see who wins; every large silicon player is doing it. Whoever emerges there is going to be a big winner, and we want to make sure we take advantage of that.
The second piece is, for the first time ever we’re starting to see how IoT consolidates in a standard way. Right now it’s the wild west, with everyone fragmented among different security models and the like. You can start to see how things like AWS with Greengrass, and Azure with their IoT framework, create a standard way in which to do IoT. That becomes an opportunity, because suddenly you can draft off that infrastructure.
That gets to the third point, which is that we need to make sure we’re investing in the domain-specific AI and machine learning frameworks. Creating a standard for our products becomes crucial.
VentureBeat: That seems to reinforce a direction. Did you find anything here that makes you want to change the direction of something you’re already doing?
Wall: Many of those changed what we’re already doing. Rather than assuming all of this is going to happen in the cloud, it really pushed us forward to a model where machine learning will happen on the edge. It changed our thinking about what the right partners would be for some of those algorithms. It changed the architecture of some of our products in 3D printing and our graphics print business. It’s changing our thinking on architecture substantially.
VentureBeat: Logically, then, a printer should have a lot more smarts in it in the future, doing things on its own rather than trying to get its intelligence in the cloud.
Wall: I don’t know if I think about it that way. Today we already have a huge number of sensors and actuators sitting in a 3D printer. I don’t think that’s going to change substantially. But it’s what we do with the data. Rather than trying to individually control it, like what goes on in firmware and software today, we’re going to have to accumulate and act on that data, and use the data to drive the sensors more than anything.
One last concept. If you have ultra-efficient edge-based architectures, and you now have software 2.0 where you’re accumulating and using data to drive that, it allows you to go after what we call virtual machines, or digital twins. That’s where we’re really taking a bigger focus. Can we now take an entire system, like a 3D printer or a large graphics printer or even a desktop printer, and create a complete digital twin of it? I can do all of my development on that virtual twin, all of my testing on that virtual twin, and then deploy the physical hardware, doing all of that much faster.
It sounds very futuristic, but it’s what’s going on in much of the automotive and self-driving car world today. All of that happens with digital twins. Then it goes out to testing and learning on the roads. We’re trying to take similar ideas and drive them into our core development.
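The digital-twin flow Wall outlines can be sketched with a toy example. The drastically simplified printer model and its interface are assumptions for illustration; the point is that control code is developed and tested against the twin and later runs unchanged against hardware:

```python
# A toy digital twin: a software stand-in that mirrors the physical
# device's interface, so development and testing happen entirely in
# simulation before any hardware is deployed.

class PrinterTwin:
    """Simulated printer exposing the same interface as the hardware."""
    def __init__(self):
        self.temp = 25.0    # ambient, degrees C
        self.jobs_done = 0

    def heat(self, target):
        self.temp = target  # real hardware would ramp; the twin is instant

    def print_layer(self):
        if self.temp < 180:
            raise RuntimeError("nozzle too cold")
        self.jobs_done += 1

def run_job(device, layers):
    # This control code is what you'd later point at the real device.
    device.heat(200)
    for _ in range(layers):
        device.print_layer()
    return device.jobs_done

print(run_job(PrinterTwin(), 3))  # 3
```

Because the twin enforces the same constraints as the hardware (here, a minimum nozzle temperature), bugs surface in simulation rather than on the factory floor.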
VentureBeat: On the rise of Asia, does that cause some different thinking around technology or products?
Wall: It affects products clearly, just because of differences in culture and importance. It changes our design synergies. Over time we’ve been looking to Asia more and more, unsurprisingly. I’ll tell you what it affects more. It affects us more strategically, in our market motions. If you look at HP’s business today, about 45 percent occurs in the Americas, about 30 to 35 percent occurs in Europe, and the rest, about 20 percent of our business, occurs in Asia. The growth is all going to be in Asia, so we’ve shifted our investment to strategically build out the go-to-market motions stronger across Asia. That’s a big impact.
VentureBeat: Are there any other areas you wanted to mention, anything we missed?
Wall: I gave you the broad overview, the high level, and that’s really where we’ve focused more for megatrends this year. The changing nature of economic segmentation, how that impacts energy and data, and then what that means for compute. We have some nice white papers with a lot more detail and we’ll be happy to share.
VentureBeat: How are you going to talk about this and use it going forward?
Wall: In terms of the use, a lot of that goes to our internal processes and our overall planning cycles. In terms of the rollout, we’ll be sharing our findings, because while it affects us, it also has big value outside as well.