My main hope for 2015 is that the industry can move on from the “What is the cloud, public clouds are good/bad, private clouds are good/bad” debate and develop a more nuanced understanding.

Generally, the experience of 2014 is that most people like the model of consumption matching demand, but still have concerns about the nature and types of workloads that can be ported. 2015 is the year when we stop talking about the cloud as a “What if?” choice and start talking about the different architectures and their suitability for your workloads.

So unlike some commentators, who have thrown in the collective towel on cloud computing by focusing purely on public cloud over the Internet and calling it a “commodity” or the domain of a few, we are students of history: we think we are merely in the Precambrian phase of cloud evolution, and we look toward the next, more impactful stage.

1) The difference in delivery of private cloud and public cloud will disappear.

The desire to have public cloud on-demand services with private cloud security and control will be realized through the integration and automation of the network alongside the traditional trio of CPU, RAM and storage, but not in the way that most predict.

There will be an absolute coming together of network infrastructure and direct control over it, not simply the “workaround, cop-out” method of a virtualized software-defined networking overlay running over the Internet. For computing, read processing; for network, read inter-process communication. This means “real” networks with “real separation” being automated, with computing and core routing increasingly acting as policy and data. It’s not in the labs anymore. It’s in networks globally.
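As a concrete sketch of what this could look like in practice, consider provisioning a genuinely separated network segment and the compute attached to it through one automation API. The endpoint, resource names, and fields below are our own hypothetical illustrations, not any vendor’s actual API; the point is simply that the network is requested and automated as a first-class resource next to CPU, RAM, and storage.

```python
import requests

API = "https://api.example-cloud.net/v1"  # hypothetical endpoint, illustration only
HEADERS = {"Authorization": "Bearer <token>"}

# Request a private network segment with real separation (e.g. its own VLAN/VRF),
# rather than an overlay tunnelled over the public Internet.
network = requests.post(
    f"{API}/networks",
    headers=HEADERS,
    json={"name": "emea-private", "separation": "dedicated", "region": "eu-west"},
).json()

# Provision compute with that network attached as a first-class resource,
# alongside the traditional trio of CPU, RAM, and storage.
vm = requests.post(
    f"{API}/vms",
    headers=HEADERS,
    json={
        "cpu": 4,
        "ram_gb": 16,
        "storage_gb": 200,
        "network_id": network["id"],  # the network is automated like compute
    },
).json()

print("provisioned", vm["id"], "on private segment", network["id"])
```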

Robert Metcalfe, the inventor of Ethernet, and later John Gage, Marc Andreessen, and Professor John Day all remind us that networking is simply inter-process communication between computing pools. Fast forward to today, and the network and the computer are increasingly integrated, enabling expanded choices for workloads and the building of distributed computing platforms across the globe with implicit internal routing.

Robert Metcalfe’s description of network as inter-process communication is writ large across 2015.
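To make that description literal, here is a minimal Python sketch of our own (not from Metcalfe or Day): the code two processes use to talk on one machine is exactly the code two machines use to talk across a network; only the address changes.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9099
ready = threading.Event()

def serve() -> None:
    """A process that answers messages; it does not care where its peer runs."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the listener is up
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"ack: " + conn.recv(1024))

# On localhost this is plain inter-process communication; point HOST at
# another machine and the very same client code below becomes networking.
threading.Thread(target=serve, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from a peer process")
    print(cli.recv(1024).decode())  # -> ack: hello from a peer process
```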

2) This architectural evolution/revelation will give rise to the debate about the super center versus distributed cloud computing.

Distributed computing, like the Internet before it, has subtler entry points, so it will be more pervasive and flexible than the dumb access model of a big super center “somewhere” in the cloud.

3) Machines talking to machines will become the norm.

Smartphones are not smart without a network. The Internet of Things today means everything talking to central decision makers; as the network and the computer merge, that merged platform starts to facilitate communication at the edge as well as the core.

For machine-to-machine (M2M) communication to become smarter and broader in application, it needs a platform that is global yet local, with secure separation between workloads.
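As one illustrative sketch (ours, with hypothetical hostnames), a device publishing a reading over TLS to the nearest regional node shows what “global but local, with secure separation” can mean at the transport level: data stays close to the edge, and every machine-to-machine hop is authenticated and encrypted.

```python
import json
import socket
import ssl

# Hypothetical regional endpoints for a "global but local" M2M platform: data
# from a device in Milan lands on a platform node in the EU, close to the edge.
REGIONAL_ENDPOINTS = {
    "eu-south": "m2m.eu-south.example.net",
    "eu-west": "m2m.eu-west.example.net",
}

def publish(region: str, reading: dict) -> None:
    """Send one sensor reading over TLS to the nearest regional node.

    TLS gives each machine-to-machine hop authenticated, encrypted transport;
    a production platform would add per-tenant credentials for real separation.
    """
    host = REGIONAL_ENDPOINTS[region]
    ctx = ssl.create_default_context()  # verifies the platform's certificate
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as conn:
            conn.sendall(json.dumps(reading).encode() + b"\n")

publish("eu-south", {"device": "meter-0042", "kwh": 1.37, "ts": "2015-01-07T09:00:00Z"})
```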

If you are providing “smart” services to towns and cities across Europe, you need to be able to comply with Europe’s laws, and as the workloads become larger, performance and inter-process communication become more critical. This means that, like the Internet before it, distributed computing will become the faster and more agile solution for M2M in Europe.

Matthew Finnie is chief technology officer of Interoute.