This article is part of the Technology Insight series, made possible with funding from Intel.
“The industrial sector needs 5G standalone,” declares Intel senior principal engineer Richard Burbidge. He’s referring to 5G network infrastructure that doesn’t rely on an existing LTE network. That independence, Burbidge asserts, will provide the speed, low latency, and reliability required in these extra-demanding environments.
And Burbidge would know. With 30 years of experience in the telecom industry and 20 as part of 3GPP, the standards body that governs 5G technology, UK-based Burbidge knows 5G and its suitability to industrial applications like few others. Prior to joining Intel in 2014, he played key technical roles at Research In Motion, Motorola, and Philips Research Labs. Burbidge also served as chairman of 3GPP’s RAN2 workgroup from 2015 to August 2019 and remains deeply involved in 3GPP’s efforts.
When VB interviewed Burbidge for a story on 5G and its benefits for factory floors, we ended up using two statements from an hour-long discussion — a criminal waste of so much experience and insight. We returned to the 10,000-word transcript with an eye on expanding understanding of how 5G continues to evolve, the role standards play in that evolution, public vs. private architectures, centralized vs. distributed approaches, the advances in Release 16, and how industrial adopters might best gain from the technology.
Here are highlights from our conversation, edited for length.
VB: With so much IoT buzz, it’s easy to focus on 5G’s density and speed improvements. How will these help industrial users?
RB: Chiefly, for the industrial sector, 5G provides the possibility to replace wired networking with wireless. You can reduce the amount of cabling, reduce the problems with cable routing and reconfiguration. You can also have connectivity at a higher density than might’ve been practical to achieve with wired networking. You can have sensors and devices and actuators in far more locations, including on mobile equipment. This gives industrial companies more opportunities to upgrade their environments with minimal disruption to existing infrastructure.
VB: How does 5G enable higher device density?
RB: The standard already had what was necessary from 4G. The density question is more often one of deployment than of underlying technology. If you want to support very high device density, you need to ensure that you put in the cells and that your Node Bs have sufficient connection capacity to support all those devices. It’s more about best practices than technical capability.
VB: With 3GPP’s Release 16 activities now complete, what benefits might be possible for industrial and factory applications?
RB: While Release 15 targeted the needs of existing mobile operators with enhanced mobile broadband operating in licensed spectrum, Release 16 focused more on expanding 5G’s market reach into new segments. The industrial segment was a very important part of that, as you can see in some of Release 16’s functionality. First, there’s time-sensitive networking [TSN] over 5G. TSN refers to an IEEE standard intended to be run over Ethernet. It’s very much aimed toward industrial use cases that have multiple nodes that need to be time-synchronized.
Another one was support for private networks, or “non-public” networks. This gives industrial organizations the opportunity to deploy their own network and ensure that only approved devices within the industrial facility can connect to and use that network.
VB: How do you see 5G types being adopted around the world?
RB: The 5G rollout we’ve seen so far has been mostly based on the non-standalone deployment, meaning the mobile operator is relying on the LTE network to work alongside NR. But we are starting to see the first wave of 5G standalone deployments by operators. It’s actually 5G standalone deployments that are needed by industrial users. You won’t see as much of the latency and reliability benefit with a non-standalone network.
There’s sometimes a perception that industrial use cases would utilize mobile network operator deployments, and that’s not necessarily a given. Will industrial users want to utilize network operators’ deployments to help support their own factory deployments, or will they prefer to have a totally private network where they own and run the equipment themselves? Maybe we’ll see some of the larger industrials deploy their own network to have total control over it, and I suspect smaller industrials will piggyback off mobile network operators.
VB: Are you saying that larger industrial players have deeper pockets and the need for private infrastructure, whereas smaller industrials are more inclined, maybe because of capex limitations, to go with public infrastructure?
RB: Yes, right. There’s a certain degree of caution in the industry, as well. Larger enterprises may feel there’s less risk if it is their own network, although there are risks both ways.
VB: How do licensed and unlicensed spectrum enter into this?
RB: The licensed assisted mode 3GPP developed for 5G is similar to what was available previously in LTE, where you have a cell that is operating in licensed spectrum operated by a traditional network operator. And it can work in concert with a cell utilizing unlicensed spectrum. This was really for the traditional operators to do offloading into unlicensed spectrum. But in 5G NR, we also added support for standalone deployments. This is something 3GPP never did for LTE. It’s standalone deployment of NR unlicensed that’s relevant to the industrial sector. This enables an industrial organization to deploy their private network in 5 GHz or 6 GHz unlicensed bands. Whether they choose to do that depends very much on the spectrum situation in the country they’re in. My guess is that licensed spectrum is always going to be the first choice for industrial users. But the NR unlicensed does provide that opportunity if there’s no licensed spectrum available.
VB: You hear about [location-based device] positioning as a headline feature of Release 16 for industrial use, but doesn’t the feature predate that?
RB: In release 15, there was some degree of positioning support included, but this was based on the LTE signals. It relied on you having an LTE network available, or it relied on traditional satellite. Release 16 adds what I would call native positioning, based on the signals that are transmitted and received by the NR system itself. This can work both indoors and outdoors. Indoors has an accuracy target of three meters. In the industrial segment, for example, you could imagine it being used for tracking items around a factory or warehouse. It’s probably not accurate enough for real-time robotic control, but there is more to come in terms of accuracy, and that certainly is something that is being progressed in Release 17.
VB: Why and when is low latency important within factories or similar industrial settings?
RB: Say you have a situation where you’ve got multiple robots performing functions in concert with each other. Clearly, those devices need to be tightly synchronized in time. And if you want to synchronize them over your network, then you need a network with almost no latency. Now, if a robot is just doing a very repetitive action, then it’s sufficient to synchronize the robots once; they then go through their repetitive actions and always work perfectly together. But in more complicated robotic control systems, the controller for multiple robotic devices may be centralized: a single controller for multiple devices. For those robotic devices to all work together, you need very low latency communication from that centralized controller to make them do the right things at the right times.
Then, look at more traditional control systems. You typically have a system made up of various sensors and actuators, with some kind of feedback system whereby your sensor input is controlling what you do with those actuators. Minimizing feedback delay in these kinds of control systems is critical. If you want to have a distributed control system operating over a wireless network, then eliminating delay is going to be critical for making those control systems work correctly.
VB: When do you want a distributed control model versus centralized?
RB: Typically, if you have something that is very processing-intensive — for example, some image processing based on AI — you’re going to want to have that in a more centralized location compared to the location of the sensors. For latency reasons, you probably want it located on the premises, not off in the cloud. And as soon as what you’re trying to control is itself distributed over a wide area, then, of course, you end up with a distributed system.
VB: How does 5G improve on latency relative to 4G?
RB: In LTE, the latency was 5 milliseconds. In NR, it would be possible to achieve 0.5 ms in some cases. When I refer to latency, I’m talking about the latency of the radio network, from the Node B to the device, and including the protocol layers, but not the network beyond that.
5G has a lot of flexibility. There are various trade-offs you can make, like between latency and bandwidth. So, it depends how you choose to configure it. But in the best case, you can get down to a half-millisecond latency in NR.
VB: What about barriers to further latency improvements? Why would we need improvements beyond 5G?
RB: I’m sure new use cases will come along that need further latency improvements. One that gets mentioned is the tactile internet, which is often referenced as part of augmented reality. If you want to control things remotely over the internet, the delay requirements may be very tight. You’ll always have certain processing delays in the equipment. No matter what you’ve done to transmit your signal over a very short period of time, there’s always going to be some processing at the receiving side, and at the transmit side to prepare to transmit something. Processing delay is a trade-off in how much complexity you want to put into the transmitting and receiving equipment. Fundamentally, there’s always going to be propagation delay. That one will be tricky to improve upon. But yes, there will be scope to go further with latency than we are now, though fewer and fewer applications will need these tighter and tighter timings.
VB: What does reliability mean in this discussion?
RB: Reliability means: if you transmit a packet of data over your communication system, what is the probability of that packet being successfully received at the far end? With wired Ethernet, the probability of your packet being successfully received is maybe 99.999%. Because with 5G in the factory environment we’re trying to replace a wired network, reliability has to be close to what factory operators are used to with a wired network.
Even 4G could achieve that five-nines reliability using automatic retransmission mechanisms. In the 4G system, there are different layers at which retransmissions can occur: at the physical layer, higher up in the protocol stack, and also at the TCP level. You have those retransmissions to ensure reliability, but all of them come with a trade-off in terms of delay. Consequently, 4G wouldn’t be a suitable replacement for Ethernet in your factory.
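The coupling Burbidge describes between retransmission-based reliability and delay can be sketched with a few lines of arithmetic. This is a minimal Python illustration; the 1% per-attempt loss rate and the 5 ms per-attempt delay are made-up numbers chosen for the example, not 3GPP figures:

```python
# Illustrative sketch of why retransmission-based reliability costs delay.
# Loss rate and per-attempt delay are hypothetical values, not 3GPP figures.

def delivery_probability(p_loss: float, max_retx: int) -> float:
    """Probability a packet eventually arrives, given a per-attempt loss
    rate and up to max_retx retransmissions (1 + max_retx attempts)."""
    return 1.0 - p_loss ** (1 + max_retx)

def worst_case_delay(per_attempt_ms: float, max_retx: int) -> float:
    """Every retransmission adds roughly one more attempt's worth of delay."""
    return per_attempt_ms * (1 + max_retx)

# With a 1% per-attempt loss rate, two retransmissions exceed five-nines
# reliability -- but the worst-case delay has tripled.
print(delivery_probability(0.01, 2))  # 0.999999
print(worst_case_delay(5.0, 2))       # 15.0
```

The reliability target is met, but only by budgeting for three attempts' worth of delay, which is exactly why layered retransmissions alone don't make 4G an Ethernet replacement.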
VB: How does 5G improve on this?
RB: Well, reliability also means the probability you’ll receive a packet successfully at the receiving end within a certain delay constraint. This is where reliability and latency come together, and you realize they are not totally independent characteristics. 3GPP’s work on reliability in Release 16 is quite flexible and offers various trade-offs. For example, to achieve reliability, you could apply stronger channel coding to the data you want to send. But here you’re trading off reliability against overhead. You can also do repetition: instead of retransmitting when something fails, you can blindly send it, say, two or four times. This can help achieve your reliability target without introducing delay, but it also has an overhead trade-off.
And even within very tight delay constraints, you may be able to achieve one retransmission at the physical layer level, which does better in terms of overhead than just blindly sending it twice. Of course, with repetition you’re going to double the amount of data you’ve sent, even if the first one was received correctly, whereas if you retransmit only when there’s an error, you’re not introducing as much overhead. But there is some delay associated with that.
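The overhead comparison between blind repetition and feedback-based retransmission can be made concrete with a small sketch. The 10% per-copy loss rate below is a hypothetical figure for illustration only:

```python
# Comparing blind repetition with feedback-based retransmission.
# A 10% per-copy loss rate is assumed purely for illustration.

def repetition_reliability(p_loss: float, copies: int) -> float:
    """Delivered if at least one of the blindly sent copies arrives."""
    return 1.0 - p_loss ** copies

def repetition_cost(copies: int) -> float:
    """Blind repetition always sends every copy, needed or not."""
    return float(copies)

def retransmission_cost(p_loss: float, max_retx: int) -> float:
    """Expected number of transmissions when we resend only on failure."""
    return sum(p_loss ** i for i in range(1 + max_retx))

p = 0.10
# Both strategies reach 99% delivery here, but at different costs:
print(round(repetition_reliability(p, 2), 6))  # 0.99, with zero feedback delay
print(repetition_cost(2))                      # 2.0 transmissions, always
print(retransmission_cost(p, 1))               # 1.1 expected transmissions
```

Under these assumed numbers, two blind copies and one allowed retransmission both reach 99% delivery, but repetition pays double the airtime up front, while retransmission averages only 1.1 transmissions at the cost of waiting for feedback before resending.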
VB: Is 5G going to affect how AI is implemented across the factory, including in roles that were conventionally done by humans?
RB: There’s potential, because 5G enables more data to be gathered, and it enables that data to be gathered from places where it wouldn’t otherwise have been practical. Thinking off the top of my head, maybe you could have industrial plants operating 5G-enabled drones with cameras sending their data to be processed by AI. It’s not just a matter of collecting more data. You’re enabling new opportunities to gather data.