

A spotlight now shines on edge computing architecture as it looks to take on jobs currently confined to incumbent cloud computing methods. 

Advocates hope edge computing will reduce the amount of data sent to the cloud, provide real-time response and maybe save on some of the mysterious line items that show up on an enterprise’s cloud computing bills. 

Moving some runtime AI processing away from the cloud and to the edge is an oft-cited goal. Still, using graphics processing units (GPUs) for AI processing at the edge incurs costs, too. 

Edge is still a frontier with much to discover, as seen at a recent session on edge intelligence implementations at Future Compute 2022, sponsored by MIT Technology Review.


How much does AI cost?

At Target Corp., edge methods gained acceptance as the COVID-19 pandemic disrupted usual operations, according to Nancy King, the senior vice president for product engineering at the mass-market retailer. 

Local IoT sensor data was used in new ways to help manage inventories, she told Future Compute attendees. 

“We send raw data back to our data center towards the public cloud, but oftentimes we try to process it at the edge,” she said. There, data is more immediately available.

Two years ago, with COVID-19 lockdowns on the rise, Target managers began to process some sensor data from freezers to guide central planners regarding inventory overstock or shortfalls, King said.

“Edge gets us the response that we might need. It also gives us a chance to respond quicker without clogging up the network,” she said.
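The pattern King describes, processing sensor readings locally and forwarding only what matters, can be sketched roughly as follows. This is an illustrative example, not Target's actual system; the sensor names, temperature thresholds, and function are all assumptions.

```python
# Hypothetical sketch: filter freezer sensor readings at the edge and
# forward only out-of-range alerts, rather than streaming all raw data
# to the data center or public cloud. Thresholds are illustrative.

FREEZER_MIN_C = -23.0  # assumed acceptable range for a retail freezer
FREEZER_MAX_C = -18.0

def process_at_edge(readings):
    """Return only the readings worth sending upstream."""
    alerts = []
    for sensor_id, temp_c in readings:
        if not (FREEZER_MIN_C <= temp_c <= FREEZER_MAX_C):
            alerts.append({"sensor": sensor_id, "temp_c": temp_c})
    return alerts

raw = [("freezer-01", -20.5), ("freezer-02", -15.2), ("freezer-03", -21.0)]
alerts = process_at_edge(raw)
print(alerts)  # only the out-of-range freezer-02 reading is forwarded
```

Only the single anomalous reading travels upstream, which is the network-saving effect King points to: the edge node absorbs the raw data volume and responds locally.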

But she noted concerns about the cost of running GPU-intensive AI models in stores. The issue of AI processor costs, it seems, is not confined to the cloud.

With edge AI implementations, King indicated, “cost for compute is not decreasing fast enough.” Moreover, she said, “some problems don’t require deep AI.”

Edge orchestration

Orchestration of workflows on the edge will call for coordination of different components. That’s another reason why the move to edge will be incremental, according to session participant Robert Blumofe, executive vice president and CTO at content delivery giant Akamai. 

Edge computing approaches, which are closely related to the increased use of software container technologies, will evolve, Blumofe told VentureBeat. 

“I don’t think you’d see any uptake without containers,” he said. He marked this as part of another general distributed computing trend: to bring the compute to the data and not vice-versa.

Edge, in Blumofe’s estimation, is not a binary edge-or-cloud equation. On-premises and middle-tier processing will be part of the mix, too.

“Ultimately, a lot of the compute that you need to do can happen on-premises, but not all of a sudden. What’s going to happen is that data is going to leave the premises and move to the edge and move to the middle and move to the cloud,” he said. “All these layers have to work together to support modern applications securely and with high performance.”

The move to support developers working on the edge plays no small part in Akamai’s recent $900-million purchase of cloud services provider Linode.

Akamai’s Linode operation recently released new distributed database support. That’s important because the area of databases will need to undergo changes as new edge architectures arise. Architects will balance edge and cloud database options.

Balance and re-balance

Naturally, early work with edge computing leans toward prototyping more than actual implementation. Implementers today must anticipate a learning period where they balance and re-balance types of processing across locations, said session participant George Small, CTO at Moog, a manufacturer of precision controls for aerospace and Industry 4.0. 

Small cited oil rigs as an example of a setting where quickly accumulating time-series data must be processed, but where not all of the data needs to be sent to the data center. 

“You might end up doing highly intensive work locally,” he said, “and then only push the important information up [to the cloud].” Architects must be mindful of the idea that different processes operate on different timescales.

In IoT or Industrial IoT applications, that means edge implementers must think in terms of event systems that mix tight embedded edge requirements with looser cloud analytics and systems of record.
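Small's point about mixing timescales can be sketched as two cadences in one pipeline: a tight local loop consumes every raw sample, while a looser cadence emits aggregate events toward cloud analytics and systems of record. All names, sample rates, and the summary format here are assumptions for illustration.

```python
# Illustrative sketch of mixing timescales: the edge node processes
# every raw sample locally, but only periodically yields a summary
# event for the slower cloud/system-of-record side to consume.
from statistics import mean

LOCAL_SAMPLES_PER_UPLOAD = 100  # e.g., 100 Hz sensing, 1 Hz reporting

def summarize(window):
    """Collapse a window of raw samples into one upstream event."""
    return {"min": min(window), "max": max(window), "mean": mean(window)}

def run(samples):
    """Process samples on the tight local timescale; yield only
    periodic summaries on the looser cloud timescale."""
    window = []
    for s in samples:
        window.append(s)                      # tight, embedded timescale
        if len(window) == LOCAL_SAMPLES_PER_UPLOAD:
            yield summarize(window)           # looser, cloud timescale
            window = []

events = list(run(range(300)))
print(len(events))  # 300 raw samples collapse to 3 upstream events
```

The reconciliation challenge Small describes lives at the boundary between the two loops: deciding what a summary must carry so that cloud-side analytics lose nothing the business actually needs.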

“Reconciling those two worlds is one of the architectural challenges,” Small said. While learning on the edge continues, “it doesn’t feel too far away,” he added.

AI can explain

Much of the learning process involves edge AI, or edge intelligence, which places machine learning in a plethora of real-world devices. 

But there are humans on this edge, too. According to Sheldon Fernandez, CEO of Darwin AI and moderator of the MIT edge session, many of these devices are ultimately managed by people in the field, and their confidence in the devices’ AI decisions is crucial. 

“We’re learning that, as devices get more powerful, you can do substantially more things at the edge,” he told VentureBeat. 

But these cannot be “black box” systems. They need to present explanations to workers “who complement that activity with their own human understanding,” said Fernandez, whose company pursues approaches supporting “XAI,” or explainable artificial intelligence.

On the edge, people doing jobs need to understand why the system classifies something as problematic. “Then,” he said, “they can agree or disagree with that.” 

Meanwhile, he indicated, users of AI processing can now choose from a gamut of hardware, from regular CPUs to powerful GPUs and edge-specific AI ICs. Doing operations near the point where the data resides is a good general rule. As always, it depends.

“If you’re doing simple video analysis without hardcore timing, a CPU might be good. What we’re learning is, like anything in life, there are few hard and fast rules,” Fernandez said. “It really depends on your application.”
