Presented by NVIDIA

On the third day of Transform 2020, the IoT, AI at the Edge, and Computer Vision Summit presented by NVIDIA underscored the tremendous promise of these technologies. IoT is being leveraged in more transformative ways than ever, the limits of compute power on devices keep getting pushed, and computer vision models are becoming faster and more accurate.

But innovation also brings new challenges. Leaders from NVIDIA, BMW, Pinterest, Intel, Uber, and Red Hat, among others, gathered to talk about the most important new use cases and the most urgent issues: ensuring greater user privacy, enabling lower latency, delivering better search and personalization, advancing automation, providing real-time intelligence, and more.

Implementing new AI technologies also brings new responsibilities like security, governance, accuracy, and explainability, as well as a major focus on eliminating biases around race and gender.

Here’s a look at some of the top panels of the summit.

Bringing the power of the data center to IoT & edge AI

Edge computing can solve specific business problems that demand some combination of in-house computing, high speed, and low latency that cloud AI can’t deliver, explained Deepu Talla, NVIDIA VP and GM of Embedded and Edge Computing.

The hardware and architecture that can support edge computing have improved significantly over the past year, including GPUs with Tensor Cores for dedicated AI processing, plus secure, high-performance networking gear. Edge server software is growing more sophisticated as well, such as NVIDIA’s EGX cloud-native software stack, which brings traditional cloud capabilities to the edge of the network. He also pointed to the company’s industry-specific application frameworks such as Metropolis for smart cities, Clara for health care, Jarvis for conversational AI, Isaac for robotics, and Aerial for telecommunications — each supporting forms of AI on NVIDIA GPUs.

Businesses shouldn’t ditch cloud AI — rather, the choice of infrastructure varies depending on the business need. In health care, for example, each hospital room could have a camera at the entrance to count or monitor the people inside, with patients able to make requests using speech recognition. Some of this processing can happen in the cloud, but there’s a strong need for real-time edge processing as well. It’s important to look at each use case to determine when a cloud, edge, or hybrid approach makes sense.

The convergence of these technologies for business applications

“Running and retraining models at the edge is going to define the next decade,” said Anthony Robbins, Vice President of North America Public Sector at NVIDIA.

However, progress in edge AI deployment requires advances in batteries, chipsets, algorithms, and other areas, said Nand Mulchandani, Acting Director and CTO of the U.S. Department of Defense Joint Artificial Intelligence Center. The government relies on the technology breakthroughs of private industry to advance its own AI deployments, he explained, and the end-to-end process is an incredibly complicated one — making this next generation a ripe area for investment.

Josh Sullivan, Booz Allen Hamilton VP and Head of Modzy, noted that inflexible, proprietary tech stacks just don’t work long term.

“Open architecture solutions that allow your teams to use the tools and languages and frameworks that make sense and integrate into your tech stack and remain extensible into your future is paramount,” he said. “I don’t think a lot of people understand if you’re going to really use AI at scale, it’s going to affect every layer of your tech stack. You have to have an ecosystem that allows lots of integrations with this to work at scale.”

Digital transformation through AI at the edge

BMW produces a car every 56 seconds, and millions of parts flow into the automaker’s factories from over 4,500 suppliers across 203,000 unique part numbers, explained Jimmy Nassif, Head of IT Planning Systems at BMW Group. To manage logistics, BMW tapped NVIDIA to develop five navigation and manipulation robots that transport materials around warehouses and organize individual parts, leveraging the company’s Isaac, Jetson AGX Xavier, and DGX platforms.

The robots, trained on both real and synthetic data, use computer vision techniques to recognize specific parts, as well as people and potential obstacles, in a range of challenging lighting conditions. BMW engineers from around the world can remotely log into a simulator based on NVIDIA’s Omniverse platform to ensure that the algorithms are continually retrained and stay accurate.

Meanwhile, in the retail industry, Malong Technologies uses machine learning to recognize products at retail self-checkouts, with overhead cameras feeding footage of items on the scanning bed to on-premises NVIDIA hardware, explained Matt Scott, Co-Founder and CEO of Malong. Running the algorithms at the edge protects consumer privacy; the models are trained with supervised learning to spot accidental or intentional mis-scans.

Edge computing also makes Malong’s platform scalable and cost-effective, able to cover thousands of stores without the latency that might be introduced by server-side processing.

Jered Floyd from Red Hat’s Office of the CTO emphasized that AI industry use cases like these depend on open platforms — for example, TensorFlow, Jupyter Notebook, and Kubernetes. Open source helps companies plug and play the best technologies for a problem to create the most effective solution rapidly.

Red Hat’s Open Data Hub, the foundation of the company’s own data science software development stack, is designed to help engineers ideate AI solutions without incurring high costs or having to master modern machine learning workflows — which allows rapid innovation using new applications and new technologies, Floyd said.


How companies are making products faster and better with AI technologies

Claire Delaunay, VP of Engineering at NVIDIA, led the invite-only executive forum roundtable, where VIPs gathered to discuss the use of autonomous machines, robotics, AI, and machine learning in the industrial and manufacturing sector. The conversation covered the future impact of AI on engineering, quality control, and R&D processes; the use of IoT and edge AI to make factory logistics, supply chains, and production more efficient; and ML-driven predictive analytics to increase overall systemic awareness, intelligence, integration, and more.

The Women in AI Awards

Transform 2020 wrapped up with the annual Women in AI Awards, recognizing women who have made outstanding contributions in the AI field. Three NVIDIA researchers were nominated in the AI Research category: Sanja Fidler, Professor at the University of Toronto and Director of Artificial Intelligence at NVIDIA; Sifei Liu, Senior Research Scientist at NVIDIA; and Anima Anandkumar, Bren Professor at the California Institute of Technology and Director of ML Research at NVIDIA. Anandkumar was named the winner of the VentureBeat Women in AI award in the AI Research category.

Check out all the sessions from the Summit here. Learn how industry leaders are implementing edge computing and computer vision across industries, and the ways they’re unlocking value and delivering ROI.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact