This week marks the start of the Conference on Computer Vision and Pattern Recognition (CVPR), an academic convention cosponsored by the Institute of Electrical and Electronics Engineers’ Computer Society and the Computer Vision Foundation. It’s grown substantially since 1983, its inaugural year, and now sees thousands of research papers from tens of thousands of researchers submitted annually. In fact, for the first time, this year it accepted over 1,000 studies, drawn from a pool of 5,165 submissions.
Intel’s research team is one of the many that put forth their work for consideration, and in a post this morning on the company’s AI blog, it highlighted a few of the papers that passed muster with the conference organizers. “Intel believes that technology — including the applications we’re showcasing at CVPR — can unlock new experiences that can transform the way we tackle problems across industries, from education to medicine to manufacturing,” said Rich Uhlig, senior fellow and managing director of Intel Labs, in a statement. “With advancements in computer vision technology, we can program our devices to help us identify hidden objects or even enable our machines to teach human behavioral norms.”
“Acoustic Non-Line-of-Sight Imaging,” a paper coauthored by scientists at Intel Labs and Stanford University, describes a system that’s capable of constructing digital images that reveal what’s waiting around the corner. Their so-called non-line-of-sight (NLOS) technology taps sets of speakers and off-the-shelf microphones to capture the timing of the returning acoustic echoes, which inform algorithms inspired by seismic imaging to generate pictures of hidden objects. It’s not the first approach of its kind — MIT in 2017 detailed a camera that similarly reconstructs out-of-view scenes by analyzing shadows — but the paper’s coauthors say their method scales to longer distances and boasts shorter exposure times.
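The building block behind echo-timing approaches like this is simple time-of-flight ranging: the delay between emitting a sound and hearing its reflection encodes the length of the path it traveled. The following is a minimal sketch of that idea only — the names, numbers, and function are illustrative and are not taken from the paper, whose actual reconstruction algorithms are far more involved.

```python
# Minimal sketch of the time-of-flight idea behind acoustic echo ranging.
# All names and numbers are illustrative, not from the Intel/Stanford paper.

SPEED_OF_SOUND = 343.0  # meters per second in air at ~20 °C

def echo_distance(round_trip_s: float) -> float:
    """Distance to a reflecting surface, given the round-trip echo delay.

    The sound travels to the surface and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_SOUND * round_trip_s / 2.0

# An echo returning 20 ms after emission implies a surface about 3.43 m away.
print(round(echo_distance(0.020), 2))  # → 3.43
```

The paper’s contribution lies in combining many such measurements from arrays of speakers and microphones and inverting them with seismic-imaging-style algorithms, not in the ranging step itself.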
Intel’s second paper — “Deeply-supervised Knowledge Synergy” — posits a new technique for training deep convolutional neural networks, a type of AI model commonly applied to analyzing visual imagery. It cuts down on training time compared with alternatives by creating “knowledge synergies” that let components of a model’s internal network transfer knowledge to one another during training, which in the process mitigates the effect of noisy data and boosts both the models’ accuracy and their data recognition capabilities.
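To make the “knowledge synergy” idea concrete: in deep supervision, auxiliary classifier heads attached at intermediate layers are trained alongside the final one, and knowledge-matching terms additionally pull each head’s predictions toward the others’. The sketch below shows only the shape of such a combined loss, under my own simplifying assumptions — the function names, the made-up softmax outputs, and the weighting are illustrative, not the paper’s formulation.

```python
import numpy as np

# Schematic of a deeply-supervised loss with pairwise knowledge matching.
# Every head (final and auxiliary) gets its own cross-entropy term, plus
# KL terms nudging each head's predictions toward the other heads'.
# The distributions below are invented for illustration, not model output.

def cross_entropy(p_true: np.ndarray, p_pred: np.ndarray) -> float:
    return float(-np.sum(p_true * np.log(p_pred)))

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sum(p * np.log(p / q)))

def synergy_loss(target, head_preds, kl_weight=0.1):
    # Supervise every head directly (deep supervision)...
    loss = sum(cross_entropy(target, p) for p in head_preds)
    # ...and add pairwise knowledge-matching terms between heads.
    for i, p in enumerate(head_preds):
        for j, q in enumerate(head_preds):
            if i != j:
                loss += kl_weight * kl_divergence(p, q)
    return loss

target = np.array([1.0, 0.0, 0.0])          # one-hot ground truth
heads = [np.array([0.7, 0.2, 0.1]),          # final classifier's softmax
         np.array([0.6, 0.3, 0.1])]          # an auxiliary head's softmax
print(synergy_loss(target, heads) > 0.0)     # → True
```

The design intuition is that the auxiliary heads act as both extra supervision signals and mutual teachers, so useful features propagate to shallower layers faster than with a single end-of-network loss.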
“Interpretable Machine Learning for Generating Semantically Meaningful Formative Feedback,” a collaboration between Intel Labs and Istanbul’s Koc University, describes a framework that might inform a machine learning system capable of teaching children on the autism spectrum how to express and recognize emotions. The scientists cite research suggesting that children with autism, who normally struggle to glean the intent behind facial and vocal cues as naturally as their neurotypical peers, can improve if they’re provided feedback by an expert. The team reports that in experiments conducted on a children’s voice data set with expression variations, the proposed mechanism generated feedback “aligned with clinical expectations.”
The fourth and final study spotlighted by Intel — “PartNet: A Large-Scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding” — involved researchers from Stanford, Simon Fraser University, and the University of California San Diego, in addition to a resident scientist at Intel’s AI Lab. They describe a large-scale data set of 3D objects annotated with fine-grained part information. As the coauthors explain, identifying objects and their parts is critical to how humans and robots alike interact with the world, but relatively few publicly available 3D annotated corpora include instance-level, hierarchical part information. To fill that gap, the team compiled their own, with 573,585 part annotations covering 26,671 shapes across 24 object categories, and established three benchmark tasks for evaluating 3D part recognition. They additionally tested four state-of-the-art 3D AI algorithms on the corpus and introduced a novel method for part instance segmentation, which they report delivers “superior” performance over existing methods.
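To illustrate what “instance-level, hierarchical part information” means in practice, here is a toy representation of one annotated shape. The object, its part names, and the nesting are entirely invented for illustration; they are not drawn from the PartNet data format.

```python
# Toy illustration of hierarchical, instance-level part annotation:
# each part instance is its own node, and parts nest inside parts.
# The shape and part names are invented, not from PartNet itself.

chair = {
    "name": "chair",
    "parts": [
        {"name": "back", "parts": []},
        {"name": "seat", "parts": []},
        {"name": "base", "parts": [
            {"name": "leg", "parts": []},
            {"name": "leg", "parts": []},
            {"name": "leg", "parts": []},
            {"name": "leg", "parts": []},
        ]},
    ],
}

def count_parts(node: dict) -> int:
    """Count every annotated part instance at and below a node."""
    return 1 + sum(count_parts(child) for child in node["parts"])

print(count_parts(chair))  # → 8
```

Note that the four legs are separate instances sharing one label — that instance-level distinction is what most earlier 3D part corpora lacked.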