Presented by Intel

If there’s one goal every manufacturer in the multi-trillion-dollar industrial segment shares, it’s operating a factory free from production defects. According to several studies by Intel spanning 2018, 2019, and 2020, AI and edge computing make it possible to identify more than 99% of visible manufacturing defects before a product ever leaves the line.

“One of the most important things manufacturers care about is product quality,” says Brian McCarson, Vice President and Senior Principal Engineer, Internet of Things Group (IOTG) at Intel Corporation and a featured speaker at Transform, VentureBeat’s upcoming digital conference. “Manufacturers prefer throwing away fewer defective products. They strive to have less rework and fewer customer returns. They also want to reduce the cost of their operations by making their tools and processes more efficient, and improve the reliability of their machines so they can proactively do maintenance before it is too late and have more predictable uptime.”

That’s why on-floor factory edge computing solutions are transforming the entire industry, says McCarson, who specializes in the industrial segment at Intel IOTG and is laser-focused on helping upscale the efficiency and capability of the industrial ecosystem.

Edge computing is enabling real gains in real factories by bringing AI compute closer to the origin of data, closer to the mass of sensors connected to the machines, and closer to devices on the factory floor. Instead of being sent to a remote data center or the public/private cloud, data is processed and acted on right there at the source. Factories are reaping the benefits of end-to-end solutions, from the time the data is created and ingested until the time a meaningful insight has been generated from it, with some companies demonstrating detection of more than 99% of manufacturing defects at the very production step where the defects were generated.

The edge advantage on the factory floor

The human eye and brain are fantastic at a few types of pattern and feature recognition. In a thousand photographs, you’ll spot someone you recognize at a glance. Our eyes and brains can detect motion, or screen out unnecessary information in a crowded scene, to zero in on the object we’re looking for.

What humans are not good at is long-term, repetitive scanning tasks that demand spotting extremely subtle variations: deviations of even a fraction of a millimeter from specification can result in a product not working properly, or not working at all.

“Even on a high-definition camera, some factory defects are less than a pixel in size,” McCarson says. “One tiny little dot on your screen could be an early warning that a product may not work as designed in the market.”

Automatic defect monitoring systems constantly scan products coming off the machines to ensure they meet all the necessary quality indicators. A camera running an AI defect-detection algorithm can achieve more than 10 times the accuracy of the human eye, and can analyze more than 100 times as many results in the same amount of time.
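As a rough illustration of how such a monitoring loop works (not Intel’s actual pipeline), each camera frame can be scored against a known-good reference and flagged when it deviates beyond a tolerance. The `classify` function below is a hypothetical stand-in for a trained vision model, and the threshold is an assumed value:

```python
# Hypothetical sketch of an automated defect-monitoring loop.
# In a real system `classify` would be a trained vision model (e.g. a CNN);
# here it is a stand-in that scores a frame by how far its pixel values
# deviate from a known-good reference image.

DEFECT_THRESHOLD = 0.05  # assumed tolerance: >5% mean deviation flags a defect

def classify(frame, reference):
    """Return a defect score in [0, 1]: mean absolute pixel deviation."""
    deviations = [abs(a - b) / 255 for a, b in zip(frame, reference)]
    return sum(deviations) / len(deviations)

def inspect(frames, reference):
    """Yield (frame_index, score) for every frame that exceeds the threshold."""
    for i, frame in enumerate(frames):
        score = classify(frame, reference)
        if score > DEFECT_THRESHOLD:
            yield i, score

# Toy data: a flat gray reference, one clean frame, one frame with a
# small bright anomaly standing in for a near-pixel-scale flaw.
reference = [128] * 64
good = [128] * 64
bad = [128] * 56 + [255] * 8

flagged = list(inspect([good, bad], reference))  # only the bad frame is flagged
```

The point of running this at the edge is that every frame is scored locally, in real time, without ever crossing the network.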

This saves capital costs, labor costs, and rework costs. It helps manufacturers become more competitive in their economic environment. And as a huge benefit for the planet, it creates a much smaller ecological footprint for factories by reducing waste.

But all this requires an enormous amount of data, which would be far too expensive to send over the network for analysis in the cloud and then wait for results to come back before taking action. Data volume becomes the barrier when relying on the cloud. Training an AI model or algorithm in the cloud takes a significant amount of data, but the sensors generate thousands of times that amount. Sending everything from the sensors to the cloud could significantly increase your network infrastructure costs. Meanwhile, not all data carries the same value: an image of a defective product is far more valuable for training an AI model than an image of a normal one. Add to that the longer decision times and the greater security risks that come with transmitting all that data to the cloud.
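One way to read those economics: an edge node can run inference locally and upload only the rare, high-value frames (the suspected defects) for cloud-side model retraining. A toy back-of-the-envelope sketch of that filtering policy, with illustrative numbers that are assumptions rather than Intel figures:

```python
# Hypothetical edge-side filtering policy: inference stays local, and only
# frames flagged as likely defects are uploaded, since those are the most
# valuable for retraining. All numbers are illustrative assumptions.

FRAME_BYTES = 2_000_000   # assume ~2 MB per high-definition frame
DEFECT_RATE = 0.01        # assume 1% of frames show a suspected defect

def upload_plan(frames_per_day, defect_rate=DEFECT_RATE):
    """Compare bytes uploaded: send-everything vs. send-flagged-only."""
    send_all = frames_per_day * FRAME_BYTES
    send_flagged = int(frames_per_day * defect_rate) * FRAME_BYTES
    return send_all, send_flagged

all_bytes, flagged_bytes = upload_plan(100_000)
reduction = 1 - flagged_bytes / all_bytes  # fraction of bandwidth avoided
```

Under these assumed numbers, forwarding only flagged frames cuts network traffic by 99%, which is the kind of cost argument McCarson describes for processing at the edge.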

“There are a lot of scenarios where it just makes good economic sense to process as much information as possible right at the edge,” McCarson says. “You reduce network costs, reduce the amount of data center volume you have to pay for, and only store the data that is most critical to managing your industrial applications, managing your factory, or managing your quality control process.”

Real-world results

Work in the automotive sector has been a proving ground for using edge computing and AI on the factory floor. Cars and their various components are required to be reliable for 100,000+ miles within just a few years, and to withstand harsh stop-and-go conditions, quick cold starts, hot starts, and more.

“We’ve been able to see some real-world examples of using the value of compute, a high definition camera, and a continuous stream of machine data or time series data,” says McCarson. “We’re finding that these automotive parts have really tight manufacturing specifications, things that the human eye can’t detect when there are variations. But a camera can. AI systems can.”

AI quality control systems on production lines have improved manufacturing productivity significantly: in the right circumstances, they have been shown to detect up to 99% of all the defects coming off a machine, whereas human eye inspections might catch only a small fraction of those defects, he says.

“And if you look at the contribution of factories toward the greenhouse gas emissions that are driving global warming, if we can make a small improvement in manufacturing efficiency and reduce the number of wasteful reworks by having AI systems help us detect even just the simplest manufacturing defects, we can drive a very significant and meaningful benefit to the ecology of our planet,” McCarson adds.

Implementing AI

AI is an essential tool for business and industry, with tremendous benefits, but companies need to start AI with scalability in mind, McCarson says. Lots of companies out there offer a quick fix to specific challenges, but a look under the hood of that solution shows a lot of hard-coding, or a lot of severe restrictions on how it can be used, or both.

Data scientists are very expensive, and hard to come by — all the more reason that AI needs to be made easier and more scalable. Factories can’t afford to have a custom model or algorithm for every machine in a factory. Owners can’t even afford to have custom models and algorithms developed for every factory, if they own many of them.

And if you start with the assumption that your business is likely to change in six months, 12 months, or two years, you need to ask yourself: Is this a scalable capability? Does it use a communication protocol that will transfer easily to the other machines, equipment, and software in my factory? Will it be relatively low maintenance? Did anyone think about scalability, or ease of future upgrades, in its design?

“If they haven’t, then you run the risk of having a quick fix that you find breaks within a few months, and then you’re struggling to find someone who can fix it,” he says. “You’re reintroducing that same expense a second or third time as you try to get it right. You have to plan for scalability in the design of your models and algorithms, if you really expect them to pay off.”

Learn more from Brian McCarson at Transform, the virtual AI event for enterprise execs, July 15-17. Brian will be speaking on Friday, July 17 with VentureBeat CEO Matt Marshall. Check out the full agenda here.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact