Algorithms have always been at home in the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI’s leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.
To traditional AI specialists, deep learning (DL) is old hat. It had its breakthrough in 2012, when Alex Krizhevsky's AlexNet, a convolutional neural network (the hallmark of deep learning technology), won the ImageNet competition by a wide margin. Neural networks are what have allowed computers to see, hear and speak. DL is the reason we can talk to our phones and dictate emails to our computers. Yet DL algorithms have always played their part in the safe, simulated environment of the digital world. Pioneering AI researchers are now working hard to introduce deep learning to our physical, three-dimensional world. Yep, the real world.
Deep learning could do much to improve your business, whether you are a car manufacturer, a chipmaker or a farmer. Although the technology has matured, the leap from the digital to the physical world has proven to be more challenging than many expected. This is why we’ve been talking about smart refrigerators doing our shopping for years, but no one actually has one yet. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions, there is more than one challenge to overcome.
The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That doesn’t quite cut it in the real world. “If a tomato harvesting robot sees only 80% of all tomatoes, the grower will miss 20% of his turnover,” says Albert van Breemen, a Dutch AI researcher who has developed DL algorithms for agriculture and horticulture in The Netherlands. His AI solutions include a robot that cuts leaves off cucumber plants, an asparagus harvesting robot and a model that predicts strawberry harvests. His company is also active in the medical manufacturing world, where his team created a model that optimizes the production of medical isotopes. “My customers are used to 99.9% accuracy and they expect AI to do the same,” Van Breemen says. “Every percent of accuracy loss is going to cost them money.”
To achieve the desired levels, AI models have to be retrained all the time, which requires a flow of constantly updated data. Data collection is both expensive and time-consuming, as all that data has to be annotated by humans. To solve that challenge, Van Breemen has outfitted each of his robots with functionality that lets it know when it is performing either well or badly. When making mistakes, the robots will upload only the specific data where they need to improve. That data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen’s team only gets a hundred or so, which are then labeled and tagged and sent back to the robots for retraining. “A few years ago everybody said that data is gold,” he says. “Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just collecting lots of data, but the right kind of data.”
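The fleet-wide filtering described above can be sketched in a few lines. This is a hypothetical illustration, not the company's actual code: `run_detector` and the 0.8 confidence threshold are invented assumptions standing in for whatever self-assessment the robots use.

```python
# Hypothetical sketch: instead of uploading every frame, a robot keeps only
# the frames its detector is unsure about, so humans annotate ~100 hard
# cases rather than thousands of easy ones.

def select_for_annotation(frames, run_detector, confidence_threshold=0.8):
    """Return only the frames containing at least one low-confidence detection."""
    hard_cases = []
    for frame in frames:
        detections = run_detector(frame)  # assumed to yield dicts with a "confidence" key
        if any(d["confidence"] < confidence_threshold for d in detections):
            hard_cases.append(frame)
    return hard_cases
```

Run across the whole fleet, a filter like this turns the haystack into the small pile of "right kind of data" Van Breemen describes.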
His team has developed software that automates the retraining of new experiences. Their AI models can now train for new environments on their own, effectively cutting out the human from the loop. They’ve also found a way to automate the annotation process by training an AI model to do much of the annotation work for them. Van Breemen: “It’s somewhat paradoxical because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model with a much smaller data size than our goal model. The annotation model is less accurate and can still make mistakes, but it’s good enough to create new data points we can use to automate the annotation process.”
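The annotation-model idea is essentially pseudo-labeling: a smaller, less accurate model proposes labels, and only its confident ones are kept as new training data for the goal model. A minimal sketch, with an invented `annotation_model` interface and threshold:

```python
# Hypothetical pseudo-labeling step: the small annotation model labels new
# images; only high-confidence labels become training data for the larger
# goal model. Interface and threshold are illustrative assumptions.

def pseudo_label(images, annotation_model, min_confidence=0.9):
    """Keep only (image, label) pairs the annotation model is confident about."""
    labeled = []
    for image in images:
        label, confidence = annotation_model(image)
        if confidence >= min_confidence:
            labeled.append((image, label))
    return labeled  # feeds automated retraining of the goal model
```

The confidence cutoff is what makes the paradox workable: the annotation model can be wrong sometimes, as long as the pairs it passes through are mostly right.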
The Dutch AI specialist sees huge potential for deep learning in the manufacturing industry, where AI could be used for applications like defect detection and machine optimization. The global smart manufacturing industry is currently valued at $198 billion and has a predicted growth rate of 11% through 2025. The Brainport region around the city of Eindhoven, where Van Breemen’s company is headquartered, is teeming with world-class manufacturing corporates, such as Philips and ASML. (Van Breemen has worked for both companies in the past.)
The sim-to-real gap
A second challenge of applying AI in the real world is the fact that physical environments are much more varied and complex than digital ones. A self-driving car that is trained in the US will not automatically work in Europe with its different traffic rules and signs. Van Breemen faced this challenge when he had to apply his DL model that cuts cucumber plant leaves to a different grower’s greenhouse. “If this took place in the digital world I would just take the same model and train it with the data from the new grower,” he says. “But this particular grower operated his greenhouse with LED lighting, which gave all the cucumber images a bluish-purple glow our model didn’t recognize. So we had to adapt the model to correct for this real-world deviation. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world.”
Van Breemen calls this the “sim-to-real gap,” the disparity between a predictable and unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned AI researcher from Stanford and cofounder of Google Brain who also seeks to apply deep learning to manufacturing, speaks of “the proof-of-concept-to-production gap.” It’s one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your data set is one way to solve the problem. The traditional view in AI was to focus on building a good model and let the model deal with noise in the data. However, in manufacturing a data-centric view may be more useful, since the data set size is often small. Improving the data will then immediately improve the overall accuracy of the model.
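One concrete data-centric step (an illustrative example, not Ng's specific method) is hunting for inconsistent labels: on small manufacturing data sets, the same part photographed twice and labeled differently does real damage, and fixing a handful of such conflicts can move accuracy more than tuning the model.

```python
# Illustrative data-cleaning step: group examples by input and flag inputs
# that carry conflicting labels, so a human can resolve them before training.
from collections import defaultdict

def conflicting_labels(dataset):
    """dataset: iterable of (input_id, label) pairs.
    Returns {input_id: set_of_labels} for inputs labeled inconsistently."""
    by_input = defaultdict(set)
    for x, y in dataset:
        by_input[x].add(y)
    return {x: labels for x, labels in by_input.items() if len(labels) > 1}
```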
Apart from cleaner data, another way to bridge the sim-to-real gap is by using CycleGAN, an image-translation technique that connects two different domains, made popular by aging apps like FaceApp. Van Breemen’s team researched CycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a DL algorithm based on CycleGAN that translated images from the real world (three real cameras observing a real robotic arm picking up a real object) into simulated images, which could then be used to retrain the simulated model. Van Breemen: “A robotic arm has a lot of moving parts. Normally you would have to program all those movements beforehand. But if you give it a clearly described goal, such as picking up an object, it will now optimize the movements in the simulated world first. Through CycleGAN you can then use that optimization in the real world, which saves a lot of man-hours.” Each separate factory using the same AI model to operate a robotic arm would have to train its own CycleGAN to tweak the generic model to suit its own specific real-world parameters.
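What makes CycleGAN work without paired real/simulated images is cycle consistency: two generators (real-to-sim and sim-to-real) are trained so that translating an image there and back reproduces the original. The toy functions below stand in for trained networks; this is a conceptual sketch of the loss, not a working CycleGAN.

```python
# Conceptual sketch of CycleGAN's cycle-consistency idea. In practice the
# two generators are neural networks trained jointly with adversarial
# losses; here they are arbitrary callables on lists of pixel values.

def cycle_consistency_loss(image, g_real_to_sim, g_sim_to_real):
    """L1 distance between an image and its round-trip translation.
    Training pushes this toward zero so translations preserve content."""
    round_trip = g_sim_to_real(g_real_to_sim(image))
    return sum(abs(a - b) for a, b in zip(image, round_trip))
```

A factory-specific CycleGAN, as described above, amounts to fitting this pair of generators to that factory's own cameras and lighting, while the generic control model stays unchanged.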
The field of deep learning continues to grow and develop. Its new frontier is called reinforcement learning. This is where algorithms change from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, like moving a robotic arm to fold a box. A reinforcement learning algorithm could discover more efficient ways to fold boxes outside its preprogrammed range.
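The trial-and-error core of reinforcement learning can be shown with a toy learner: instead of executing one preprogrammed folding routine, the agent tries several strategies, observes a reward, and converges on the best one. The strategies and reward values below are invented for illustration.

```python
# Toy epsilon-greedy learner: pick among a fixed set of strategies, observe
# a noisy reward, and update a running average per strategy. Real robotic RL
# uses far richer state and policies; this only shows the learning loop.
import random

def learn_best_strategy(true_rewards, episodes=2000, epsilon=0.1, seed=0):
    """Return the index of the strategy the agent learns to prefer."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)
    counts = [0] * len(true_rewards)
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))          # explore
        else:
            action = max(range(len(true_rewards)), key=lambda i: estimates[i])  # exploit
        reward = true_rewards[action] + rng.gauss(0, 0.1)      # noisy feedback
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(range(len(true_rewards)), key=lambda i: estimates[i])
```

The exploration step (epsilon) is what lets the agent stumble onto a strategy "outside its preprogrammed range" that a fixed script would never try.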
It was reinforcement learning (RL) that made an AI system beat the world’s best Go player back in 2016. Now RL is also slowly making its way into manufacturing. The technology isn’t mature enough to be deployed just yet, but according to the experts, this will only be a matter of time.
With the help of RL, Albert van Breemen envisions optimizing an entire greenhouse. This is done by letting the AI system decide how the plants can grow in the most efficient way for the grower to maximize profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays around with different growth variables like temperature, humidity, lighting and fertilizer, and then chooses the scenario where the plants grow best. The winning scenario is then translated back to the three-dimensional world of a real greenhouse. “The bottleneck is the sim-to-real gap,” Van Breemen explains. “But I really expect those problems to be solved in the next five to ten years.”
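At its simplest, "trying out thousands of scenarios in simulation" is a search over growth variables scored by a simulator. The sketch below uses a brute-force grid search and a placeholder profit function; the real system would use a learned policy and a far richer plant model.

```python
# Hedged sketch of scenario search: enumerate combinations of growth
# variables, score each with a (placeholder) profit simulator, keep the best.
import itertools

def best_scenario(simulate_profit, temperatures, humidities, light_levels):
    """Return the (temperature, humidity, light) combination with highest
    simulated profit. simulate_profit is assumed to take one such tuple."""
    scenarios = itertools.product(temperatures, humidities, light_levels)
    return max(scenarios, key=simulate_profit)
```

Only the winning tuple ever crosses back into the real greenhouse, which is exactly where the sim-to-real gap Van Breemen mentions becomes the bottleneck: the simulator's best scenario is only as good as the simulator.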
As a trained psychologist I am fascinated by the transition AI is making from the digital to the physical world. It goes to show how complex our three-dimensional world really is and how much neurological and mechanical skill is needed for simple actions like cutting leaves or folding boxes. This transition is making us more aware of our own internal, brain-operated ‘algorithms’ that help us navigate the world and which have taken millennia to develop. It’ll be interesting to see how AI is going to compete with that. And if AI eventually catches up, I’m sure my smart refrigerator will order champagne to celebrate.
Bert-Jan Woertman is the director of Mikrocentrum.