One of the earliest use cases in both AR and VR was using these new mediums as a canvas for creation. Google released Tilt Brush for painting in 3D space in VR and the “Just a Line” app for drawing in 3D space with its AR framework, ARCore. Magic Leap also launched its highly anticipated mixed reality glasses with a similar app, Create.
All these apps show a potential use case and a new way for us to create digital content in the age of AR/VR. But the question is: Will we ditch our old 2D ways and shift to 3D mediums like AR/VR? This question matters because it has some serious believers, like Elon Musk, who argue it is more “natural” for us to create in 3D environments since we live in a 3D world.
Four years ago, even before the latest wave of VR hype, Musk published a video on this subject and said that SpaceX was experimenting with new styles of designing in 3D using tools of the time like Leap Motion and Microsoft Kinect. He said they tried this approach because it is not “natural” for us to design in 2D environments like computer screens when we live in a 3D world.
Others, like Meron Gribetz, CEO of Meta, believe in this concept as well; he envisions a future where we design, and even carry out our normal daily tasks, in 3D space. You can see him describe this concept in his TED talk.
Designing vs. drawing
Companies like Google and Magic Leap have chosen a different approach from concepts like Musk’s: their apps are aimed at creative artists. The artistic creations built with Google Tilt Brush are amazing and show that AR/VR holds the potential for a completely new medium for artists, while people like Musk are looking for ways to push this concept into industrial design use cases.
But there’s a very important difference between artistic creation and industrial design. In artistic creation you don’t have to worry about precision; you dance with your designs. In industrial and enterprise design, we talk about millimeter and pixel accuracy, and designers have to be absolutely precise.
If you have ever designed in an application like Tilt Brush, or tried to grab an object while wearing a Meta 2 AR headset, you know it is very hard to keep all three dimensions under control and still be precise. It is somewhat like modeling exclusively in the “perspective” view of 3D modeling software like 3ds Max.
What enterprises are doing now
Though we are still very early in AR/VR, enterprise companies in industrial fields have adopted these technologies and are using them to their benefit, but not as a way to completely shift their current procedures to these mediums. Big industrial companies like Boeing, Ford and Audi currently use AR/VR to showcase and review the designs they have created in their traditional 3D design software, getting a more tangible feel for the final product before the production process.
AR/VR doesn’t replace their previous processes; it augments them. Viewing designed models in the real or virtual world lets you see how they will look after production, and even compare them to the manufactured object to spot any differences. But these mediums are not well suited to doing the design work itself, because you simply can’t manage a 3D environment and also design precisely. So companies use the two mediums as complements, taking advantage of both to optimize their current procedures.
The difference in input devices
The tools designers use for input in these two worlds are also very different. In 2D worlds and on flat screens, designers usually use a combination of a mouse or stylus and a keyboard, while in AR/VR and 3D environments, at least for now, they use controllers and hand gestures.
While a mouse gives us precise control, down to a pixel, it is very hard to be precise with an HTC Vive controller, for example. A mouse has only two axes; with a Vive controller you also need to be aware of depth, the third axis, which makes it significantly harder to design precisely. Though AR/VR input tools are getting more accurate and advanced very quickly, the nature of working and designing in a 3D world won’t change, and it will never be as accurate as working in 2D.
More is not better
Having more parameters is not always good. Have you ever seen a 3D designer modeling entirely in perspective mode? No, because it is too hard. This problem has already been solved by taking away the third dimension: 3D modeling software presents different angles and views of the model simultaneously, so modelers can work on their designs precisely.
So adapting to our “natural” behaviors, like designing in 3D, is not always the best way to do things. That’s why we humans usually build models to describe phenomena and theories by reducing the number of parameters at hand.
Prototyping in AR/VR
Recently, Silicon Valley AR/VR startups like Torch3D, MomentXR and wiARframe have been working to let users design in 3D. However, these serve only as prototyping tools because, as we discussed, it is neither efficient nor precise to carry out the entire design and creation process in these 3D mediums.
I recently had Paul Reynolds, CEO of Torch3D, on my AllThingsXR podcast. Paul was previously a senior director at Magic Leap and has a reputation as “the man who came from the future” because he has seen all the technology in development there. Even he believes these 3D mediums are only good as prototyping tools, and that to do production design work we need to be much more precise, taking some parameters out of our equations and working in 2D views.
So what will happen in the age of spatial computing and AR/VR? Will we ditch our 2D screens, turn our workspaces into something like Iron Man’s garage, and use holograms and 3D content all the time? The answer is no. In this new era we will neither stay completely with our flat 2D screens nor shift completely to holograms and 3D workspaces. We will use these mediums as complements to each other, combining both to get the best of each.