It’s hard to believe that the computer graphics conference Siggraph is celebrating its 46th birthday this year, but the annual event certainly doesn’t show any signs of middle age. Held this week at the Los Angeles Convention Center, Siggraph 2019 is all about the future of 2D and 3D digital worlds, attracting everyone from luminaries in pre-rendered CG to budding AR developers and VR artists.
Siggraph’s exhibition area opens today, adding to educational sessions that have been in progress since this weekend, and an “experiences” area that opened yesterday. I had the opportunity to attend the show’s official media preview and go hands-on with a bunch of this year’s most exciting innovations; here’s a photo tour of some of the best things I saw and tried.
Biggest wow moment: Il Divino – Sistine VR
There’s no shortage of sophisticated mixed reality hardware at Siggraph, but I was most impressed by a piece of software that really demonstrated VR’s educational and experiential potential. Christopher Evans, Paul Huston, Wes Bunn, and Elijah Dixson exhibited Il Divino: Michelangelo’s Sistine Ceiling in VR, an app that recreates the world-famous Sistine Chapel within the Unreal Engine, then lets you experience all of its artwork in ways that are impossible for tourists at the real site.
The app begins with a modestly impressive ground-level recreation of the Chapel. Epic’s Unreal Engine lets you see realistic marble barriers and ceramic floor tiles if you look closely, and you’ll have no trouble making out the individual paintings as you approach them, though you won’t confuse the faux wall curtains or other elements with reality. Even so, Il Divino’s developers provide an impressive audio tour with light visual guidance through the space, making the most of an interface that’s largely about teleporting from place to place within a large box and looking at art.
But everything changes when the app opens up access to a mechanical lifter and wooden scaffolding that elevate you to the Chapel’s ceiling. All of a sudden, you can control your up-close views of the paintings, and experience Michelangelo’s masterpiece Creation of Adam from the same perspective as the painter himself. The developers use VR — including your own fatigue after a comparatively brief session — to suggest how difficult the act of painting for hours (and months) on end must have been, while offering insights into the pacing and order of the works.
There are thousands of eye-melting VR experiences out there, and an equal number of dull “educational” ones. Il Divino succeeds because it’s hyper-realistic in a different way, using virtual reality to both simulate and go past the original experience, enabling a form of education that feels more open to personal exploration. It will be available for free later this year from SistineVR.com.
Cinema, group and individual
It wouldn’t be Siggraph without an exhibition of computer-generated movies, and the VR Theater at this year’s show is worth seeing. Here, Disney’s Bruce Wright welcomes early visitors to check out the company’s brand new short, A Kite’s Tale, an upbeat cartoon about hand-drawn 2D animation and computer-rendered 3D living side-by-side.
Fifty guests at a time are welcomed into the venue to see a collection of five different real-time CG shorts developed by separate studios, including A Kite’s Tale, Faber Courtial’s impressively realistic space odyssey 2nd Step, and Baobab’s charming interactive Bonfire.
Once inside, viewers are seated in chairs with individual VR headsets, headphones, and two controllers, collectively experiencing the five shorts over a roughly one-hour session. A bank of high-end PCs sits in the center of the room, powering and synchronizing the experiences, though there’s little ongoing sense of collaboration between participants. Instead, it’s a VR theater where everyone’s watching pretty much the same thing, albeit from whatever angle each viewer’s head happens to be facing, and — in some cases — with differences attributable to the shorts’ interactive elements.
In a smaller room elsewhere at Siggraph, New York-based Parallux is offering a more clearly shared experience. The company has developed a short story for group viewing that’s akin to watching a Broadway show with friends, but you could be watching it from anywhere.
Here, Parallux CEO Sebastian Herscher gestures towards a table surrounded by Magic Leap AR headsets, which seated viewers use to watch Mary and the Monster, a unique spin on Mary Shelley’s creation of the Frankenstein story. Strong voice acting and solid motion capture bring the animated experience to life within a diorama-like stage setting. Magic Leap wearers can use their controllers as magnifiers to zoom in on the individual actors, akin to opera glasses.
Each viewer sees the play-like performance appearing on the same table, and it’s synchronized across all of the headsets at once; it can also be watched using VR headsets, and can be scaled to fairly large local or remote audiences. This is a glimpse into what could be the future of plays, experienced holographically and from any seat in the house you prefer.
Apart from the examples above, most of the VR displays I saw at Siggraph were focused on individual experiences. One interesting exhibit, MIT Media Lab’s Deep Reality, used live heart rate, electrodermal activity, and brain activity monitoring to create an intensely personal relaxation and reflection experience. After someone lies down and dons a VR headset, Deep Reality uses “almost imperceptible light flickering, sound pulsations, and slow movements of underwater 3D creatures [to] reflect the internal state of the viewer.” Who wouldn’t love to kick back and relax to something so personally attuned at home?
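MIT hasn’t published the exact mapping Deep Reality uses, but the general idea of a biofeedback loop — driving a near-imperceptible visual modulation from a live physiological signal — can be sketched in a few lines. The function name, parameters, and modulation depth below are illustrative assumptions, not the lab’s actual code:

```python
import math

def flicker_brightness(t, heart_rate_bpm, base=0.5, depth=0.05):
    """Modulate scene brightness with a subtle sinusoid whose
    frequency tracks the wearer's live heart rate -- a sketch of
    the kind of biofeedback mapping Deep Reality describes.

    t: elapsed time in seconds
    heart_rate_bpm: most recent heart-rate reading
    base: resting brightness; depth: modulation amplitude
    """
    beats_per_second = heart_rate_bpm / 60.0
    return base + depth * math.sin(2 * math.pi * beats_per_second * t)
```

In a real renderer, each frame would sample the sensor, then scale scene lighting (or sound amplitude, or creature movement speed) by the returned value, so the environment visibly — but almost imperceptibly — breathes along with the viewer.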
Next-generation AR eyewear
Two of Siggraph’s most notable hardware exhibits were Nvidia’s new prescription AR eyewear and foveated AR headset — both still in research stages, but available to test with prototypes. The prescription AR glasses offered a vision-corrected, see-through AR display solution, including a demo of how the lenses let viewers see optically sharp projections that appear to float within the real world.
In prototype form, the glasses had small, clear ribbons in front of the lenses that displayed projected virtual images such as colored bottles or an Nvidia logo. They didn’t require cables, and were as lightweight as today’s inexpensive plastic glasses.
A separate demo showed off Nvidia’s work on a Foveated AR Display, which the company suggests will use gaze tracking to enable multi-layer depth in AR images. In the image below, you can see how a specific small gaze area tracked by the headset becomes sharper to your eye as the background becomes softer and less detailed.
Nvidia is touting the Foveated AR Display as a “dynamic fusion of foveal and peripheral display,” and releasing a research paper to accompany the project. It’s unclear when the technology will actually appear in a shipping product, but it’s interesting to see Nvidia diving deeper into the AR world at this stage.
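The core trick behind foveated rendering is simple to state: keep full detail only in the small region the eye is actually fixated on, and let everything else degrade, since peripheral vision can’t resolve it anyway. As an illustrative sketch (not Nvidia’s actual pipeline — their display does this optically with a multi-layer design), here’s the idea expressed as a blend between a sharp and a blurred image, weighted by distance from the tracked gaze point:

```python
import numpy as np

def foveate(image, gaze_xy, fovea_radius):
    """Keep the region around the gaze point sharp and blend toward
    a blurred copy elsewhere -- the basic principle of foveated
    rendering (illustrative sketch only)."""
    h, w = image.shape
    # Crude blur: average each interior pixel with its 4 neighbors.
    blurred = image.copy()
    blurred[1:-1, 1:-1] = (
        image[:-2, 1:-1] + image[2:, 1:-1] +
        image[1:-1, :-2] + image[1:-1, 2:] + image[1:-1, 1:-1]
    ) / 5.0
    # Radial weight: 1.0 inside the fovea, falling linearly to 0 outside.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    sharp_weight = np.clip(
        1.0 - (dist - fovea_radius) / fovea_radius, 0.0, 1.0
    )
    return sharp_weight * image + (1.0 - sharp_weight) * blurred
```

A shipping headset would do this per frame with the eye tracker’s latest fixation point, which is why low-latency gaze tracking is the hard part of the problem: if the sharp region lags the eye, the viewer notices immediately.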
Next-generation haptics and immersion
Some of the other innovations at Siggraph are wild, if not crazy. For instance, Taipei Tech is showing off LiquidMask, a briefcase-sized face haptic solution that lets your face feel hot and cold liquid sensations in VR.
LiquidMask can deliver liquid-based feedback at temperatures between 68 and 97 degrees Fahrenheit, potentially useful for underwater VR experiences — assuming, of course, that you’re willing to hook yourself up to something as large as this to experience those sensations.
Another company was taking steps towards a very different type of future with a gigantic prosthetic tail — something that one wouldn’t have expected to find at Siggraph. The tail can be used to augment someone’s existing sense of balance with a third stabilizing limb, or disrupt their balance for exercise or other purposes.
The prototype tail uses pneumatics, relying on a separate cabled air tank for motion, so there’s no need to worry about an imminent attack by The Lizard or Doctor Octopus. If it can be made portable (and quiet), it might wind up being useful for people with physical disabilities or motor limitations.
More small steps for Magic Leap
Magic Leap is offering two main demos at Siggraph’s “experience” area. Long lines were forming to try Mica, a demo of the company’s AI assistant, which presently can’t do much. Mica looks like a pixie-haired human woman, and at some point, will supposedly be able to speak with and guide headset wearers.
In the demo, you can look at her as she looks back at you, then silently follow her gestures to make an artistic collage together. It’s not particularly exciting stuff at this stage, but in a world where digital assistants such as Siri can spend years delivering hit-and-miss experiences, Magic Leap may well beat Apple to delivering a more compelling, fully-formed alternative.
Magic Leap’s other new demo, Undersea, lets users interact with a nearly photorealistic coral reef that appears within any room you choose, along with a picture-sized portal window into the ocean on the wall. In addition to letting you walk around and view a piece of coral and a small collection of fish, the demo lets you hold out your hand to generate bubbles and hold a fish in your palm, albeit with so-so tracking.
While the Siggraph demo is designed for a two-minute experience (and isn’t especially compelling), a full version of Undersea with more settings and depth has just been released for Magic Leap users. Regardless of how many or few of the $2,300 Magic Leap headsets have been sold, it’s clear at Siggraph that the company is working to actively push the platform forward.
Best of the rest
One of Siggraph’s greatest strengths is the diversity of computer-generated art it brings into focus for attendees. You might not love all of it, but even some of the most basic concepts are thought-provoking.
John Wong’s RuShi interactive art exhibit above uses your birthdate and birth hour to generate, through some unspecified mechanism, a moving and colorful AI-based data flow that is presented on the central screen while prior users’ data appears on adjacent screens. It’s supposed to make you consider the amount of data about you that’s already being processed by AI in the real world, and whether that processing has any value.
A new Siggraph-wide focus on adaptive technology includes multiple Microsoft adaptive controllers, a touchscreen presentation of different adaptive technologies, and 11 sessions and talks on the subject.
Last but not least, David Shorey’s booth demonstrated the use of 3D printers to create real-world physical clothing that looks straight out of video games and fantasy settings, including dragon scale-like fabrics that could be used for cosplay. His techniques yielded an incredible collection of different textures, surface treatments, and end products that look set to merge the worlds of CG and real-world fashion.
The future’s already here
My biggest takeaway from Siggraph 2019 is that the CG future some of us were expecting a decade or more ago is already here — if you know where to look. VR and AR aren’t ubiquitous at this point, but it’s obvious from this show that there are lots of smart people working to evolve CG from its early 2D roots into genuinely immersive, interactive 3D.
Attendees could spend nearly a week at Siggraph without fully grasping everything that’s underway with huge companies such as Disney and tiny groups of researchers across the world. Scenes like the one below, where a group of people are all sharing a computer-generated entertainment experience in VR, have become table stakes for VR as of 2019.
The question is “where does it go from here,” and there’s not just one good answer. If anything, Siggraph shows how many directions CG is heading in, and the reason is simple: Hugely talented and creative people are now heavily invested in the futures of these technologies. At this point, the challenge is to polish and spread their ideas to as many people as possible, bringing what’s currently in the Los Angeles Convention Center out to everyone’s homes and public spaces.