Watching Refik Anadol’s work — what he calls “machine hallucinations” — can make you feel a little like your mind is melting. Anadol creates stories using curated generative adversarial networks, a process he believes is inventing a new type of AI-driven cinema that displays a “community collective consciousness.”
“How it works is [that] we can roughly see the commonality of consciousness, or commonality of the memory inside the latent space, and I personally fly as … a director or director of photography … and define points of interest that are narrative and allow me to make much more purposeful decisions and use AI to tell a story,” he said.
Before launching into work combining historic and modern images, Anadol honed his machine learning chops while serving as an artist in residence at Google.
Currently, Anadol and a team of 12 based in Los Angeles collect hundreds of thousands of historic images and modern-day photos from publicly available sources like social media and archives to create their works. Current audio recordings from local streets are also used to bring sight and sound into what Anadol refers to as latent cinema, in which buildings recreate themselves.
The team’s most recent machine hallucination project — Latent History — opens Saturday. This piece generates imagery from a data set of 300,000 photos, including 150-year-old Stockholm city archives and colorful images taken from the same location within the past 15 years.
Another exhibit that uses similar techniques with more than 100 million images opens in New York City in September. This work will use 18 projectors and images from sources like the New York City Public Library and the Library of Congress.
To create their models, Anadol’s studio uses Nvidia GPUs and applies Nvidia’s StyleGAN and progressive growing GAN (PGGAN) architectures.
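A common workflow with StyleGAN-family models, and one way to read Anadol’s description of “flying” through latent space, is to interpolate between latent vectors and decode each point along the path into a frame. The sketch below shows spherical linear interpolation (slerp), the interpolation typically used with Gaussian GAN latents; the `generator` mentioned in the comment is hypothetical, since the studio’s actual pipeline is not public.

```python
import math
import random

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(z0, z1))
    n0 = math.sqrt(sum(a * a for a in z0))
    n1 = math.sqrt(sum(b * b for b in z1))
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    if omega < 1e-8:  # vectors nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(z0, z1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(z0, z1)]

random.seed(0)
z_start = [random.gauss(0, 1) for _ in range(512)]  # StyleGAN samples 512-dim latents
z_end = [random.gauss(0, 1) for _ in range(512)]
path = [slerp(z_start, z_end, t / 7) for t in range(8)]  # 8 evenly spaced latents
# In a real pipeline, each latent in `path` would be decoded by a pretrained
# generator, e.g. frame = generator(z) -- hypothetical, for illustration only.
```

Rendering the decoded frames in sequence produces the smooth morphing between “memories” that characterizes this kind of latent cinema.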
The team uses classifiers to remove all elements of people from images in order to better see the environment and “reconstruct common memories for humanity.”
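The filtering step the team describes amounts to running a classifier over the dataset and discarding any image flagged as containing a person. The article does not say which detector the studio uses, so the sketch below stands in a stub predicate for a pretrained person detector; only the filtering logic is illustrated.

```python
def scrub_dataset(images, contains_person):
    """Keep only the images the classifier says contain no people."""
    return [img for img in images if not contains_person(img)]

# In practice `contains_person` would wrap a pretrained detector or segmentation
# model; here a stub based on a metadata flag stands in for illustration.
sample = [
    {"id": 1, "person": True},   # would be dropped
    {"id": 2, "person": False},  # would be kept
]
kept = scrub_dataset(sample, lambda img: img["person"])
```

The same predicate-based filter could be extended to strip logos or other unwanted elements by swapping in a different classifier.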
“We intentionally detach ego, so there’s no human in the photos. There are no logos; there’s pure nature, urban space, buildings, architecture, streets, the space that exists without any human interaction,” he said.
Latent History is not Anadol’s first time creating historic art with generative adversarial networks (GANs). For the Los Angeles Philharmonic, the studio collected photos going back 100 years to depict hallucinations on the walls of the Walt Disney Concert Hall.
“In there, we let the building hallucinate its own future. We let Frank Gehry’s Disney hall look at its own memories, and we let the building dream,” he said.
Anadol also did a project called “Archive Dreaming” that draws on 1.7 million documents from a public cultural archive to create an immersive environment.
For another hallucinogenic art project, called “Melting Memories,” Anadol and his team worked with large data sets.
In recent AI and art news, Google’s Magenta project produced ML-JAM, a model that challenges musicians to improvise and find new creative sounds.
Earlier this week, Google Lens began identifying the works of local artists. Google Assistant’s computer vision can already identify some popular landmarks, but giving people the ability to learn about a local statue or mural could help them feel more connected to their community.