Artificial intelligence (AI) and art are less diametrically opposed than you might think. In fact, autonomous systems are already working in lockstep with artists to generate holiday songs, paint canvases auctioned at Christie’s, and craft colorful logos. And now, a software developer has harnessed AI’s generative powers to manipulate contrast, color, and other attributes in images.
Holly Grimm, a graduate of OpenAI’s Scholars program, describes her work in a preprint paper published on Arxiv.org (“Training on Art Composition Attributes to Influence CycleGAN Art Generation”).
The foundation of Grimm’s AI model is a generative adversarial network (GAN), a two-part neural net consisting of a data-producing generator and a discriminator, the latter of which attempts to distinguish between the generator’s synthetic samples and real-world samples. Grimm selected CycleGAN, a recently introduced approach to learning transformations between two image distributions, as her architecture of choice.
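The generator-versus-discriminator dynamic described above can be sketched on a one-dimensional toy problem. This is a minimal, illustrative GAN, not Grimm’s model: the generator is a single linear map from noise to a scalar, the discriminator is logistic regression, and the “real” data are draws from a Gaussian.

```python
import math
import random

random.seed(0)

def real_sample():
    # "Real" data: scalars drawn from a Gaussian centred at 4.
    return random.gauss(4.0, 0.5)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Generator: maps uniform noise z to a scalar, g(z) = gw*z + gb.
gw, gb = random.random(), random.random()
# Discriminator: d(x) = sigmoid(da*x + dc), probability that x is real.
da, dc = random.random(), random.random()

lr = 0.01
for step in range(2000):
    z = random.uniform(-1, 1)
    fake = gw * z + gb
    real = real_sample()

    # Discriminator update: push d(real) toward 1, d(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(da * x + dc)
        grad = p - label          # derivative of cross-entropy wrt the logit
        da -= lr * grad * x
        dc -= lr * grad

    # Generator update: push d(fake) toward 1 (fool the discriminator).
    p = sigmoid(da * fake + dc)
    grad = (p - 1.0) * da         # chain rule through the discriminator
    gw -= lr * grad * z
    gb -= lr * grad

# After training, sample the generator and inspect its output distribution.
fakes = [gw * random.uniform(-1, 1) + gb for _ in range(500)]
mean_fake = sum(fakes) / len(fakes)
print(mean_fake)
```

As training proceeds, the generator’s samples drift toward the real distribution because fooling the discriminator requires producing values it rates as likely real.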
“CycleGAN’s image-to-image translation takes one set of images and tries to make it look like another set of images,” Grimm explains in a blog post. “The training data is unpaired, meaning there doesn’t need to be an exact one-to-one match between images in the dataset. This [GAN] has been used … to make horses look like zebras and apples look like oranges.”
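What makes unpaired training possible is CycleGAN’s cycle-consistency loss: an image translated to the other domain and back should land near where it started. The sketch below shows only that loss term, with toy stand-in functions for the two generators (in the real model these are neural networks); the names `G`, `F`, and the sample values are illustrative assumptions.

```python
# Hypothetical stand-ins for CycleGAN's two generators: G maps domain A -> B
# (e.g. horses -> zebras) and F maps B -> A. Toy invertible functions let
# the cycle be checked numerically.
def G(x):          # A -> B
    return [2.0 * v + 1.0 for v in x]

def F(y):          # B -> A (here, the exact inverse of G)
    return [(v - 1.0) / 2.0 for v in y]

def l1(p, q):
    # Mean absolute difference between two flattened "images".
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def cycle_consistency_loss(x, y):
    # ||F(G(x)) - x|| + ||G(F(y)) - y||: round-trip translation should
    # reconstruct the input. No paired examples are needed for this term.
    return l1(F(G(x)), x) + l1(G(F(y)), y)

x = [0.2, 0.5, 0.9]   # a flattened "horse" image
y = [0.1, 0.4, 0.8]   # an unrelated, unpaired "zebra" image
loss = cycle_consistency_loss(x, y)
print(round(loss, 6))  # → 0.0, since F exactly inverts G here
```

In a real CycleGAN the generators only approximately invert each other, so this loss is nonzero and is minimized alongside the adversarial losses.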
To craft her model, Grimm combined a ResNet50 network, pretrained on the open source ImageNet database and fine-tuned on 500 images from visual art encyclopedia WikiArt, with a CycleGAN trained on the “apple2orange” dataset. The resulting system, which she dubbed the “Art Composition Attributes Network,” or ACAN, learned to generate images while varying eight compositional attributes: texture, shape, size, color, contrast, repetition, primary color, and color harmony.
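One way to read this architecture is that the attribute network’s predictions on a generated image are folded into the generator’s objective alongside the usual CycleGAN terms. The sketch below shows that idea only; the attribute names come from the article, but the loss form, weights, and numbers are illustrative assumptions, not Grimm’s actual implementation.

```python
# The eight compositional attributes named in the article.
ATTRIBUTES = ["texture", "shape", "size", "color",
              "contrast", "repetition", "primary_color", "color_harmony"]

def attribute_loss(predicted, target):
    # Squared error between the scores the pretrained attribute network
    # predicts on a generated image and the scores the artist asked for.
    return sum((predicted[a] - target[a]) ** 2 for a in ATTRIBUTES)

def generator_loss(adversarial, cycle, attr,
                   lambda_cycle=10.0, lambda_attr=1.0):
    # Standard CycleGAN objective (adversarial + weighted cycle-consistency)
    # plus a weighted attribute term that steers composition. The lambda
    # values here are placeholders, not tuned hyperparameters.
    return adversarial + lambda_cycle * cycle + lambda_attr * attr

predicted = {a: 0.5 for a in ATTRIBUTES}   # network's scores on a fake image
target    = {a: 1.0 for a in ATTRIBUTES}   # desired composition settings
attr = attribute_loss(predicted, target)   # 8 * 0.25 = 2.0
total = generator_loss(adversarial=0.7, cycle=0.1, attr=attr)
print(round(total, 2))  # → 3.7
```

Raising `lambda_attr` would push generated images harder toward the requested composition, at the possible cost of realism from the adversarial term.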
In tests, ACAN successfully translated images dominated by orange hues into new ones with the complementary colors blue and cyan, and abstracted the form, color, and texture of other images. In some generated samples, objects in the reconstructed photos bore little resemblance to those in the source images, a result of tweaks made to contrast, size, and shape.
“Even with a small sample size of 500 images, the CycleGAN with help from the ACAN appears to have been able to distinguish between eight art compositional attributes,” Grimm wrote.
She leaves to future work techniques like attribute activation mapping, which uses a heat map to highlight elements of the images and reveal what the network “sees” for each attribute, and color harmony embeddings, which might help the neural net to learn associations between colors on the color wheel.
OpenAI’s Scholars program, which graduated its first class in September, is open to “people from groups underrepresented in the field,” the organization says. OpenAI, which is based in San Francisco and backed by Elon Musk, Reid Hoffman, and Peter Thiel, among other tech luminaries, plans to release a case study on the first cohort in the coming months to “help other[s] roll out similar initiatives at their own companies.”