Generating logos from whole cloth isn’t exactly novel; startups like Logojoy employ AI to create thousands of banners and branding elements on demand. But in a new paper published on the preprint server arXiv.org, scientists at Maastricht University in the Netherlands propose an AI system that can synthesize logos at much greater resolution and in much finer detail than before.
It builds on LoGAN, the team’s previous logo-crafting machine learning system, which they detailed in a study published last October. Unlike the new and improved algorithm, LoGAN could only create new designs if provided one of a dozen color keywords.
“With most Americans exposed to 4,000 to 20,000 advertisements a day, companies are paying ever-increasing attention to their branding. This puts pressure on designers to come up with aesthetic yet innovative and unique designs in an attempt to set their designs apart from the masses,” wrote the coauthors. “[AI] could assist designers by either providing them with inspiration or by reducing the number of design iterations undergone with clients.”
This latest attempt is a generative adversarial network (GAN): a two-part neural network in which a generator produces samples and a discriminator tries to distinguish those generated samples from real-world samples. To stabilize training, the model started on low-resolution images and had higher-resolution layers added as training progressed. Because the low-resolution images contained less detail, the researchers say, the system was able to learn large-scale patterns quickly and then pivot from coarse to progressively finer detail as the image resolution increased.
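The coarse-to-fine training schedule described above can be sketched in NumPy. This is an illustrative sketch, not the authors' code: `resolution_schedule` and `downsample` are hypothetical helpers that only show why early, low-resolution stages expose large-scale structure.

```python
import numpy as np


def resolution_schedule(start=4, final=400):
    """Resolutions a progressively grown GAN would train at, coarse to
    fine: each stage doubles the size until the final resolution."""
    sizes, size = [], start
    while size < final:
        sizes.append(size)
        size *= 2
    sizes.append(final)
    return sizes


def downsample(img, factor):
    """Block-average an (H, W, C) image; the result is the kind of
    low-detail input an early, low-resolution stage would see."""
    h, w, c = img.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))


# A 400-pixel logo viewed at the coarsest stage: a 4x4 grid in which
# only large-scale color structure survives, not fine detail.
logo = np.random.default_rng(0).random((400, 400, 3))
coarse = downsample(logo, 100)  # shape (4, 4, 3)
```

The schedule mirrors the idea in the paper's description: the network first masters the 4x4 view, then each doubling adds layers that only have to fill in the detail the previous stage could not represent.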
To compile the training data set, the coauthors first prepped samples from the aptly named Large Logo Dataset, a corpus containing over 120,000 unique 400-pixel logos scraped from Twitter. They eliminated every text-based logo, leaving 40,000 logos in total, which they supplemented with 15,000 additional “logo-like” images from Google pertaining to nature, technology, illustrated characters, and other such topics. Then they used Google’s Cloud Vision service to generate four to eight word labels describing each logo’s contents, which they vectorized with a pretrained model to create a spatial representation for each example. These spatial representations were then clustered to group logos with similar visual characteristics.
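The label-embed-cluster pipeline can be sketched as follows. This is a stand-in, not the paper's pipeline: real pretrained word embeddings are replaced by deterministic pseudo-random vectors, and the clustering step is a minimal k-means; `embed_labels` and `kmeans` are hypothetical helpers.

```python
import zlib

import numpy as np


def embed_labels(labels, dim=32):
    """Stand-in for a pretrained embedding model: each label maps to a
    deterministic pseudo-random vector (seeded by a CRC of the text),
    and a logo's representation is the mean of its label vectors."""
    vecs = [np.random.default_rng(zlib.crc32(w.encode())).standard_normal(dim)
            for w in labels]
    return np.mean(vecs, axis=0)


def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means with deterministic farthest-point
    initialization: groups rows of X whose vectors lie close together."""
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers],
                       axis=0)
        centers.append(X[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None],
                                axis=2).argmin(axis=1)
        centers = np.array([X[assign == j].mean(axis=0)
                            if (assign == j).any() else centers[j]
                            for j in range(k)])
    return assign


# Toy corpus: each logo described by a few Cloud Vision-style labels.
logos = [["mountain", "tree", "green"], ["peak", "forest", "green"],
         ["circuit", "chip", "blue"], ["processor", "board", "blue"]]
X = np.stack([embed_labels(ls) for ls in logos])
clusters = kmeans(X, k=2)  # one cluster id per logo
```

With real embeddings, semantically related labels produce nearby vectors, so the clusters correspond to visual/thematic groups of logos; with the random stand-in vectors here, only the mechanics of the pipeline are demonstrated.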
Across three experiments, the researchers report that their model generated stable logos of “consistently high quality.” Some were simpler than others in shape, design, or color scheme, but the team asserts that the diversity of the outputs shows the model can learn high-level features of the training data distribution.
Leveraging the power of AI to produce artwork isn’t a new idea, it’s worth noting. Botnik Studios, a graduate of Amazon’s Alexa Accelerator program, recently taught a neural network to write a satirical Coachella poster with a list of fictional band names. Prisma, a popular smartphone app, uses a machine learning technique known as style transfer to make photographs appear as though they’ve been executed in paint. And game design AI startup Promethean AI automates the process of building out virtual landscapes and interiors.