
Google today is talking for the first time about research by two of its scientists on making computers generate simple sketches using artificial intelligence (AI).

The emphasis is on creation rather than analysis, putting the work in line with the meme-friendly DeepDream technology and, more recently, the art and music generation efforts coming out of Google's Project Magenta.

Google trained a recurrent neural network (RNN) on sketches that real people made. The sketches come from Quick, Draw!, an experimental app that prompts you to draw simple things like an axe or a snail and then tries to guess what you've drawn.

“We have selected 75 classes from the raw dataset to construct the quickdraw-75 dataset. Each class consists of a training set of 70K samples, in addition to 2.5K samples each for validation and test sets,” David Ha and Douglas Eck of Google Research wrote in a new paper titled “A Neural Representation of Sketch Drawings.” The researchers added noise to their system to ensure that it doesn’t simply copy the real sketches.
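The split the paper describes is easy to make concrete. Below is a minimal, hypothetical sketch (not the authors' code) of partitioning one class's samples into the stated per-class sizes; the constant names are assumptions for illustration.

```python
# Per-class split sizes quoted in the paper for quickdraw-75.
NUM_CLASSES = 75
TRAIN_PER_CLASS = 70_000
VAL_PER_CLASS = 2_500
TEST_PER_CLASS = 2_500

def split_class(samples):
    """Split one class's samples into (train, validation, test) lists."""
    needed = TRAIN_PER_CLASS + VAL_PER_CLASS + TEST_PER_CLASS
    if len(samples) < needed:
        raise ValueError(f"need at least {needed} samples, got {len(samples)}")
    train = samples[:TRAIN_PER_CLASS]
    val = samples[TRAIN_PER_CLASS:TRAIN_PER_CLASS + VAL_PER_CLASS]
    test = samples[TRAIN_PER_CLASS + VAL_PER_CLASS:needed]
    return train, val, test

# At 75K samples per class across 75 classes, the dataset totals
# 5,625,000 sketches.
total_samples = NUM_CLASSES * (TRAIN_PER_CLASS + VAL_PER_CLASS + TEST_PER_CLASS)
```

At those sizes, quickdraw-75 works out to over five and a half million human-drawn sketches in total.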



Other researchers have previously developed approaches for having software imitate the contents of photos, but generating vector images with neural networks is less common, Ha and Eck wrote. That said, there have been some attempts to generate Chinese characters in vector format with neural networks, and Ha himself has previously experimented in this area.

Above: Sketches of gardens and owls. Image credit: paper screenshot.

Key to the new work is modeling what people actually do when they sketch, something most adults no longer think about consciously: “which direction to move, when to lift the pen up, and when to stop drawing,” Ha wrote in a blog post.

While they might not immediately lead to gains in core Google apps and services, the resulting computer sketches are fascinating to look at, and one could imagine them popping up on tote bags or smartphone wallpapers. And the underlying model could help artists produce work, and even help people learn to draw, Ha wrote.

An open source version of the model is coming, and a set of human drawings could also become publicly available, Ha and Eck wrote in the paper. Meanwhile, to advance their research, they could well incorporate a way for people to rate sketches while the system is being trained to produce more aesthetically pleasing results.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.